WorldWideScience

Sample records for single-target visual search

  1. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  2. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

    Neil Bruce

    2011-07-01

    Full Text Available In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries.

  3. Visual search deficits in amblyopia.

    Science.gov (United States)

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.

  4. Visual search of Mooney faces

    Directory of Open Access Journals (Sweden)

    Jessica Emeline Goold

    2016-02-01

    Full Text Available Faces spontaneously capture attention. However, it is unclear which special attributes of a face underlie this effect. To address this question, we investigate how gist information, specific visual properties, and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted to investigate how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images; they were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention towards a face; (2) several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection; (3) when participants were given unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention; neither can gist information alone. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.

  5. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension on perceptual processing. Our aim was to determine whether the variation of typical size between items (i.e., their size in real life) affects visual search. In two experiments, the congruency between typical size difference and perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in the visual search (Exp. 1), and noncongruency between these two differences increased reaction times in the visual search (Exp. 2). We argue that these results show that memory and perception share some resources, and that typical size differences intervene in the computation of perceptual size differences.

  6. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  7. Visualization of Pulsar Search Data

    Science.gov (United States)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.
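
    As a rough, hedged illustration of the periodicity-search idea described in this record (not the authors' actual algorithm or pipeline), the following Python sketch finds candidate periods in a dedispersed, evenly sampled time series by picking peaks in its power spectrum. The sampling interval, signal parameters, and detection criterion are invented for the example; a real search would add harmonic summing, interference excision, and dispersion/acceleration trials.

      # Minimal sketch of a Fourier-domain periodicity search (illustration only).
      # Assumes a dedispersed, evenly sampled intensity series and an ad-hoc
      # "top N peaks" criterion; real pipelines add harmonic summing,
      # interference rejection, and acceleration/dispersion trials.
      import numpy as np

      def find_candidate_periods(series, dt, n_candidates=5):
          """Return (period_seconds, power) for the strongest spectral peaks."""
          series = series - series.mean()              # remove the DC component
          power = np.abs(np.fft.rfft(series)) ** 2     # power spectrum
          freqs = np.fft.rfftfreq(len(series), d=dt)   # frequency axis in Hz
          power[0] = 0.0                               # ignore the zero-frequency bin
          strongest = np.argsort(power)[::-1][:n_candidates]
          return [(1.0 / freqs[i], power[i]) for i in strongest if freqs[i] > 0]

      if __name__ == "__main__":
          dt = 1e-4                                    # 0.1 ms sampling (assumed)
          t = np.arange(0.0, 60.0, dt)                 # one minute of data
          rng = np.random.default_rng(0)
          pulses = 0.5 * (np.sin(2 * np.pi * t / 0.0056) > 0.99)  # toy 5.6 ms pulsar
          data = pulses + rng.normal(size=t.size)      # pulses buried in noise
          for period, strength in find_candidate_periods(data, dt):
              print(f"candidate period {period * 1e3:.3f} ms, power {strength:.1f}")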

  8. Dual-target cost in visual search for multiple unfamiliar faces.

    Science.gov (United States)

    Mestry, Natalie; Menneer, Tamaryn; Cave, Kyle R; Godwin, Hayward J; Donnelly, Nick

    2017-08-01

    The efficiency of visual search for one (single-target) and either of two (dual-target) unfamiliar faces was explored to understand the manifestations of capacity and guidance limitations in face search. The visual similarity of distractor faces to target faces was manipulated using morphing (Experiments 1 and 2) and multidimensional scaling (Experiment 3). A dual-target cost was found in all experiments, evidenced by slower and less accurate search in dual- than single-target conditions. The dual-target cost was unequal across the targets, with performance being maintained on one target and reduced on the other, which we label "preferred" and "non-preferred" respectively. We calculated the capacity for each target face and show reduced capacity for representing the non-preferred target face. However, results show that the capacity for the non-preferred target can be increased when the dual-target condition is conducted after participants complete the single-target conditions. Analyses of eye movements revealed evidence for weak guidance of fixations in single-target search, and when searching for the preferred target in dual-target search. Overall, the experiments show dual-target search for faces is capacity- and guidance-limited, leading to superior search for 1 face over the other in dual-target search. However, learning faces individually may improve capacity with the second face. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Development of a Computerized Visual Search Test

    Science.gov (United States)

    Reid, Denise; Babani, Harsha; Jon, Eugenia

    2009-01-01

    Visual attention and visual search are features of visual perception essential for attending to and scanning one's environment while engaging in daily occupations. This study describes the development of a novel web-based test of visual search. The development information, including the format of the test, will be described. The test was designed…

  10. The development of organized visual search

    Science.gov (United States)

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  11. Collinearity Impairs Local Element Visual Search

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  12. Survival Processing Enhances Visual Search Efficiency.

    Science.gov (United States)

    Cho, Kit W

    2018-05-01

    Words rated for their survival relevance are remembered better than when rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.

  13. Beyond the search surface: visual search and attentional engagement.

    Science.gov (United States)

    Duncan, J; Humphreys, G

    1992-05-01

    Treisman (1991) described a series of visual search studies testing feature integration theory against an alternative (Duncan & Humphreys, 1989) in which feature and conjunction search are basically similar. Here the latter account is noted to have 2 distinct levels: (a) a summary of search findings in terms of stimulus similarities, and (b) a theory of how visual attention is brought to bear on relevant objects. Working at the 1st level, Treisman found that even when similarities were calibrated and controlled, conjunction search was much harder than feature search. The theory, however, can only really be tested at the 2nd level, because the 1st is an approximation. An account of the findings is developed at the 2nd level, based on the 2 processes of input-template matching and spreading suppression. New data show that, when both of these factors are controlled, feature and conjunction search are equally difficult. Possibilities for unification of the alternative views are considered.

  14. Visual search elicits the electrophysiological marker of visual working memory.

    Directory of Open Access Journals (Sweden)

    Stephen M Emrich

    Full Text Available BACKGROUND: Although limited in capacity, visual working memory (VWM plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA, which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. METHODOLOGY/PRINCIPAL FINDINGS: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. CONCLUSIONS/SIGNIFICANCE: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors.

  15. Visual search is modulated by action intentions

    NARCIS (Netherlands)

    Bekkering, H; Neggers, SFW

    The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target.

  16. Conditional Probability Modulates Visual Search Efficiency

    Directory of Open Access Journals (Sweden)

    Bryan Cort

    2013-10-01

    Full Text Available We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors, and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability – the likelihood of a particular color given a particular combination of two cues – varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.
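
    To make the probability structure concrete, the sketch below (a hypothetical illustration, not the authors' stimulus code) generates trials in which each target color is equally likely overall, yet the probability of a color given a particular combination of two cues is 0.9 or 0.1; the cue labels and trial counts are assumptions.

      # Illustrative trial generator for the cue/target-color contingency above.
      # Cue labels and trial counts are assumptions; only the probability
      # structure (marginal 0.5, conditional 0.9 vs. 0.1) follows the abstract.
      import random
      from collections import Counter

      def make_trials(n_per_cell=1000, seed=1):
          random.seed(seed)
          trials = []
          for cue_a in ("square", "circle"):
              for cue_b in ("square", "circle"):
                  # Matching cue pairs predict "red" strongly, mismatching ones weakly.
                  p_red = 0.9 if cue_a == cue_b else 0.1
                  for _ in range(n_per_cell):
                      color = "red" if random.random() < p_red else "green"
                      trials.append((cue_a, cue_b, color))
          return trials

      trials = make_trials()
      overall = Counter(color for _, _, color in trials)
      matching = [color for a, b, color in trials if a == b]
      print("marginal P(red)        ~", overall["red"] / len(trials))            # ~0.5
      print("P(red | matching cues) ~", matching.count("red") / len(matching))   # ~0.9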

  17. Eye Movements and Visual Search: A Bibliography,

    Science.gov (United States)

    1983-01-01

    duration and velocity. Neurology, 1975, 25, 1065-1070. EYM, SAC 40 Bard, C.; Fleury, M.; Carriere, L.; Halle, M. Analysis of Gymnastics Judges' Visual... Nodine, C.F.; Carmody, D.P.; Herman, E. Eye Movements During Search for Artistically Embedded Targets. Bulletin of the Psychonomic Society, 1979, 13

  18. Visual reinforcement shapes eye movements in visual search.

    Science.gov (United States)

    Paeye, Céline; Schütz, Alexander C; Gegenfurtner, Karl R

    2016-08-01

    We use eye movements to gain information about our visual environment; this information can indirectly be used to affect the environment. Whereas eye movements are affected by explicit rewards such as points or money, it is not clear whether the information gained by finding a hidden target has a similar reward value. Here we tested whether finding a visual target can reinforce eye movements in visual search performed in a noise background, which conforms to natural scene statistics and contains a large number of possible target locations. First we tested whether presenting the target more often in one specific quadrant would modify eye movement search behavior. Surprisingly, participants did not learn to search for the target more often in high probability areas. Presumably, participants could not learn the reward structure of the environment. In two subsequent experiments we used a gaze-contingent display to gain full control over the reinforcement schedule. The target was presented more often after saccades into a specific quadrant or a specific direction. The proportions of saccades meeting the reinforcement criteria increased considerably, and participants matched their search behavior to the relative reinforcement rates of targets. Reinforcement learning seems to serve as the mechanism to optimize search behavior with respect to the statistics of the task.

  19. Parallel coding of conjunctions in visual search.

    Science.gov (United States)

    Found, A

    1998-10-01

    Two experiments investigated whether the conjunctive nature of nontarget items influenced search for a conjunction target. Each experiment consisted of two conditions. In both conditions, the target item was a red bar tilted to the right, among white tilted bars and vertical red bars. As well as color and orientation, display items also differed in terms of size. Size was irrelevant to search in that the size of the target varied randomly from trial to trial. In one condition, the size of items correlated with the other attributes of display items (e.g., all red items were big and all white items were small). In the other condition, the size of items varied randomly (i.e., some red items were small and some were big, and some white items were big and some were small). Search was more efficient in the size-correlated condition, consistent with the parallel coding of conjunctions in visual search.

  20. One visual search, many memory searches: An eye-tracking investigation of hybrid search.

    Science.gov (United States)

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2017-09-01

    Suppose you go to the supermarket with a shopping list of 10 items held in memory. Your shopping expedition can be seen as a combination of visual search and memory search. This is known as "hybrid search." There is a growing interest in understanding how hybrid search tasks are accomplished. We used eye tracking to examine how manipulating the number of possible targets (the memory set size [MSS]) changes how observers (Os) search. We found that dwell time on each distractor increased with MSS, suggesting a memory search was being executed each time a new distractor was fixated. Meanwhile, although the rate of refixation increased with MSS, it was not nearly enough to suggest a strategy that involves repeatedly searching visual space for subgroups of the target set. These data provide a clear demonstration that hybrid search tasks are carried out via a "one visual search, many memory searches" heuristic in which Os examine items in the visual array once with a very low rate of refixations. For each item selected, Os activate a memory search that produces logarithmic response time increases with increased MSS. Furthermore, the percentage of distractors fixated was strongly modulated by the MSS: More items in the MSS led to a higher percentage of fixated distractors. Searching for more potential targets appears to significantly alter how Os approach the task, ultimately resulting in more eye movements and longer response times.
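
    The "one visual search, many memory searches" account can be summarized with a simple descriptive model in which total response time grows roughly linearly with the number of display items inspected and logarithmically with memory set size. The sketch below illustrates that functional form only; the coefficients are arbitrary placeholders, not estimates from this study.

      # Toy descriptive model: response time linear in display items inspected,
      # logarithmic in memory set size (MSS). Coefficients are placeholders.
      import math

      def predicted_rt_ms(visual_set_size, memory_set_size,
                          base=500.0, per_item=40.0, per_log_mss=30.0):
          # Each fixated item triggers a memory search whose cost grows with log2(MSS).
          per_item_cost = per_item + per_log_mss * math.log2(memory_set_size)
          return base + visual_set_size * per_item_cost

      for mss in (1, 2, 4, 8, 16):
          print(f"MSS={mss:2d}: predicted RT for a 16-item display ~ "
                f"{predicted_rt_ms(16, mss):.0f} ms")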

  1. Guided Text Search Using Adaptive Visual Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL]; Symons, Christopher T [ORNL]; Senter, James K [ORNL]; DeNap, Frank A [ORNL]

    2012-10-01

    This research demonstrates the promise of augmenting interactive visualizations with semi-supervised machine learning techniques to improve the discovery of significant associations and insights in the search and analysis of textual information. More specifically, we have developed a system called Gryffin that hosts a unique collection of techniques that facilitate individualized investigative search pertaining to an ever-changing set of analytical questions over an indexed collection of open-source documents related to critical national infrastructure. The Gryffin client hosts dynamic displays of the search results via focus+context record listings, temporal timelines, term-frequency views, and multiple coordinate views. Furthermore, as the analyst interacts with the display, the interactions are recorded and used to label the search records. These labeled records are then used to drive semi-supervised machine learning algorithms that re-rank the unlabeled search records such that potentially relevant records are moved to the top of the record listing. Gryffin is described in the context of the daily tasks encountered at the US Department of Homeland Security's Fusion Center, with whom we are collaborating in its development. The resulting system is capable of addressing the analysts' information overload that can be directly attributed to the deluge of information that must be addressed in the search and investigative analysis of textual information.
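
    As a hedged illustration of the interaction-driven re-ranking idea (a simplified stand-in, not the Gryffin algorithm), the sketch below rescores unlabeled documents by their TF-IDF cosine similarity to records the analyst's interactions have marked relevant or not relevant, then sorts them. The document snippets and labels are invented for the example.

      # Hypothetical, simplified stand-in for interaction-driven re-ranking
      # (not the Gryffin algorithm): unlabeled documents are rescored by their
      # TF-IDF cosine similarity to records the analyst implicitly labeled
      # relevant (+1) or not relevant (-1), then sorted.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      documents = [  # invented snippets for illustration
          "report of pipeline valve failure at the water treatment plant",
          "routine maintenance scheduled for the water treatment plant",
          "valve corrosion found during pipeline inspection",
          "regional agriculture output report for the year",
          "annual report on livestock and agriculture",
      ]
      labels = {0: +1, 3: -1}  # indices labeled through the analyst's interactions

      similarity = cosine_similarity(TfidfVectorizer().fit_transform(documents))

      def relevance_score(i):
          # Positively labeled neighbors pull a record up, negative ones push it down.
          return sum(sign * similarity[i, j] for j, sign in labels.items())

      unlabeled = [i for i in range(len(documents)) if i not in labels]
      for i in sorted(unlabeled, key=relevance_score, reverse=True):
          print(f"{relevance_score(i):+.3f}  {documents[i]}")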

  2. Race Guides Attention in Visual Search.

    Directory of Open Access Journals (Sweden)

    Marte Otten

    Full Text Available It is known that faces are rapidly and even unconsciously categorized into social groups (black vs. white, male vs. female. Here, I test whether preferences for specific social groups guide attention, using a visual search paradigm. In Experiment 1 participants searched displays of neutral faces for an angry or frightened target face. Black target faces were detected more efficiently than white targets, indicating that black faces attracted more attention. Experiment 2 showed that attention differences between black and white faces were correlated with individual differences in automatic race preference. In Experiment 3, using happy target faces, the attentional preference for black over white faces was eliminated. Taken together, these results suggest that automatic preferences for social groups guide attention to individuals from negatively valenced groups, when people are searching for a negative emotion such as anger or fear.

  3. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947

  4. Stable statistical representations facilitate visual search.

    Science.gov (United States)

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  5. Reader error, object recognition, and visual search

    Science.gov (United States)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  6. Adding a visualization feature to web search engines: it's time.

    Science.gov (United States)

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  7. Usability Testing of a Large, Multidisciplinary Library Database: Basic Search and Visual Search

    Directory of Open Access Journals (Sweden)

    Jody Condit Fagan

    2006-09-01

    Full Text Available Visual search interfaces have been shown by researchers to assist users with information search and retrieval. Recently, several major library vendors have added visual search interfaces or functions to their products. For public service librarians, perhaps the most critical area of interest is the extent to which visual search interfaces and text-based search interfaces support research. This study presents the results of eight full-scale usability tests of both the EBSCOhost Basic Search and Visual Search in the context of a large liberal arts university.

  8. Priming and the guidance by visual and categorical templates in visual search

    NARCIS (Netherlands)

    Wilschut, A.M.; Theeuwes, J.; Olivers, C.N.L.

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual

  9. A 'snapshot' of the visual search behaviours of medical sonographers

    OpenAIRE

    Carrigan, Ann J; Brennan, Patrick C; Pietrzyk, Mariusz; Clarke, Jillian; Chekaluk, Eugene

    2015-01-01

    Abstract Introduction: Visual search is a task that humans perform in everyday life. Whether it involves looking for a pen on a desk or a mass in a mammogram, the cognitive and perceptual processes that underpin these tasks are identical. Radiologists are experts in visual search of medical images and studies on their visual search behaviours have revealed some interesting findings with regard to diagnostic errors. In Australia, within the modality of ultrasound, sonographers perform the diag...

  10. Simulation of visual search in the natural 2-D situation

    OpenAIRE

    Blanka Borin

    2004-01-01

    The goal of this research was to imitate the process of visual search in a natural two-dimensional situation and also to investigate the influence of variable features on the speed of the visual search. The experiment was designed upon one of the most influential theories in the research field of the visual search phenomenon – The Feature Integration Theory (Treisman, 1982). Although the FIT theory claims, that in case of a larger number of synchronous targets the mechanism of attention...

  11. The prevalence effect in lateral masking and its relevance for visual search.

    Science.gov (United States)

    Geelen, B P; Wertheim, A H

    2015-04-01

    In stimulus displays with or without a single target amid 1,644 identical distractors, target prevalence was varied between 20, 50 and 80 %. Maximum gaze deviation was measured to determine the strength of lateral masking in these arrays. The results show that lateral masking was strongest in the 20 % prevalence condition, which differed significantly from both the 50 and 80 % prevalence conditions. No difference was observed between the latter two. This pattern of results corresponds to that found in the literature on the prevalence effect in visual search (stronger lateral masking corresponding to longer search times). The data add to similar findings reported earlier (Wertheim et al. in Exp Brain Res, 170:387-402, 2006), according to which the effects of many well-known factors in visual search correspond to those on lateral masking. These were the effects of set size, disjunctions versus conjunctions, display area, distractor density, the asymmetry effect (Q vs. O's) and viewing distance. The present data, taken together with those earlier findings, may lend credit to a causal hypothesis that lateral masking could be a more important mechanism in visual search than usually assumed.

  12. Reward and Attentional Control in Visual Search

    Science.gov (United States)

    Anderson, Brian A.; Wampler, Emma K.; Laurent, Patryk A.

    2015-01-01

    It has long been known that the control of attention in visual search depends both on voluntary, top-down deployment according to context-specific goals, and on involuntary, stimulus-driven capture based on the physical conspicuity of perceptual objects. Recent evidence suggests that pairing target stimuli with reward can modulate the voluntary deployment of attention, but there is little evidence that reward modulates the involuntary deployment of attention to task-irrelevant distractors. We report several experiments that investigate the role of reward learning on attentional control. Each experiment involved a training phase and a test phase. In the training phase, different colors were associated with different amounts of monetary reward. In the test phase, color was not task-relevant and participants searched for a shape singleton; in most experiments no reward was delivered in the test phase. We first show that attentional capture by physically salient distractors is magnified by a previous association with reward. In subsequent experiments we demonstrate that physically inconspicuous stimuli previously associated with reward capture attention persistently during extinction—even several days after training. Furthermore, vulnerability to attentional capture by high-value stimuli is negatively correlated across individuals with working memory capacity and positively correlated with trait impulsivity. An analysis of intertrial effects reveals that value-driven attentional capture is spatially specific. Finally, when reward is delivered at test contingent on the task-relevant shape feature, recent reward history modulates value-driven attentional capture by the irrelevant color feature. The influence of learned value on attention may provide a useful model of clinical syndromes characterized by similar failures of cognitive control, including addiction, attention-deficit/hyperactivity disorder, and obesity. PMID:23437631

  13. The target effect: visual memory for unnamed search targets.

    Science.gov (United States)

    Thomas, Mark D; Williams, Carrick C

    2014-01-01

    Search targets are typically remembered much better than other objects even when they are viewed for less time. However, targets have two advantages that other objects in search displays do not have: They are identified categorically before the search, and finding them represents the goal of the search task. The current research investigated the contributions of both of these types of information to the long-term visual memory representations of search targets. Participants completed either a predefined search or a unique-object search in which targets were not defined with specific categorical labels before searching. Subsequent memory results indicated that search target memory was better than distractor memory even following ambiguously defined searches and when the distractors were viewed significantly longer. Superior target memory appears to result from a qualitatively different representation from those of distractor objects, indicating that decision processes influence visual memory.

  14. Eye Movements Reveal How Task Difficulty Moulds Visual Search

    Science.gov (United States)

    Young, Angela H.; Hulleman, Johan

    2013-01-01

    In two experiments we investigated the relationship between eye movements and performance in visual search tasks of varying difficulty. Experiment 1 provided evidence that a single process is used for search among static and moving items. Moreover, we estimated the functional visual field (FVF) from the gaze coordinates and found that its size…

  15. Guidance of visual search by memory and knowledge.

    Science.gov (United States)

    Hollingworth, Andrew

    2012-01-01

    To behave intelligently in the world, humans must be able to find objects efficiently within the complex environments they inhabit. A growing proportion of the literature on visual search is devoted to understanding this type of natural search. In the present chapter, I review the literature on visual search through natural scenes, focusing on the role of memory and knowledge in guiding attention to task-relevant objects.

  16. Visual search in barn owls: Task difficulty and saccadic behavior.

    Science.gov (United States)

    Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2018-01-01

    How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast with few attentional deployments or slow with many, shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments, that is, their head movements, with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items. The barn owls were able to learn all of these tasks and showed serial search behavior. In a subsequent step, we analyzed how the search behavior of owls changes with search complexity. We compared the search mechanisms in these three serial searches with results from pop-out searches our group had reported earlier. Saccade amplitude shortened and fixation duration increased in difficult searches. Also, in conjunction search saccades were guided toward items with shared target features. These data suggest that during visual search, barn owls utilize mechanisms similar to those that humans use.

  17. Perceptual Dependencies in Information Visualization Assessed by Complex Visual Search

    NARCIS (Netherlands)

    Berg, Ronald van den; Cornelissen, Frans W.; Roerdink, Jos B.T.M.

    A common approach for visualizing data sets is to map them to images in which distinct data dimensions are mapped to distinct visual features, such as color, size and orientation. Here, we consider visualizations in which different data dimensions should receive equal weight and attention. Many of

  18. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  19. The roles of non-retinotopic motions in visual search

    Directory of Open Access Journals (Sweden)

    Ryohei Nakayama

    2016-06-01

    Full Text Available In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems – retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracter elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of the motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions in guiding observer attention in visual search.

  20. Numerosity estimates for attended and unattended items in visual search.

    Science.gov (United States)

    Kelley, Troy D; Cassenti, Daniel N; Marusich, Laura R; Ghirardelli, Thomas G

    2017-07-01

    The goal of this research was to examine memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, subjects were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied depending on the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants' only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once. Subjects still displayed a tendency to underestimate, indicating that the underestimation effects seen in Experiments 1A-1C were not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient search tasks. This suggests that the lower attentional demands of very efficient searches lead to less encoding of the numerosity of the distractor set.

  1. Simulation of visual search in the natural 2-D situation

    Directory of Open Access Journals (Sweden)

    Blanka Borin

    2004-08-01

    Full Text Available The goal of this research was to imitate the process of visual search in a natural two-dimensional situation and to investigate the influence of variable features on the speed of the visual search. The experiment was designed on the basis of one of the most influential theories in the research field of the visual search phenomenon – the Feature Integration Theory (Treisman, 1982). Although the FIT claims that, in the case of a larger number of synchronous targets, the mechanism of attention serially directs mental processing from one target to another, the results of our experiment have shown the possibility of not just serial but also parallel visual search. The results have also shown that the similarity between features of the target and its surroundings affects the speed of target recognition. If the features are very similar, or if there is no difference between the target and its surroundings, the visual search for the target takes longer than the search for a target whose features do not resemble its surroundings.

  2. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating our surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  3. How important is lateral masking in visual search?

    NARCIS (Netherlands)

    Wertheim, AH; Hooge, ITC; Krikke, K; Johnson, A

    Five experiments are presented, providing empirical support of the hypothesis that the sensory phenomenon of lateral masking may explain many well-known visual search phenomena that are commonly assumed to be governed by cognitive attentional mechanisms. Experiment I showed that when the same visual

  4. Changing Perspective: Zooming in and out during Visual Search

    Science.gov (United States)

    Solman, Grayden J. F.; Cheyne, J. Allan; Smilek, Daniel

    2013-01-01

    Laboratory studies of visual search are generally conducted in contexts with a static observer vantage point, constrained by a fixation cross or a headrest. In contrast, in many naturalistic search settings, observers freely adjust their vantage point by physically moving through space. In two experiments, we evaluate behavior during free vantage…

  5. Interaction between numbers and size during visual search

    NARCIS (Netherlands)

    Krause, F.; Bekkering, H.; Pratt, J.; Lindemann, O.

    2017-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit

  6. On the network-based emulation of human visual search

    NARCIS (Netherlands)

    Gerrissen, J.F.

    1991-01-01

    We describe the design of a computer emulator of human visual search. The emulator mechanism is eventually meant to support ergonomic assessment of the effect of display structure and protocol on search performance. As regards target identification and localization, it mimics a number of

  7. Identifying a "default" visual search mode with operant conditioning.

    Science.gov (United States)

    Kawahara, Jun-ichiro

    2010-09-01

    The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of the participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.

  8. Object-based target templates guide attention during visual search

    OpenAIRE

    Berggren, Nick; Eimer, Martin

    2018-01-01

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target f...

  9. Interaction between numbers and size during visual search

    OpenAIRE

    Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver

    2016-01-01

    The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numeric...

  10. Probing the Feature Map for Faces in Visual Search

    Directory of Open Access Journals (Sweden)

    Hua Yang

    2011-05-01

    Full Text Available Controversy surrounds the mechanisms underlying the pop-out effect for faces in visual search. Is there a feature map for faces? If so, does it rely on the categorical distinction between faces and nonfaces, or on image-level face semblance? To probe the feature map, we compared search efficiency for faces and for nonface stimuli with high, low, and no face semblance. First, subjects performed a visual search task with objects as distractors. Only faces popped out. Moreover, search efficiency for nonfaces correlated with the image-level face semblance of the target. In a second experiment, faces were used as distractors but nonfaces did not pop out. Interestingly, search efficiency for nonfaces was not modulated by face semblance, although searching for a face among faces was particularly difficult, reflecting a categorical boundary between nonfaces and faces. Finally, inversion and contrast negation significantly interacted with the effect of face semblance, ruling out the possibility that search efficiency solely depends on low-level features. Our study supports a parallel search for faces that is perhaps preattentive. Like other features (color, orientation, etc.), there appears to be a continuous face feature map for visual search. Our results also suggest that this map may include both image-level face semblance and face categoricity.

  11. Selection-for-action in visual search

    NARCIS (Netherlands)

    Hannus, A; Cornelissen, FW; Lindemann, O; Bekkering, H

    2005-01-01

    Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in

  12. A 'snapshot' of the visual search behaviours of medical sonographers.

    Science.gov (United States)

    Carrigan, Ann J; Brennan, Patrick C; Pietrzyk, Mariusz; Clarke, Jillian; Chekaluk, Eugene

    2015-05-01

    Introduction: Visual search is a task that humans perform in everyday life. Whether it involves looking for a pen on a desk or a mass in a mammogram, the cognitive and perceptual processes that underpin these tasks are identical. Radiologists are experts in visual search of medical images, and studies on their visual search behaviours have revealed some interesting findings with regard to diagnostic errors. In Australia, within the modality of ultrasound, sonographers perform the diagnostic scan, select images and present them to the radiologist for reporting. Therefore the visual task and potential for errors is similar to a radiologist's. Our aim was to explore and understand the detection, localisation and eye-gaze behaviours of a group of qualified sonographers. Method: We measured clinical performance and analysed diagnostic errors by presenting fifty sonographic breast images, which varied in whether cancer was present and in degree of difficulty, to a group of sonographers in their clinical workplace. For a sub-set of sonographers we obtained eye-tracking metrics such as time-to-first fixation, total visit duration and cumulative dwell time heat maps. Results: The results indicate that the sonographers' clinical performance was high and the eye-tracking metrics showed diagnostic error types similar to those found in studies on radiologist visual search. Conclusion: This study informs us about sonographer visual search patterns and highlights possible ways to improve diagnostic performance via targeted education.

  13. LoyalTracker: Visualizing Loyalty Dynamics in Search Engines.

    Science.gov (United States)

    Shi, Conglei; Wu, Yingcai; Liu, Shixia; Zhou, Hong; Qu, Huamin

    2014-12-01

    The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, this also poses a great challenge to analyze the behavior and glean insights into the complex, large data. In this paper, we introduce LoyalTracker, a visual analytics system to track user loyalty and switching behavior towards multiple search engines from the vast amount of user log data. We propose a new interactive visualization technique (flow view) based on a flow metaphor, which conveys a proper visual summary of the dynamics of user loyalty of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and the interview with domain experts are conducted to demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.

  14. Does linear separability really matter? Complex visual search is explained by simple search

    Science.gov (United States)

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distractors, but in the laboratory, it is often tested using simple displays with identical distractors. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distractors using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distractor similarity and distractor homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distractor similarity and distractor heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822
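
    The reported correspondence between simple and complex search (r = 0.91) amounts to correlating observed complex-search performance with a prediction built from simple searches. The sketch below only illustrates that general approach; it is not the authors' exact model, and the averaging rule and the rate values are assumptions made up for illustration.

    # Illustrative sketch (not the authors' exact model): predicting complex-search
    # difficulty from simple searches and correlating prediction with observation.
    # Search "rate" is taken here as 1/slope of the RT x set-size function; a mixed
    # display with several distractor types is predicted from the average of the
    # simple-search rates measured for each distractor type alone. Numbers are made up.
    import numpy as np

    simple_rates = {          # hypothetical rates from single-distractor-type searches
        "A": 0.020, "B": 0.012, "C": 0.006,
    }

    def predicted_complex_rate(distractor_types):
        """Predict the rate for a mixed display as the mean of simple-search rates."""
        return np.mean([simple_rates[d] for d in distractor_types])

    predicted = [predicted_complex_rate(["A", "B"]),
                 predicted_complex_rate(["A", "C"]),
                 predicted_complex_rate(["B", "C"])]
    observed  = [0.017, 0.012, 0.010]           # hypothetical measured complex rates

    r = np.corrcoef(predicted, observed)[0, 1]  # the paper reports r = 0.91 on real data
    print(round(r, 2))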

  15. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna eWilschut

    2014-02-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  16. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  17. Selection-for-action in visual search.

    Science.gov (United States)

    Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold

    2005-01-01

    Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we asked whether the effect is capacity demanding; we therefore manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement, i.e. the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. We therefore manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

  18. Visual search for features and conjunctions in development.

    Science.gov (United States)

    Lobaugh, N J; Cole, S; Rovet, J F

    1998-12-01

    Visual search performance was examined in three groups of children 7 to 12 years of age and in young adults. Colour and orientation feature searches and a conjunction search were conducted. Reaction time (RT) showed expected improvements in processing speed with age. Comparisons of RTs on target-present and target-absent trials were consistent with parallel search in the two feature conditions and with serial search in the conjunction condition. The RT results indicated that feature and conjunction searches were treated similarly by children and adults. However, the youngest children missed more targets at the largest array sizes, most strikingly in conjunction search. Based on an analysis of speed/accuracy trade-offs, we suggest that low target-distractor discriminability leads to an undersampling of array elements and is responsible for the high number of misses in the youngest children.

  19. Visual search of illusory contours: Shape and orientation effects

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije

    2008-01-01

    Illusory contours are a specific class of visual stimuli: configurations that are perceived as integral wholes even though they are presented as fragmented, incomplete forms. Owing to these specific features, illusory contours have gained much attention in the last decade as prototypical stimuli in investigations of the binding problem. In addition, investigations of illusory contours are related to the question of the level at which they are visually processed. Neurophysiological studies show that processing of illusory contours proceeds relatively early, at the level of V2; on the other hand, most experimental studies claim that illusory contours are perceived with the engagement of visual attention, which binds their elements into a whole percept. This research comprises two experiments in which visual search for illusory contours was based on shape and orientation. The main experimental procedure was derived from the task proposed by Bravo and Nakayama, except that instead of detection, subjects performed identification of one of two possible targets. In the first experiment subjects detected the presence of an illusory square or an illusory triangle, while in the second experiment subjects detected two different orientations of an illusory triangle. The results are interpreted in terms of visual search and feature integration theory. Besides the type of visual search task, search type proved to depend on specific features of the illusory shapes, which further complicates the theoretical interpretation of the level at which they are perceived.

  20. Visual search by chimpanzees (Pan): assessment of controlling relations.

    Science.gov (United States)

    Tomonaga, M

    1995-03-01

    Three experimentally sophisticated chimpanzees (Pan), Akira, Chloe, and Ai, were trained on visual search performance using a modified multiple-alternative matching-to-sample task in which a sample stimulus was followed by the search display containing one target identical to the sample and several uniform distractors (i.e., negative comparison stimuli were identical to each other). After they acquired this task, they were tested for transfer of visual search performance to trials in which the sample was not followed by the uniform search display (odd-item search). Akira showed positive transfer of visual search performance to odd-item search even when the display size (the number of stimulus items in the search display) was small, whereas Chloe and Ai showed a transfer only when the display size was large. Chloe and Ai used some nonrelational cues such as perceptual isolation of the target among uniform distractors (so-called pop-out). In addition to the odd-item search test, various types of probe trials were presented to clarify the controlling relations in multiple-alternative matching to sample. Akira showed a decrement of accuracy as a function of the display size when the search display was nonuniform (i.e., each "distractor" stimulus was not the same), whereas Chloe and Ai showed perfect performance. Furthermore, when the sample was identical to the uniform distractors in the search display, Chloe and Ai never selected an odd-item target, but Akira selected it when the display size was large. These results indicated that Akira's behavior was controlled mainly by relational cues of target-distractor oddity, whereas an identity relation between the sample and the target strongly controlled the performance of Chloe and Ai.

  1. Perceptual load corresponds with factors known to influence visual search.

    Science.gov (United States)

    Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P

    2013-10-01

    One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  3. Target-nontarget similarity decreases search efficiency and increases stimulus-driven control in visual search.

    Science.gov (United States)

    Barras, Caroline; Kerzel, Dirk

    2017-10-01

    Some points of criticism against the idea that attentional selection is controlled by bottom-up processing were dispelled by the attentional window account. The attentional window account claims that saliency computations during visual search are only performed for stimuli inside the attentional window. Therefore, a small attentional window may avoid attentional capture by salient distractors because it is likely that the salient distractor is located outside the window. In contrast, a large attentional window increases the chances of attentional capture by a salient distractor. Large and small attentional windows have been associated with efficient (parallel) and inefficient (serial) search, respectively. We compared the effect of a salient color singleton on visual search for a shape singleton during efficient and inefficient search. To vary search efficiency, the nontarget shapes were either similar or dissimilar with respect to the shape singleton. We found that interference from the color singleton was larger with inefficient than efficient search, which contradicts the attentional window account. While inconsistent with the attentional window account, our results are predicted by computational models of visual search. Because of target-nontarget similarity, the target was less salient with inefficient than efficient search. Consequently, the relative saliency of the color distractor was higher with inefficient than with efficient search. Accordingly, stronger attentional capture resulted. Overall, the present results show that bottom-up control by stimulus saliency is stronger when search is difficult, which is inconsistent with the attentional window account.

  4. Synaesthetic colours do not camouflage form in visual search.

    Science.gov (United States)

    Gheri, C; Chopping, S; Morgan, M J

    2008-04-07

    One of the major issues in synaesthesia research is to identify the level of processing involved in the formation of the subjective colours experienced by synaesthetes: are they perceptual phenomena or are they due to memory and association learning? To address this question, we tested whether the colours reported by a group of grapheme-colour synaesthetes (previously studied in a functional magnetic resonance imaging experiment) influenced them in a visual search task. As well as using a condition where synaesthetic colours should have aided visual search, we introduced a condition where the colours experienced by synaesthetes would be expected to make them worse than controls. We found no evidence for differences between synaesthetes and normal controls, either when colours should have helped them or when they should have hindered. We conclude that the colours reported by our population of synaesthetes are not equivalent to perceptual signals, but arise at a cognitive level where they are unable to affect visual search.

  5. Rapid Resumption of Interrupted Search Is Independent of Age-Related Improvements in Visual Search

    Science.gov (United States)

    Lleras, Alejandro; Porporino, Mafalda; Burack, Jacob A.; Enns, James T.

    2011-01-01

    In this study, 7-19-year-olds performed an interrupted visual search task in two experiments. Our question was whether the tendency to respond within 500 ms after a second glimpse of a display (the "rapid resumption" effect ["Psychological Science", 16 (2005) 684-688]) would increase with age in the same way as overall search efficiency. The…

  6. Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity.

    Science.gov (United States)

    Drew, Trafton; Boettcher, Sage E P; Wolfe, Jeremy M

    2016-02-01

    In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.

  7. Short-term perceptual learning in visual conjunction search.

    Science.gov (United States)

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  8. Adaptation to a simulated central scotoma during visual search training.

    Science.gov (United States)

    Walsh, David V; Liu, Lei

    2014-03-01

    Patients with a central scotoma usually use a preferred retinal locus (PRL) consistently in daily activities. The selection process and time course of the PRL development are not well understood. We used a gaze-contingent display to simulate an isotropic central scotoma in normal subjects while they were practicing a difficult visual search task. As compared to foveal search, initial exposure to the simulated scotoma resulted in prolonged search reaction time, many more fixations and unorganized eye movements during search. By the end of a 1782-trial training with the simulated scotoma, the search performance improved to within 25% of normal foveal search. Accompanying the performance improvement, there were also fewer fixations, fewer repeated fixations in the same area of the search stimulus and a clear tendency of using one area near the border of the scotoma to identify the search target. The results were discussed in relation to natural development of PRL in central scotoma patients and potential visual training protocols to facilitate PRL development. Published by Elsevier Ltd.

  9. More than a memory: Confirmatory visual search is not caused by remembering a visual feature.

    Science.gov (United States)

    Rajsic, Jason; Pratt, Jay

    2017-10-01

    Previous research has demonstrated a preference for positive over negative information in visual search; asking whether a target object is green biases search towards green objects, even when this entails more perceptual processing than searching non-green objects. The present study investigated whether this confirmatory search bias is due to the presence of one particular (e.g., green) color in memory during search. Across two experiments, we show that this is not the critical factor in generating a confirmation bias in search. Search slowed proportionally to the number of stimuli whose color matched the color held in memory only when the color was remembered as part of the search instructions. These results suggest that biased search for information is due to a particular attentional selection strategy, and not to memory-driven attentional biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Design and Implementation of Cancellation Tasks for Visual Search Strategies and Visual Attention in School Children

    Science.gov (United States)

    Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang

    2006-01-01

    We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…

  11. Crowded visual search in children with normal vision and children with visual impairment

    NARCIS (Netherlands)

    Huurneman, Bianca; Cox, Ralf F. A.; Vlaskamp, Björn N. S.; Boonstra, F. Nienke

    This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n = 11), children with visual impairment without nystagmus ([VI-nys], n = 11), and children with VI with accompanying nystagmus ([VI+nys], n = 26).

  12. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  13. Collinear integration affects visual search at V1.

    Science.gov (United States)

    Chow, Hiu Mei; Jingling, Li; Tseng, Chia-huei

    2013-08-29

    Perceptual grouping plays an indispensable role in figure-ground segregation and attention distribution. For example, a column pops out if it contains element bars orthogonal to uniformly oriented element bars. Jingling and Tseng (2013) have reported that contextual grouping in a column matters to visual search behavior: When a column is grouped into a collinear (snakelike) structure, a target positioned on it became harder to detect than on other noncollinear (ladderlike) columns. How and where perceptual grouping interferes with selective attention is still largely unknown. This article contributes to this little-studied area by asking whether collinear contour integration interacts with visual search before or after binocular fusion. We first identified that the previously mentioned search impairment occurs with a distractor of five or nine elements but not one element in a 9 × 9 search display. To pinpoint the site of this effect, we presented the search display with a short collinear bar (one element) to one eye and the extending collinear bars to the other eye, such that when properly fused, the combined binocular collinear length (nine elements) exceeded the critical length. No collinear search impairment was observed, implying that collinear information before binocular fusion shaped participants' search behavior, although contour extension from the other eye after binocular fusion enhanced the effect of collinearity on attention. Our results suggest that attention interacts with perceptual grouping as early as V1.

  14. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
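
    Search efficiency in this record is indexed by the slope of the RT × set size function. A minimal sketch of that computation, using hypothetical data points, is shown below.

    # Minimal sketch: indexing search efficiency as the slope of the RT x set-size
    # function, as described above. Data points are hypothetical.
    import numpy as np

    set_sizes = np.array([4, 8, 16, 32])          # number of labeled regions / items
    rts_ms    = np.array([620, 680, 790, 1030])   # mean correct target-present RTs

    slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, rts_ms, deg=1)
    print(f"search slope: {slope_ms_per_item:.1f} ms/item, intercept: {intercept_ms:.0f} ms")
    # Slopes near ~5 ms/item are read as efficient search; ~40 ms/item as inefficient.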

  15. The role of memory for visual search in scenes.

    Science.gov (United States)

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  16. Anticipation and visual search behaviour in expert soccer goalkeepers

    NARCIS (Netherlands)

    Savelsbergh, G.J.P.; van der Kamp, J.; Williams, A.M.; Ward, P.

    2005-01-01

    A novel methodological approach is presented to examine the visual search behaviours employed by expert goalkeepers during simulated penalty kick situations in soccer. Expert soccer goalkeepers were classified as successful or unsuccessful based on their performance on a film-based test of

  17. The long and the short of priming in visual search

    NARCIS (Netherlands)

    Kruijne, W.; Meeter, M.

    2015-01-01

    Memory affects visual search, as is particularly evident from findings that when target features are repeated from one trial to the next, selection is faster. Two views have emerged on the nature of the memory representations and mechanisms that cause these intertrial priming effects: independent

  18. Urban camouflage assessment through visual search and computational saliency

    NARCIS (Netherlands)

    Toet, A.; Hogervorst, M.A.

    2013-01-01

    We present a new method to derive a multiscale urban camouflage pattern from a given set of background image samples. We applied this method to design a camouflage pattern for a given (semi-arid) urban environment. We performed a human visual search experiment and a computational evaluation study to

  19. The Development of Visual Search Strategies in Biscriptal Readers.

    Science.gov (United States)

    Liow, Susan Rikard; Green, David; Tam, Melissa

    1999-01-01

    To test whether cognitive processing in bilinguals depends on script combinations and language proficiency, this study investigated the development of alphabetic and logographic visual search strategies in two kinds of biscriptals: (1) Malay-English and (2) Chinese-English readers. Results support the view that there are script implications of…

  20. Accurate expectancies diminish perceptual distraction during visual search

    Science.gov (United States)

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  1. Accurate expectancies diminish perceptual distraction during visual search

    Directory of Open Access Journals (Sweden)

    Jocelyn L Sy

    2014-05-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively spills-over to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, fMRI, and electrophysiology. Expectations were generated by a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean BOLD responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information.

  2. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an ... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  3. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an ... with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  4. Behavior and neural basis of near-optimal visual search

    Science.gov (United States)

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
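
    The optimal observer described above weights each item's evidence by its reliability and combines items with a nonlinear rule. The sketch below shows one standard formulation of such a reliability-weighted ideal observer for detecting a single target among N items; the Gaussian measurement model, the parameter values, and the equal-prior averaging rule are assumptions made for illustration and need not match the model fitted in the study.

    # Illustrative sketch of a reliability-weighted ideal observer for detecting a
    # single target among N items. Each item i yields a noisy measurement x_i with
    # its own noise sd sigma_i; the target has mean mu_T, distractors mean mu_D.
    # With the target equally likely at each location, the target-present likelihood
    # ratio is the average of the per-item ratios, so reliable items (small sigma_i)
    # dominate the decision.
    import numpy as np

    def target_present_llr(x, sigma, mu_T=10.0, mu_D=0.0):
        """Log likelihood ratio of target-present vs target-absent for one display."""
        x, sigma = np.asarray(x, float), np.asarray(sigma, float)
        # Per-item log likelihood ratio of "this item is the target" vs "distractor".
        d_i = (mu_T - mu_D) * (x - (mu_T + mu_D) / 2.0) / sigma**2
        # Average the per-item ratios in probability space (target at any one location).
        return np.log(np.mean(np.exp(d_i)))

    # Hypothetical display: four items with mixed high/low reliability measurements.
    x     = [1.2, 9.1, -0.5, 2.0]
    sigma = [4.0, 2.0,  4.0, 8.0]
    print("respond 'present'" if target_present_llr(x, sigma) > 0 else "respond 'absent'")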

  5. Chemical and visual communication during mate searching in rock shrimp.

    Science.gov (United States)

    Díaz, Eliecer R; Thiel, Martin

    2004-06-01

    Mate searching in crustaceans depends on different communicational cues, of which chemical and visual cues are most important. Herein we examined the role of chemical and visual communication during mate searching and assessment in the rock shrimp Rhynchocinetes typus. Adult male rock shrimp experience major ontogenetic changes. The terminal molt stages (named "robustus") are dominant and capable of monopolizing females during the mating process. Previous studies had shown that most females preferably mate with robustus males, but how these dominant males and receptive females find each other is uncertain, and is the question we examined herein. In a Y-maze designed to test for the importance of waterborne chemical cues, we observed that females approached the robustus male significantly more often than the typus male. Robustus males, however, were unable to locate receptive females via chemical signals. Using an experimental set-up that allowed testing for the importance of visual cues, we demonstrated that receptive females do not use visual cues to select robustus males, but robustus males use visual cues to find receptive females. Visual cues used by the robustus males were the tumults created by agitated aggregations of subordinate typus males around the receptive females. These results indicate a strong link between sexual communication and the mating system of rock shrimp in which dominant males monopolize receptive females. We found that females and males use different (sex-specific) communicational cues during mate searching and assessment, and that the sexual communication of rock shrimp is similar to that of the American lobster, where females are first attracted to the dominant males by chemical cues emitted by these males. A brief comparison between these two species shows that female behaviors during sexual communication contribute strongly to the outcome of mate searching and assessment.

  6. The Mechanisms Underlying the ASD Advantage in Visual Search.

    Science.gov (United States)

    Kaldy, Zsuzsa; Giserman, Ivy; Carter, Alice S; Blaser, Erik

    2016-05-01

    A number of studies have demonstrated that individuals with autism spectrum disorders (ASDs) are faster or more successful than typically developing control participants at various visual-attentional tasks (for reviews, see Dakin and Frith in Neuron 48:497-507, 2005; Simmons et al. in Vis Res 49:2705-2739, 2009). This "ASD advantage" was first identified in the domain of visual search by Plaisted et al. (J Child Psychol Psychiatry 39:777-783, 1998). Here we survey the findings of visual search studies from the past 15 years that contrasted the performance of individuals with and without ASD. Although there are some minor caveats, the overall consensus is that-across development and a broad range of symptom severity-individuals with ASD reliably outperform controls on visual search. The etiology of the ASD advantage has not been formally specified, but has been commonly attributed to 'enhanced perceptual discrimination', a superior ability to visually discriminate between targets and distractors in such tasks (e.g. O'Riordan in Cognition 77:81-96, 2000). As well, there is considerable evidence for impairments of the attentional network in ASD (for a review, see Keehn et al. in J Child Psychol Psychiatry 37:164-183, 2013). We discuss some recent results from our laboratory that support an attentional, rather than perceptual explanation for the ASD advantage in visual search. We speculate that this new conceptualization may offer a better understanding of some of the behavioral symptoms associated with ASD, such as over-focusing and restricted interests.

  7. Attentional Capture to a Singleton Distractor Degrades Visual Marking in Visual Search

    Directory of Open Access Journals (Sweden)

    Kenji Yamauchi

    2017-05-01

    Visual search is easier after observing some distractors in advance; it is as if the previewed distractors were excluded from the search. This effect is referred to as the preview benefit, and a memory template that visually marks the old locations of the distractors is thought to help in prioritizing the locations of newly presented items. One remaining question is whether the presence of a conspicuous item during the sequential shift of attention within the new items reduces this preview benefit. To address this issue, we combined the above preview search and a conventional visual search paradigm using a singleton distractor and examined whether the search performance was affected by the presence of the singleton. The results showed that the slope of reaction time as a function of set size became steeper in the presence of a singleton, indicating that the singleton distractor reduced the preview benefit. Furthermore, this degradation effect was positively correlated with the degree of conventional attentional capture to a singleton measured in a separate experiment with simultaneous search. These findings suggest that the mechanism of visual marking shares common attentional resources with the search process.

  8. Visual search performance in infants associates with later ASD diagnosis

    Directory of Open Access Journals (Sweden)

    C.H.M. Cheung

    2018-01-01

    An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet, it is unclear when this behaviour emerges in development and if it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without later diagnosis and low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain search performance. Superior VS specifically predicted ASD at 3 years but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD. Keywords: Visual search, Visual attention, ASD, ADHD, Infant, Familial risk

  9. Temporal stability of visual search-driven biometrics

    Science.gov (United States)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.

  10. Temporal Stability of Visual Search-Driven Biometrics

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hong-Jun [ORNL; Carmichael, Tandy [Tennessee Technological University; Tourassi, Georgia [ORNL

    2015-01-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.
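
    A hedged sketch of the HMM "fingerprint" idea described in these two records, using the hmmlearn package (an implementation choice assumed here; the study does not specify one). One Gaussian HMM is fit per participant on gaze features, and a held-out scanpath is attributed to whichever participant's model assigns it the highest log-likelihood.

    # Sketch (under the stated assumptions): HMM-based identification from gaze data.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    def fit_fingerprint(scanpaths, n_states=3):
        """scanpaths: list of (n_fixations, n_features) arrays from one participant,
        e.g. columns for fixation x, y, and duration."""
        X = np.vstack(scanpaths)
        lengths = [len(s) for s in scanpaths]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        return model

    def identify(models, scanpath):
        """Return the participant id whose HMM best explains the held-out scanpath."""
        return max(models, key=lambda pid: models[pid].score(scanpath))

    # Hypothetical usage: models = {pid: fit_fingerprint(train[pid]) for pid in train}
    # then predicted_pid = identify(models, held_out_scanpath)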

  11. Macular degeneration affects eye movement behaviour during visual search

    Directory of Open Access Journals (Sweden)

    Stefan eVan Der Stigchel

    2013-09-01

    Patients with a scotoma in their central vision (e.g. due to macular degeneration, MD) commonly adopt a strategy of directing the eyes such that the image falls onto a peripheral location on the retina. This location is referred to as the preferred retinal locus (PRL). Although previous research has investigated the characteristics of this PRL, it is unclear whether eye movement metrics are modulated by peripheral viewing with a PRL as measured during a visual search paradigm. To this end, we tested four MD patients in a visual search paradigm and contrasted their performance with that of a healthy control group and a healthy control group performing the same experiment with a simulated scotoma. The experiment contained two conditions. In the first condition the target was an unfilled circle hidden among c-shaped distractors (serial condition) and in the second condition the target was a filled circle (pop-out condition). Saccadic search latencies for the MD group were significantly longer in both conditions compared to both control groups. Results of a subsequent experiment indicated that this difference between the MD and the control groups could not be explained by a difference in target selection sensitivity. Furthermore, search behaviour of MD patients was associated with saccades of smaller amplitude towards the scotoma, an increased intersaccadic interval and an increased number of eye movements needed to locate the target. Some of these characteristics, such as the increased intersaccadic interval, were also observed in the simulation group, which indicates that these characteristics are related to the peripheral viewing itself. We suggest that the combination of the central scotoma and peripheral viewing can explain the altered search behaviour, and no behavioural evidence was found for a possible reorganization of the visual system associated with the use of a PRL. Thus the switch from a fovea-based to a PRL-based reference frame impairs search

  12. Bottom-up guidance in visual search for conjunctions.

    Science.gov (United States)

    Proulx, Michael J

    2007-02-01

    Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and bottom-up processes in conjunction search. The role of bottom-up processing was assayed by inclusion of an irrelevant-size singleton in a search for a conjunction of color and orientation. One object was uniquely larger on each trial, with chance probability of coinciding with the target; thus, the irrelevant feature of size was not predictive of the target's location. Participants searched more efficiently for the target when it was also the size singleton, and they searched less efficiently for the target when a nontarget was the size singleton. Although a conjunction target cannot be detected on the basis of bottom-up processing alone, participants used search strategies that relied significantly on bottom-up guidance in finding the target, resulting in interference from the irrelevant-size singleton.

  13. Prior knowledge of category size impacts visual search.

    Science.gov (United States)

    Wu, Rachel; McGee, Brianna; Echiverri, Chelsea; Zinszer, Benjamin D

    2018-03-30

    Prior research has shown that category search can be similar to one-item search (as measured by the N2pc ERP marker of attentional selection) for highly familiar, smaller categories (e.g., letters and numbers) because the finite set of items in a category can be grouped into one unit to guide search. Other studies have shown that larger, more broadly defined categories (e.g., healthy food) also can elicit N2pc components during category search, but the amplitude of these components is typically attenuated. Two experiments investigated whether the perceived size of a familiar category impacts category and exemplar search. We presented participants with 16 familiar company logos: 8 from a smaller category (social media companies) and 8 from a larger category (entertainment/recreation manufacturing companies). The ERP results from Experiment 1 revealed that, in a two-item search array, search was more efficient for the smaller category of logos compared to the larger category. In a four-item search array (Experiment 2), where two of the four items were placeholders, search was largely similar between the category types, but there was more attentional capture by nontarget members from the same category as the target for smaller rather than larger categories. These results support a growing literature on how prior knowledge of categories affects attentional selection and capture during visual search. We discuss the implications of these findings in relation to assessing cognitive abilities across the lifespan, given that prior knowledge typically increases with age. © 2018 Society for Psychophysiological Research.

  14. Crowded visual search in children with normal vision and children with visual impairment.

    Science.gov (United States)

    Huurneman, Bianca; Cox, Ralf F A; Vlaskamp, Björn N S; Boonstra, F Nienke

    2014-03-01

    This study investigates the influence of oculomotor control, crowding, and attentional factors on visual search in children with normal vision ([NV], n=11), children with visual impairment without nystagmus ([VI-nys], n=11), and children with VI with accompanying nystagmus ([VI+nys], n=26). Exclusion criteria for children with VI were: multiple impairments and visual acuity poorer than 20/400 or better than 20/50. Three search conditions were presented: a row with homogeneous distractors, a matrix with homogeneous distractors, and a matrix with heterogeneous distractors. Element spacing was manipulated in 5 steps from 2 to 32 minutes of arc. Symbols were sized 2 times the threshold acuity to guarantee visibility for the VI groups. During simple row and matrix search with homogeneous distractors children in the VI+nys group were less accurate than children with NV at smaller spacings. Group differences were even more pronounced during matrix search with heterogeneous distractors. Search times were longer in children with VI compared to children with NV. The more extended impairments during serial search reveal greater dependence on oculomotor control during serial compared to parallel search. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Who should be searching? Differences in personality can affect visual search accuracy

    OpenAIRE

    Biggs, A. T.; Clark, K.; Mitroff, S. R.

    2017-01-01

    Visual search is an everyday task conducted in a wide variety of contexts. Some searches are mundane, such as finding a beverage in the refrigerator, and some have life-or-death consequences, such as finding improvised explosives at a security checkpoint or within a combat zone. Prior work has shown numerous influences on search, including “bottom-up” (physical stimulus attributes) and “top-down” factors (task-relevant or goal-driven aspects). Recent work has begun to focus on “observer-speci...

  16. Synaesthetic colours do not camouflage form in visual search

    OpenAIRE

    Gheri, C; Chopping, S; Morgan, M.J

    2008-01-01

    One of the major issues in synaesthesia research is to identify the level of processing involved in the formation of the subjective colours experienced by synaesthetes: are they perceptual phenomena or are they due to memory and association learning? To address this question, we tested whether the colours reported by a group of grapheme-colour synaesthetes (previously studied in a functional magnetic resonance imaging experiment) influenced them in a visual search task. As well as using a co...

  17. Electrophysiological measurement of information flow during visual search

    OpenAIRE

    Cosman, Joshua D.; Arita, Jason T.; Ianni, Julianna D.; Woodman, Geoffrey F.

    2015-01-01

    The temporal relationship between different stages of cognitive processing is long-debated. This debate is ongoing, primarily because it is often difficult to measure the time course of multiple cognitive processes simultaneously. We employed a manipulation that allowed us to isolate ERP components related to perceptual processing, working memory, and response preparation, and then examined the temporal relationship between these components while observers performed a visual search task. We f...

  18. Visual Fashion-Product Search at SK Planet

    OpenAIRE

    Kim, Taewan; Kim, Seyeong; Na, Sangil; Kim, Hayoon; Kim, Moonki; Jeon, Byoung-Ki

    2016-01-01

    We build a large-scale visual search system which finds similar product images given a fashion item. Defining similarity among arbitrary fashion products remains a challenging problem, especially as there is no exact ground truth. To resolve this problem, we define more than 90 fashion-related attributes, and combinations of these attributes can represent thousands of unique fashion styles. The fashion attributes are one of the ingredients to define semantic similarity among fashion-product im...

  19. Visual Information and Support Surface for Postural Control in Visual Search Task.

    Science.gov (United States)

    Huang, Chia-Chun; Yang, Chih-Mei

    2016-10-01

    When standing on a reduced support surface, people increase their reliance on visual information to control posture. This assertion was tested in the current study. The effects of imposed motion and support surface on postural control during visual search were investigated. Twelve participants (aged 21 ± 1.8 years; six men and six women) stood on a reduced support surface (45% base of support). In a room that moved back and forth along the anteroposterior axis, participants performed visual search for a given letter in an article. Postural sway variability and head-room coupling were measured. The results of head-room coupling, but not postural sway, supported the assertion that people increase reliance on visual information when standing on a reduced support surface. Whether standing on a whole or reduced surface, people stabilized their posture to perform the visual search tasks. Compared to a fixed target, searching on a hand-held target showed greater head-room coupling when standing on a reduced surface. © The Author(s) 2016.

  20. Reading and visual search: a developmental study in normal children.

    Directory of Open Access Journals (Sweden)

    Magali Seassau

    Full Text Available Studies dealing with developmental aspects of binocular eye movement behaviour during reading are scarce. In this study we have explored binocular strategies during reading and during visual search tasks in a large population of normal young readers. Binocular eye movements were recorded using an infrared video-oculography system in sixty-nine children (aged 6 to 15) and in a group of 10 adults (aged 24 to 39). The main findings are (i) in both tasks the number of progressive saccades (to the right) and regressive saccades (to the left) decreases with age; (ii) the amplitude of progressive saccades increases with age in the reading task only; (iii) in both tasks, the duration of fixations as well as the total duration of the task decreases with age; (iv) in both tasks, the amplitude of disconjugacy recorded during and after the saccades decreases with age; (v) children are significantly more accurate in reading than in visual search after 10 years of age. Data reported here confirms and expands previous studies on children's reading. The new finding is that younger children show poorer coordination than adults, both while reading and while performing a visual search task. Both reading skills and binocular saccades coordination improve with age and children reach a similar level to adults after the age of 10. This finding is most likely related to the fact that learning mechanisms responsible for saccade yoking develop during childhood until adolescence.

  1. The influence of attention, learning, and motivation on visual search.

    Science.gov (United States)

    Dodd, Michael D; Flowers, John H

    2012-01-01

    The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended to both commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and post docs) has provided a chapter which both summarizes and expands on their original presentations. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought that this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters.

  2. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?

    OpenAIRE

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that ...

  3. Visual search guidance is best after a short delay.

    Science.gov (United States)

    Schmidt, Joseph; Zelinsky, Gregory J

    2011-03-25

    Search displays are typically presented immediately after a target cue, but in the real-world, delays often exist between target designation and search. Experiments 1 and 2 asked how search guidance changes with delay. Targets were cued using a picture or text label, each for 3000ms, followed by a delay up to 9000ms before the search display. Search stimuli were realistic objects, and guidance was quantified using multiple eye movement measures. Text-based cues showed a non-significant trend towards greater guidance following any delay relative to a no-delay condition. However, guidance from a pictorial cue increased sharply 300-600ms after preview offset. Experiment 3 replicated this guidance enhancement using shorter preview durations while equating the time from cue onset to search onset, demonstrating that the guidance benefit is linked to preview offset rather than a more complete encoding of the target. Experiment 4 showed that enhanced guidance persists even with a mask flashed at preview offset, suggesting an explanation other than visual priming. We interpret our findings as evidence for the rapid consolidation of target information into a guiding representation, which attains its maximum effectiveness shortly after preview offset. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Lifespan changes in attention revisited: Everyday visual search.

    Science.gov (United States)

    Brennan, Allison A; Bruderer, Alison J; Liu-Ambrose, Teresa; Handy, Todd C; Enns, James T

    2017-06-01

    This study compared visual search under everyday conditions among participants across the life span (healthy participants in 4 groups, with average ages of 6 years, 8 years, 22 years, and 75 years, and 1 group averaging 73 years with a history of falling). The task involved opening a door and stepping into a room to find 1 of 4 everyday objects (apple, golf ball, coffee can, toy penguin) visible on shelves. The background for this study included 2 well-cited laboratory studies that pointed to different cognitive mechanisms underlying each end of the U-shaped pattern of visual search over the life span (Hommel et al., 2004; Trick & Enns, 1998). The results recapitulated some of the main findings of the laboratory study (e.g., a U-shaped function, dissociable factors for maturation and aging), but there were several unique findings. These included large differences in the baseline salience of common objects at different ages, visual eccentricity effects that were unique to aging, and visual field effects that interacted strongly with age. These findings highlight the importance of studying cognitive processes in more natural settings, where factors such as personal relevance, life history, and bodily contributions to cognition (e.g., limb, head, and body movements) are more readily revealed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Automatic guidance of attention during real-world visual search.

    Science.gov (United States)

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.

  6. Automatic guidance of attention during real-world visual search

    Science.gov (United States)

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  7. Visual working memory simultaneously guides facilitation and inhibition during visual search.

    Science.gov (United States)

    Dube, Blaire; Basciano, April; Emrich, Stephen M; Al-Aidroos, Naseem

    2016-07-01

    During visual search, visual working memory (VWM) supports the guidance of attention in two ways: It stores the identity of the search target, facilitating the selection of matching stimuli in the search array, and it maintains a record of the distractors processed during search so that they can be inhibited. In two experiments, we investigated whether the full contents of VWM can be used to support both of these abilities simultaneously. In Experiment 1, participants completed a preview search task in which (a) a subset of search distractors appeared before the remainder of the search items, affording participants the opportunity to inhibit them, and (b) the search target varied from trial to trial, requiring the search target template to be maintained in VWM. We observed the established signature of VWM-based inhibition (reduced ability to ignore previewed distractors when the number of distractors exceeds VWM's capacity), suggesting that VWM can serve this role while also representing the target template. In Experiment 2, we replicated Experiment 1, but added to the search displays a singleton distractor that sometimes matched the color (a task-irrelevant feature) of the search target, to evaluate capture. We again observed the signature of VWM-based preview inhibition along with attentional capture by (and, thus, facilitation of) singletons matching the target template. These findings indicate that more than one VWM representation can bias attention at a time, and that these representations can separately affect selection through either facilitation or inhibition, placing constraints on existing models of the VWM-based guidance of attention.

  8. Visual Search Performance in Patients with Vision Impairment: A Systematic Review.

    Science.gov (United States)

    Senger, Cassia; Margarido, Maria Rita Rodrigues Alves; De Moraes, Carlos Gustavo; De Fendi, Ligia Issa; Messias, André; Paula, Jayter Silva

    2017-11-01

    Patients with visual impairment are constantly facing challenges to achieve an independent and productive life, which depends upon both a good visual discrimination and search capacities. Given that visual search is a critical skill for several daily tasks and could be used as an index of the overall visual function, we investigated the relationship between vision impairment and visual search performance. A comprehensive search was undertaken using electronic PubMed, EMBASE, LILACS, and Cochrane databases from January 1980 to December 2016, applying the following terms: "visual search", "visual search performance", "visual impairment", "visual exploration", "visual field", "hemianopia", "search time", "vision lost", "visual loss", and "low vision". Two hundred seventy six studies from 12,059 electronic database files were selected, and 40 of them were included in this review. Studies included participants of all ages, both sexes, and the sample sizes ranged from 5 to 199 participants. Visual impairment was associated with worse visual search performance in several ophthalmologic conditions, which were either artificially induced, or related to specific eye and neurological diseases. This systematic review details all the described circumstances interfering with visual search tasks, highlights the need for developing technical standards, and outlines patterns for diagnosis and therapy using visual search capabilities.

  9. Information-Limited Parallel Processing in Difficult Heterogeneous Covert Visual Search

    Science.gov (United States)

    Dosher, Barbara Anne; Han, Songmei; Lu, Zhong-Lin

    2010-01-01

    Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited capacity parallel versus serial…

  10. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    Science.gov (United States)

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  11. Method for the irradiation of single targets

    International Nuclear Information System (INIS)

    Krimmel, E.; Dullnig, H.

    1977-01-01

    The invention pertains to a system for the irradiation of single targets with particle beams. The targets all have frames around them. The system consists of an automatic advance leading into a high-vacuum chamber, and a positioning element which guides one target after the other into the irradiation position, at right angles to the automatic advance, and back into the automatic advance after irradiation. (GSCH) [de

  12. Getting satisfied with "satisfaction of search": How to measure errors during multiple-target visual search.

    Science.gov (United States)

    Biggs, Adam T

    2017-07-01

    Visual search studies are common in cognitive psychology, and the results generally focus upon accuracy, response times, or both. Most research has focused upon search scenarios where no more than 1 target will be present for any single trial. However, if multiple targets can be present on a single trial, it introduces an additional source of error because the found target can interfere with subsequent search performance. These errors have been studied thoroughly in radiology for decades, although their emphasis in cognitive psychology studies has been more recent. One particular issue with multiple-target search is that these subsequent search errors (i.e., specific errors which occur following a found target) are measured differently by different studies. There is currently no guidance as to which measurement method is best or what impact different measurement methods could have upon various results and conclusions. The current investigation provides two efforts to address these issues. First, the existing literature is reviewed to clarify the appropriate scenarios where subsequent search errors could be observed. Second, several different measurement methods are used with several existing datasets to contrast and compare how each method would have affected the results and conclusions of those studies. The evidence is then used to provide appropriate guidelines for measuring multiple-target search errors in future studies.

  13. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    Science.gov (United States)

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310

  14. Visual search attentional bias modification reduced social phobia in adolescents.

    Science.gov (United States)

    De Voogd, E L; Wiers, R W; Prins, P J M; Salemink, E

    2014-06-01

    An attentional bias for negative information plays an important role in the development and maintenance of (social) anxiety and depression, which are highly prevalent in adolescence. Attention Bias Modification (ABM) might be an interesting tool in the prevention of emotional disorders. The current study investigated whether visual search ABM might affect attentional bias and emotional functioning in adolescents. A visual search task was used as a training paradigm; participants (n = 16 adolescents, aged 13-16) had to repeatedly identify the only smiling face in a 4 × 4 matrix of negative emotional faces, while participants in the control condition (n = 16) were randomly allocated to one of three placebo training versions. An assessment version of the task was developed to directly test whether attentional bias changed due to the training. Self-reported anxiety and depressive symptoms and self-esteem were measured pre- and post-training. After two sessions of training, the ABM group showed a significant decrease in attentional bias for negative information and self-reported social phobia, while the control group did not. There were no effects of training on depressive mood or self-esteem. No correlation between attentional bias and social phobia was found, which raises questions about the validity of the attentional bias assessment task. Also, the small sample size precludes strong conclusions. Visual search ABM might be beneficial in changing attentional bias and social phobia in adolescents, but further research with larger sample sizes and longer follow-up is needed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.

    Science.gov (United States)

    Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno

    2012-09-01

    The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
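
    A toy sketch of the wildcard-retrieval idea described above: a single-word wildcard is expanded against a phrase-frequency table and the matching alternatives are ranked by commonness. The phrase table, query syntax, and ranking below are simplifications for illustration and are not NETSPEAK's actual engine or API.

        import re

        # Toy phrase-frequency table standing in for a large n-gram corpus.
        phrase_counts = {
            "waiting for the results": 5200,
            "waiting for the outcome": 1800,
            "waiting for the verdict": 950,
            "waiting on the results": 400,
        }

        def wildcard_search(query: str, counts: dict[str, int]):
            """Expand a '?' wildcard (exactly one word) into a regex and rank
            matching phrases by frequency, most common first."""
            pattern = re.compile("^" + re.escape(query).replace(r"\?", r"\w+") + "$")
            hits = [(phrase, n) for phrase, n in counts.items() if pattern.match(phrase)]
            return sorted(hits, key=lambda item: -item[1])

        print(wildcard_search("waiting for the ?", phrase_counts))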

  16. Size matters: large objects capture attention in visual search.

    Science.gov (United States)

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied that criteria. Here visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  17. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Visual search performance in infants associates with later ASD diagnosis.

    Science.gov (United States)

    Cheung, C H M; Bedford, R; Johnson, M H; Charman, T; Gliga, T

    2018-01-01

    An enhanced ability to detect visual targets amongst distractors, known as visual search (VS), has often been documented in Autism Spectrum Disorders (ASD). Yet, it is unclear when this behaviour emerges in development and if it is specific to ASD. We followed up infants at high and low familial risk for ASD to investigate how early VS abilities link to later ASD diagnosis, the potential underlying mechanisms of this association and the specificity of superior VS to ASD. Clinical diagnosis of ASD as well as dimensional measures of ASD, attention-deficit/hyperactivity disorder (ADHD) and anxiety symptoms were ascertained at 3 years. At 9 and 15 months, but not at age 2 years, high-risk children who later met clinical criteria for ASD (HR-ASD) had better VS performance than those without later diagnosis and low-risk controls. Although HR-ASD children were also more attentive to the task at 9 months, this did not explain search performance. Superior VS specifically predicted 3 year-old ASD but not ADHD or anxiety symptoms. Our results demonstrate that atypical perception and core ASD symptoms of social interaction and communication are closely and selectively associated during early development, and suggest causal links between perceptual and social features of ASD. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  19. Visual search of cyclic spatio-temporal events

    Science.gov (United States)

    Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire

    2018-05-01

    The analysis of spatio-temporal events, and especially of relationships between their different dimensions (space, time, thematic attributes), can be done with geovisualization interfaces. But few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural or social events). Time Coil and Time Wave diagrams represent both linear time and cyclic time. By introducing a cyclic temporal scale, these diagrams may highlight the cyclic characteristics of spatio-temporal events. However, the settable cyclic temporal scales are limited to usual durations such as days or months. Because of that, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not support a visual search for cyclic events. Nor do they make it possible to identify relationships between the cyclic behavior of events and their spatial features, and in particular to identify localised cyclic events. The lack of means to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on an extension of Time Coil and Time Wave, that supports a visual search for cyclic events by allowing any possible duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach to push the representation of cyclic time into the map itself, in order to improve the analysis of relationships between space and the cyclic behavior of events.
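
    A minimal sketch of the core idea of a freely settable cyclic temporal scale: timestamps are mapped to a phase within an arbitrary user-chosen period, so events that reappear with an unusual period line up at the same phase when plotted. The function names and the 17-day example period are illustrative assumptions, not part of the authors' tool.

        import math
        from datetime import datetime

        def cyclic_phase(timestamp: datetime, period_seconds: float, origin: datetime) -> float:
            """Map a timestamp onto a cyclic scale of arbitrary period.

            Returns a phase in [0, 1): events recurring every `period_seconds`
            fall at roughly the same phase, so they cluster visually when the
            phase is drawn as an angle or a position along a cyclic axis."""
            elapsed = (timestamp - origin).total_seconds()
            return (elapsed % period_seconds) / period_seconds

        def phase_to_angle(phase: float) -> float:
            """Convert a [0, 1) phase to an angle in radians for a circular plot."""
            return 2.0 * math.pi * phase

        # Example: events recurring roughly every 17 days, an "unusual" period
        # that day- or month-based cyclic scales would not reveal.
        origin = datetime(2018, 1, 1)
        events = [datetime(2018, 1, 3), datetime(2018, 1, 20), datetime(2018, 2, 6)]
        period = 17 * 24 * 3600  # seconds
        for event in events:
            print(event.date(), round(cyclic_phase(event, period, origin), 3))  # all ~0.118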

  20. Searching for the right word: Hybrid visual and memory search for words.

    Science.gov (United States)

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase, constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order

  1. Visual search performance among persons with schizophrenia as a function of target eccentricity.

    Science.gov (United States)

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2010-03-01

    The current study investigated one possible mechanism of impaired visual attention among patients with schizophrenia: a reduced visual span. Visual span is the region of the visual field from which one can extract information during a single eye fixation. This study hypothesized that schizophrenia-related visual search impairment is mediated, in part, by a smaller visual span. To test this hypothesis, 23 patients with schizophrenia and 22 healthy controls completed a visual search task where the target was pseudorandomly presented at different distances from the center of the display. Response times were analyzed as a function of search condition (feature vs. conjunctive), display size, and target eccentricity. Consistent with previous reports, patient search times were more adversely affected as the number of search items increased in the conjunctive search condition. Importantly, however, patients' conjunctive search times were also impacted to a greater degree by target eccentricity. Moreover, a significant impairment in patients' visual search performance was only evident when targets were more eccentric, and their performance was more similar to healthy controls when the target was located closer to the center of the search display. These results support the hypothesis that a narrower visual span may underlie impaired visual search performance among patients with schizophrenia. Copyright 2010 APA, all rights reserved

  2. "Hot" Facilitation of "Cool" Processing: Emotional Distraction Can Enhance Priming of Visual Search

    Science.gov (United States)

    Kristjansson, Arni; Oladottir, Berglind; Most, Steven B.

    2013-01-01

    Emotional stimuli often capture attention and disrupt effortful cognitive processing. However, cognitive processes vary in the degree to which they require effort. We investigated the impact of emotional pictures on visual search and on automatic priming of search. Observers performed visual search after task-irrelevant neutral or emotionally…

  3. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    Science.gov (United States)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye

  4. Improvement in Visual Search with Practice : Mapping Learning-Related Changes in Neurocognitive Stages of Processing

    NARCIS (Netherlands)

    Clark, Kait; Appelbaum, L. Gregory; van den Berg, Berry; Mitroff, Stephen R.; Woldorff, Marty G.

    2015-01-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search

  5. Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)

    Science.gov (United States)

    Hollingworth, Andrew

    2012-01-01

    Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…

  6. Does prism adaptation affect visual search in spatial neglect patients: A systematic review.

    Science.gov (United States)

    De Wit, Liselotte; Ten Brink, Antonia F; Visser-Meily, Johanna M A; Nijboer, Tanja C W

    2018-03-01

    Prism adaptation (PA) is a widely used intervention for (visuo-)spatial neglect. PA-induced improvements can be assessed by visual search tasks. It remains unclear which outcome measures are the most sensitive for the effects of PA in neglect. In this review, we aimed to evaluate PA effects on visual search measures. A systematic literature search was completed regarding PA intervention studies focusing on patients with neglect using visual search tasks. Information about study content and effectiveness was extracted. Out of 403 identified studies, 30 met the inclusion criteria. The quality of the studies was evaluated: Rankings were moderate-to-high for 7, and low for 23 studies. As feature search was assessed in only five studies, all of low-to-moderate ranking, we were limited in drawing firm conclusions about the PA effect on feature search. All moderate-to-high-ranking studies investigated cancellation by measuring only omissions or hits. These studies found an overall improvement after PA. Measuring perseverations and total task duration provides more specific information about visual search. The two (low ranking) studies that measured this found an improvement after PA on perseverations and duration (while accuracy improved for one study and remained the same for the other). This review suggests there is an overall effect of PA on visual search, although complex visual search tasks and specific visual search measures are lacking. Suggestions for search measures that give insight into subcomponents of visual search are provided for future studies, such as perseverations, search path intersections, search consistency and using a speed-accuracy trade-off. © 2016 The British Psychological Society.

  7. Exploiting visual search theory to infer social interactions

    Science.gov (United States)

    Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu

    2013-03-01

    In this paper we propose a new method to infer human social interactions using typical techniques adopted in literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information has been acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into an unique array, and clustered using the K-means algorithm. The clusters are reorganized using a second larger temporal window into a Bag Of Words framework, so as to build the feature vector that will feed the SVM classifier.
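
    A schematic sketch of the pipeline described above: proxemics signals over a sliding window, DFT-magnitude features, K-means clustering, a bag-of-words histogram over a second, larger window, and an SVM classifier. All window sizes, parameter values, and the synthetic signals and labels are assumptions for illustration, not the authors' implementation.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def window_features(distances, o_space_synergy, win=32, step=8):
            """DFT-magnitude features over sliding windows of two proxemics signals."""
            feats = []
            for start in range(0, len(distances) - win + 1, step):
                d = np.abs(np.fft.rfft(distances[start:start + win]))
                o = np.abs(np.fft.rfft(o_space_synergy[start:start + win]))
                feats.append(np.concatenate([d, o]))
            return np.array(feats)

        def bag_of_words(window_feats, kmeans, bag_size=10):
            """Cluster window features, then histogram cluster labels over larger windows."""
            labels = kmeans.predict(window_feats)
            bags = []
            for start in range(0, len(labels) - bag_size + 1, bag_size):
                hist = np.bincount(labels[start:start + bag_size], minlength=kmeans.n_clusters)
                bags.append(hist / hist.sum())
            return np.array(bags)

        # Toy run with synthetic signals standing in for tracker output.
        rng = np.random.default_rng(0)
        dist = rng.normal(1.5, 0.3, 1000)   # inter-subject distance over time (m)
        osyn = rng.normal(0.5, 0.1, 1000)   # O-space synergy measure over time
        feats = window_features(dist, osyn)
        kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(feats)
        X = bag_of_words(feats, kmeans)
        y = np.arange(len(X)) % 2           # toy intentional vs. casual labels
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.score(X, y))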

  8. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search

    NARCIS (Netherlands)

    de Vries, I.E.J.; van Driel, J.; Olivers, C.N.L.

    2017-01-01

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively,

  9. The role of object categories in hybrid visual and memory search

    Science.gov (United States)

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054

  10. The effect of search condition and advertising type on visual attention to Internet advertising.

    Science.gov (United States)

    Kim, Gho; Lee, Jang-Han

    2011-05-01

    This research was conducted to examine the level of consumers' visual attention to Internet advertising. It was predicted that consumers' search type would influence visual attention to advertising. Specifically, it was predicted that more attention to advertising would be attracted in the exploratory search condition than in the goal-directed search condition. It was also predicted that there would be a difference in visual attention depending on the advertisement type (advertising type: text vs. pictorial advertising). An eye tracker was used for measurement. Results revealed that search condition and advertising type influenced advertising effectiveness.

  11. Person perception informs understanding of cognition during visual search.

    Science.gov (United States)

    Brennan, Allison A; Watson, Marcus R; Kingstone, Alan; Enns, James T

    2011-08-01

    Does person perception--the impressions we form from watching others--hold clues to the mental states of people engaged in cognitive tasks? We investigated this with a two-phase method: In Phase 1, participants searched on a computer screen (Experiment 1) or in an office (Experiment 2); in Phase 2, other participants rated the searchers' video-recorded behavior. The results showed that blind raters are sensitive to individual differences in search proficiency and search strategy, as well as to environmental factors affecting search difficulty. Also, different behaviors were linked to search success in each setting: Eye movement frequency predicted successful search on a computer screen; head movement frequency predicted search success in an office. In both settings, an active search strategy and positive emotional expressions were linked to search success. These data indicate that person perception informs cognition beyond the scope of performance measures, offering the potential for new measurements of cognition that are both rich and unobtrusive.

  12. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    Science.gov (United States)

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  13. How visual search relates to visual diagnostic performance : a narrative systematic review of eye-tracking research in radiology

    NARCIS (Netherlands)

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; ten Cate, Olle

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review

  14. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Full Text Available Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search

  15. The spatially global control of attentional target selection in visual search

    OpenAIRE

    Berggren, Nick; Jenkins, M.; McCants, C.W.; Eimer, Martin

    2017-01-01

    Glyn Humphreys and his co-workers have made numerous important theoretical and empirical contributions to research on visual search. They have introduced the concept of attentional target templates and investigated the nature of these templates and how they are involved in the control of search performance. In the experiments reported here, we investigated whether feature-specific search templates for particular colours can guide target selection independently for different regions of visual s...

  16. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4

    OpenAIRE

    Braun, Jochen

    1994-01-01

    In more than one respect, visual search for the most salient or the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in ...

  17. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    Science.gov (United States)

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  18. Looking sharp: Becoming a search template boosts precision and stability in visual working memory.

    Science.gov (United States)

    Rajsic, Jason; Ouslis, Natasha E; Wilson, Daryl E; Pratt, Jay

    2017-08-01

    Visual working memory (VWM) plays a central role in visual cognition, and current work suggests that there is a special state in VWM for items that are the goal of visual searches. However, whether the quality of memory for target templates differs from memory for other items in VWM is currently unknown. In this study, we measured the precision and stability of memory for search templates and accessory items to determine whether search templates receive representational priority in VWM. Memory for search templates exhibited increased precision and probability of recall, whereas accessory items were remembered less often. Additionally, while memory for Templates showed benefits when instances of the Template appeared in search, this benefit was not consistently observed for Accessory items when they appeared in search. Our results show that becoming a search template can substantially affect the quality of a representation in VWM.

  19. Interactions of visual odometry and landmark guidance during food search in honeybees

    NARCIS (Netherlands)

    Vladusich, T; Hemmi, JM; Srinivasan, MV; Zeil, J

    How do honeybees use visual odometry and goal-defining landmarks to guide food search? In one experiment, bees were trained to forage in an optic-flow-rich tunnel with a landmark positioned directly above the feeder. Subsequent food-search tests indicated that bees searched much more accurately when

  20. Choosing colors for map display icons using models of visual search.

    Science.gov (United States)

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
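
    A minimal sketch of the optimization idea described above: predict search time from target-distractor color distinctiveness and eccentricity, then pick the candidate icon color with the lowest prediction. The model form, weights, and RGB color metric here are assumptions for illustration; the published model was parameterized from search data and operates on digital images of the display rather than raw color triples.

        import math

        def color_distance(c1, c2):
            """Euclidean distance between RGB colors (a crude stand-in for a
            perceptual color-difference metric)."""
            return math.dist(c1, c2)

        def predicted_search_time(icon_color, distractor_colors, eccentricity_deg,
                                  base_s=0.4, k_color=1.2, k_ecc=0.05):
            """Assumed model: search time falls as the icon becomes more distinct
            from its distractors and rises with eccentricity. Weights are illustrative."""
            min_dist = min(color_distance(icon_color, d) for d in distractor_colors)
            distinctiveness = min_dist / math.sqrt(3 * 255 ** 2)  # normalize to [0, 1]
            return base_s + k_color * (1.0 - distinctiveness) + k_ecc * eccentricity_deg

        def choose_icon_color(candidates, distractor_colors, eccentricity_deg):
            """Pick the candidate color with the lowest predicted search time."""
            return min(candidates,
                       key=lambda c: predicted_search_time(c, distractor_colors, eccentricity_deg))

        map_background_colors = [(34, 139, 34), (0, 105, 148), (210, 180, 140)]
        candidates = [(255, 0, 0), (255, 255, 0), (128, 128, 128)]
        print(choose_icon_color(candidates, map_background_colors, eccentricity_deg=8))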

  1. High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search

    Science.gov (United States)

    Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.

    2010-01-01

    Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…

  2. Searching for Signs, Symbols, and Icons: Effects of Time of Day, Visual Complexity, and Grouping

    Science.gov (United States)

    McDougall, Sine; Tyrer, Victoria; Folkard, Simon

    2006-01-01

    Searching for icons, symbols, or signs is an integral part of tasks involving computer or radar displays, head-up displays in aircraft, or attending to road traffic signs. Icons therefore need to be designed to optimize search times, taking into account the factors likely to slow down visual search. Three factors likely to adversely affect visual…

  3. Effects of absolute luminance and luminance contrast on visual search in low mesopic environments.

    Science.gov (United States)

    Hunter, Mathew; Godde, Ben; Olk, Bettina

    2018-03-26

    Diverse adaptive visual processing mechanisms allow us to complete visual search tasks across the wide photopic range (>0.6 cd/m²). Whether the search strategies or mechanisms known from this range extend below it, into the mesopic and scotopic luminance spectra, was examined here using more complex feature-search and conjunction-search paradigms. The results verify the previously reported deficiency windows, defined by an interaction of base luminance and luminance contrast, for more complex visual-search tasks. Significant regression analyses allowed a more precise definition of the magnitude of contribution of different contrast parameters: the characterized feature-search patterns had approximately a 2.5:1 ratio of contribution from the Michelson contrast property relative to Weber contrast, whereas the ratio was approximately 1:1 in a serial-search condition. The results implicate near-complete magnocellular isolation in a visual-search paradigm that had yet to be demonstrated. Our analyses provide a new method of characterizing visual search and a first insight into its underlying mechanisms in luminance environments in the low mesopic and scotopic spectra.

  4. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the huge number of images available on the Internet. There have been a number of efforts to build search engines based on image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have attracted great interest as multimedia files, such as images and videos, have dramatically entered our lives over the last decade. CBIR allows target images to be extracted automatically according to the objective visual content of the image itself, for example its shapes, colors, and textures, to provide more accurate ranking of the results. Recent CBIR approaches differ in terms of which image features are extracted and used as image descriptors for the matching process. This thesis proposes improvements to the efficiency and accuracy of CBIR systems by integrating different types of image features. This framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide a more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide higher retrieval accuracy than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form the global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor, and the modified edge histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as a local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. The second approach…
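
    The thesis's exact descriptors are not given in this record; the Python sketch below only illustrates the general fuse-and-rank pattern, using a per-channel color histogram and a crude edge-orientation histogram as simplified stand-ins for the YCbCr histogram, modified Fourier descriptor, modified edge histogram, and SURF matching described above, with Euclidean nearest-neighbour ranking.

    ```python
    import numpy as np

    def color_histogram(img, bins=8):
        """Simplified per-channel color histogram (stand-in for the YCbCr histogram)."""
        hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        h = np.concatenate(hist).astype(float)
        return h / h.sum()

    def edge_histogram(img, bins=8):
        """Crude edge-orientation histogram (stand-in for the modified edge histogram)."""
        gray = img.mean(axis=2)
        gy, gx = np.gradient(gray)
        angles = np.arctan2(gy, gx).ravel()
        h, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        h = h.astype(float)
        return h / max(h.sum(), 1.0)

    def global_features(img):
        """Fuse the individual descriptors into one global feature vector."""
        return np.concatenate([color_histogram(img), edge_histogram(img)])

    def rank_images(query_img, database_imgs):
        """Return database indices sorted by Euclidean distance to the query features."""
        q = global_features(query_img)
        dists = [np.linalg.norm(q - global_features(img)) for img in database_imgs]
        return np.argsort(dists)

    # Toy example with random "images" (H x W x 3 uint8 arrays).
    rng = np.random.default_rng(0)
    database = [rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) for _ in range(5)]
    query = database[2].copy()
    print(rank_images(query, database))  # index 2 should rank first
    ```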

  5. Visual search asymmetries within color-coded and intensity-coded displays.

    Science.gov (United States)

    Yamani, Yusuke; McCarley, Jason S

    2010-06-01

    Color and intensity coding provide perceptual cues to segregate categories of objects within a visual display, allowing operators to search more efficiently for needed information. Even within a perceptually distinct subset of display elements, however, it may often be useful to prioritize items representing urgent or task-critical information. The design of symbology to produce search asymmetries (Treisman & Souther, 1985) offers a potential technique for doing this, but it is not obvious from existing models of search that an asymmetry observed in the absence of extraneous visual stimuli will persist within a complex color- or intensity-coded display. To address this issue, in the current study we measured the strength of a visual search asymmetry within displays containing color- or intensity-coded extraneous items. The asymmetry persisted strongly in the presence of extraneous items that were drawn in a different color (Experiment 1) or a lower contrast (Experiment 2) than the search-relevant items, with the targets favored by the search asymmetry producing highly efficient search. The asymmetry was attenuated but not eliminated when extraneous items were drawn in a higher contrast than search-relevant items (Experiment 3). Results imply that the coding of symbology to exploit visual search asymmetries can facilitate visual search for high-priority items even within color- or intensity-coded displays. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  6. Visual search guidance is best after a short delay

    OpenAIRE

    Schmidt, Joseph; Zelinsky, Gregory J.

    2011-01-01

    Search displays are typically presented immediately after a target cue, but in the real-world, delays often exist between target designation and search. Experiments 1 and 2 asked how search guidance changes with delay. Targets were cued using a picture or text label, each for 3000ms, followed by a delay up to 9000ms before the search display. Search stimuli were realistic objects, and guidance was quantified using multiple eye movement measures. Text-based cues showed a non-significant trend ...

  7. Eye movements and attention in reading, scene perception, and visual search.

    Science.gov (United States)

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  8. A survey on visual information search behavior and requirements of radiologists.

    Science.gov (United States)

    Markonis, D; Holzer, M; Dungs, S; Vargas, A; Langs, G; Kriewel, S; Müller, H

    2012-01-01

    The main objective of this study is to learn more on the image use and search requirements of radiologists. These requirements will then be taken into account to develop a new search system for images and associated meta data search in the Khresmoi project. Observations of the radiology workflow, case discussions and a literature review were performed to construct a survey form that was given online and in paper form to radiologists. Eye tracking was performed on a radiology viewing station to analyze typical tasks and to complement the survey. In total 34 radiologists answered the survey online or on paper. Image search was mentioned as a frequent and common task, particularly for finding cases of interest for differential diagnosis. Sources of information besides the Internet are books and discussions with colleagues. Search for images is unsuccessful in around 25% of the cases, stopping the search after around 10 minutes. The most common reason for failure is that target images are considered rare. Important additions for search requested in the survey are filtering by pathology and modality, as well as search for visually similar images and cases. Few radiologists are familiar with visual retrieval but they desire the option to upload images for searching similar ones. Image search is common in radiology but few radiologists are fully aware of visual information retrieval. Taking into account the many unsuccessful searches and time spent for this, a good image search could improve the situation and help in clinical practice.

  9. Visual search for features and conjunctions following declines in the useful field of view.

    Science.gov (United States)

    Cosman, Joshua D; Lees, Monica N; Lee, John D; Rizzo, Matthew; Vecera, Shaun P

    2012-01-01

    BACKGROUND/STUDY CONTEXT: Typical measures for assessing the useful field of view (UFOV) involve many components of attention. The objective of the current experiment was to examine differences in visual search efficiency for older individuals with and without UFOV impairment. The authors used a computerized screening instrument to assess the useful field of view and to characterize participants as having an impaired or normal UFOV. Participants also performed two visual search tasks, a feature search (e.g., search for a green target among red distractors) or a conjunction search (e.g., a green target with a gap on its left or right side among red distractors with gaps on the left or right and green distractors with gaps on the top or bottom). Visual search performance did not differ between UFOV impaired and unimpaired individuals when searching for a basic feature. However, search efficiency was lower for impaired individuals than unimpaired individuals when searching for a conjunction of features. The results suggest that UFOV decline in normal aging is associated with conjunction search. This finding suggests that the underlying cause of UFOV decline may arise from an overall decline in attentional efficiency. Because the useful field of view is a reliable predictor of driving safety, the results suggest that decline in the everyday visual behavior of older adults might arise from attentional declines.

  10. Long-Term Memory Search across the Visual Brain

    Directory of Open Access Journals (Sweden)

    Milan Fedurco

    2012-01-01

    Signal transmission from the human retina to visual cortex and the connectivity of visual brain areas are relatively well understood. How specific visual perceptions transform into corresponding long-term memories remains unknown. Here, I review recent Blood Oxygenation Level-Dependent functional Magnetic Resonance Imaging (BOLD fMRI) work in humans together with molecular biology studies (animal models) aiming to understand how the retinal image gets transformed into so-called visual (retinotopic) maps. The broken object paradigm has been chosen in order to illustrate the complexity of multisensory perception of simple objects subject to a visual, rather than semantic, type of memory encoding. The author explores how amygdala projections to the visual cortex affect memory formation and proposes the experimental techniques needed to explain our massive visual memory capacity. Maintenance of visual long-term memories is suggested to require recycling of GluR2-containing α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors (AMPARs) and β2-adrenoreceptors at the postsynaptic membrane, which critically depends on the catalytic activity of the N-ethylmaleimide-sensitive factor (NSF) and protein kinase PKMζ.

  11. Investigating the role of visual and auditory search in reading and developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Marie eLallier

    2013-09-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9 or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a serial search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in serial search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills.

  12. Investigating the role of visual and auditory search in reading and developmental dyslexia.

    Science.gov (United States)

    Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane

    2013-01-01

    It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a "serial" search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d') strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in "serial" search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills.

  13. Scanners and drillers: Characterizing expert visual search through volumetric images

    Science.gov (United States)

    Drew, Trafton; Vo, Melissa Le-Hoa; Olwal, Alex; Jacobson, Francine; Seltzer, Steven E.; Wolfe, Jeremy M.

    2013-01-01

    Modern imaging methods like computed tomography (CT) generate 3-D volumes of image data. How do radiologists search through such images? Are certain strategies more efficient? Although there is a large literature devoted to understanding search in 2-D, relatively little is known about search in volumetric space. In recent years, with the ever-increasing popularity of volumetric medical imaging, this question has taken on increased importance as we try to understand, and ultimately reduce, errors in diagnostic radiology. In the current study, we asked 24 radiologists to search chest CTs for lung nodules that could indicate lung cancer. To search, radiologists scrolled up and down through a “stack” of 2-D chest CT “slices.” At each moment, we tracked eye movements in the 2-D image plane and coregistered eye position with the current slice. We used these data to create a 3-D representation of the eye movements through the image volume. Radiologists tended to follow one of two dominant search strategies: “drilling” and “scanning.” Drillers restrict eye movements to a small region of the lung while quickly scrolling through depth. Scanners move more slowly through depth and search an entire level of the lung before moving on to the next level in depth. Driller performance was superior to the scanners on a variety of metrics, including lung nodule detection rate, percentage of the lung covered, and the percentage of search errors where a nodule was never fixated. PMID:23922445
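
    One way such scanpaths could be summarized computationally, assuming gaze samples coregistered with the slice on screen, is sketched below in Python; the spread and scroll-rate features and the thresholds are illustrative assumptions rather than the authors' classification criteria.

    ```python
    import numpy as np

    def classify_search_strategy(gaze_x, gaze_y, slice_idx,
                                 spread_threshold=0.3, scroll_threshold=0.1):
        """Label one scanpath as 'driller' or 'scanner'.

        gaze_x, gaze_y : in-plane gaze coordinates (normalized to 0-1)
        slice_idx      : index of the CT slice on screen at each gaze sample

        Drillers keep gaze in a small in-plane region while scrolling quickly
        through depth; scanners cover a large area of each slice before moving on.
        The thresholds here are illustrative, not the study's criteria.
        """
        x, y, z = np.asarray(gaze_x), np.asarray(gaze_y), np.asarray(slice_idx)
        in_plane_spread = (x.max() - x.min()) * (y.max() - y.min())  # fraction of display area
        scroll_rate = np.abs(np.diff(z)).mean()                      # mean slice change per sample
        if in_plane_spread < spread_threshold and scroll_rate > scroll_threshold:
            return "driller"
        return "scanner"

    # Toy example: gaze confined to one small region while scrolling steadily in depth.
    t = np.arange(200)
    print(classify_search_strategy(0.2 + 0.05 * np.sin(t / 10.0),
                                   0.3 + 0.05 * np.cos(t / 10.0),
                                   t // 4))
    ```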

  14. Journal of Health and Visual Sciences: Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...
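
    As a minimal illustration of this query syntax (case-insensitive matching, implicit AND between words, OR joining alternative words), the Python sketch below evaluates a query against an article's terms; parenthesized sub-queries are deliberately not handled here.

    ```python
    def matches(query, document_terms):
        """Evaluate a simplified AJOL-style query against an article's terms.
        Matching is case-insensitive, words are ANDed by default, and 'OR' joins
        the words on either side of it into a single either-may-match group.
        Parenthesized sub-queries are not handled in this sketch."""
        terms = {t.lower() for t in document_terms}
        tokens = query.lower().split()
        groups, i = [], 0
        while i < len(tokens):
            group = [tokens[i]]
            # Fold "word OR word OR word" into one group of alternatives.
            while i + 2 < len(tokens) + 1 and i + 1 < len(tokens) and tokens[i + 1] == "or":
                group.append(tokens[i + 2])
                i += 2
            groups.append(group)
            i += 1
        return all(any(word in terms for word in group) for group in groups)

    print(matches("education OR research", {"research", "methods"}))   # True
    print(matches("visual search", {"visual", "attention"}))           # False (no 'search')
    ```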

  15. Attentional control during visual search: The effect of irrelevant singletons

    NARCIS (Netherlands)

    Theeuwes, J.; Burger, R.

    1998-01-01

    Four experiments investigated whether a highly salient color singleton can be ignored during serial search. Observers searched for a target letter among nontarget letters and were instructed to ignore an irrelevant, highly salient color singleton that was either compatible or incompatible with the

  16. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  17. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    2000-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  18. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1999-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  19. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    1998-01-01

    The primary goal of the project is to develop a computer-assisted visual search (CAVS) mammography training tool that will improve the perceptual and cognitive skills of trainees leading to mammographic expertise...

  20. Investigating the visual span in comparative search: the effects of task difficulty and divided attention.

    Science.gov (United States)

    Pomplun, M; Reingold, E M; Shen, J

    2001-09-01

    In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.

  1. Visual Search for Feature and Conjunction Targets with an Attention Deficit

    OpenAIRE

    Arguin, Martin; Joanette, Yves; Cavanagh, Patrick

    1993-01-01

    Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980 Treisman & Sato, 1990). Subjects searched for a feature target (orientation or c...

  2. Computational assessment of visual search strategies in volumetric medical images.

    Science.gov (United States)

    Wen, Gezheng; Aizenman, Avigael; Drew, Trafton; Wolfe, Jeremy M; Haygood, Tamara Miner; Markey, Mia K

    2016-01-01

    When searching through volumetric images [e.g., computed tomography (CT)], radiologists appear to use two different search strategies: "drilling" (restrict eye movements to a small region of the image while quickly scrolling through slices), or "scanning" (search over large areas at a given depth before moving on to the next slice). To computationally identify the type of image information that is used in these two strategies, 23 naïve observers were instructed with either "drilling" or "scanning" when searching for target T's in 20 volumes of faux lung CTs. We computed saliency maps using both classical two-dimensional (2-D) saliency, and a three-dimensional (3-D) dynamic saliency that captures the characteristics of scrolling through slices. Comparing observers' gaze distributions with the saliency maps showed that search strategy alters the type of saliency that attracts fixations. Drillers' fixations aligned better with dynamic saliency and scanners with 2-D saliency. The computed saliency was greater for detected targets than for missed targets. Similar results were observed in data from 19 radiologists who searched five stacks of clinical chest CTs for lung nodules. Dynamic saliency may be superior to the 2-D saliency for detecting targets embedded in volumetric images, and thus "drilling" may be more efficient than "scanning."
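
    A generic way to quantify how well fixations align with a saliency map, assuming the map and the fixation coordinates share the same pixel grid, is the normalized scanpath saliency (NSS) score sketched below; this is a standard metric offered for illustration, not necessarily the exact comparison used in the study.

    ```python
    import numpy as np

    def normalized_scanpath_saliency(saliency_map, fix_rows, fix_cols):
        """Normalized Scanpath Saliency (NSS): mean of the z-scored saliency map
        sampled at fixation locations. Values > 0 mean fixations fall on locations
        more salient than the map's average."""
        s = np.asarray(saliency_map, dtype=float)
        z = (s - s.mean()) / (s.std() + 1e-12)
        return z[np.asarray(fix_rows), np.asarray(fix_cols)].mean()

    # Toy example: a saliency "bump" in the image center and fixations near it.
    yy, xx = np.mgrid[0:100, 0:100]
    saliency = np.exp(-(((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 15 ** 2)))
    fix_rows, fix_cols = [48, 52, 55, 60], [50, 47, 53, 58]
    print(round(normalized_scanpath_saliency(saliency, fix_rows, fix_cols), 2))
    ```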

  3. The role of object categories in hybrid visual and memory search.

    Science.gov (United States)

    Cunningham, Corbin A; Wolfe, Jeremy M

    2014-08-01

    In hybrid search, observers search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, observers searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PsycINFO Database Record (c) 2014 APA, all rights reserved.
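
    The central regularity reported above, response time linear in the visual set size but logarithmic in the memory set size, corresponds to a simple descriptive model of the form RT = a + b·N_visual + c·log2(N_memory). The sketch below only illustrates that functional form with made-up coefficients; it is not a fit to the paper's data.

    ```python
    import math

    def hybrid_search_rt(visual_set_size, memory_set_size,
                         base_ms=400.0, ms_per_item=40.0, ms_per_log_memory=120.0):
        """Descriptive RT model for hybrid search: linear in the number of items on
        screen, logarithmic in the number of targets held in memory.
        All coefficients are hypothetical."""
        return (base_ms
                + ms_per_item * visual_set_size
                + ms_per_log_memory * math.log2(max(memory_set_size, 1)))

    # Doubling the memory set adds a constant increment; adding display items adds per item.
    for memory in (1, 2, 4, 8, 16):
        print(memory, round(hybrid_search_rt(visual_set_size=12, memory_set_size=memory)))
    ```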

  4. Use of an augmented-vision device for visual search by patients with tunnel vision.

    Science.gov (United States)

    Luo, Gang; Peli, Eli

    2006-09-01

    To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF, 8°-11° wide) carried out the search over a 90° × 74° area, and nine subjects (VF, 7°-16° wide) carried out the search over a 66° × 52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in the larger and the smaller area searches. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10° in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.

  5. The price of information: Increased inspection costs reduce the confirmation bias in visual search.

    Science.gov (United States)

    Rajsic, Jason; Wilson, Daryl E; Pratt, Jay

    2018-04-01

    In visual search, there is a confirmation bias such that attention is biased towards stimuli that match a target template, which has been attributed to covert costs of updating the templates that guide search [Rajsic, Wilson, & Pratt, 2015. Confirmation bias in visual search. Journal of Experimental Psychology: Human Perception and Performance. Advance online publication. doi:10.1037/xhp0000090]. In order to provide direct evidence for this speculation, the present study increased the cost of inspections in search by using gaze- and mouse-contingent searches, which restrict the manner in which information in search displays can be accrued, and incur additional motor costs (in the case of mouse-contingent searches). In a fourth experiment, we rhythmically mask elements in the search display to induce temporal inspection costs. Our results indicated that confirmation bias is indeed attenuated when inspection costs are increased. We conclude that confirmation bias results from the low-cost strategy of matching information to a single, concrete visual template, and that more sophisticated guidance strategies will be used when sufficiently beneficial. This demonstrates that search guidance itself comes at a cost, and that the form of guidance adopted in a given search depends on a comparison between guidance costs and the expected benefits of their implementation.

  6. Influence of social presence on eye movements in visual search tasks.

    Science.gov (United States)

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.

  7. Toddlers' language-mediated visual search: they need not have the words for it

    NARCIS (Netherlands)

    Johnson, E.K.; McQueen, J.M.; Hüttig, F.

    2011-01-01

    Eye movements made by listeners during language-mediated visual search reveal a strong link between visual processing and conceptual processing. For example, upon hearing the word for a missing referent with a characteristic colour (e.g., “strawberry”), listeners tend to fixate a colour-matched

  8. Implicit short- and long-term memory direct our gaze in visual search

    NARCIS (Netherlands)

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that

  9. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    Science.gov (United States)

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  10. The influence of artificial scotomas on eye movements during visual search

    NARCIS (Netherlands)

    Cornelissen, FW; Bruin, KJ; Kooijman, AC

    Purpose. Fixation durations are normally adapted to the difficulty of the foveal analysis task. We examine to what extent artificial central and peripheral visual field defects interfere with this adaptation process. Methods. Subjects performed a visual search task while their eye movements were

  11. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    Science.gov (United States)

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  12. Contributions from cognitive neuroscience to understanding functional mechanisms of visual search.

    NARCIS (Netherlands)

    Humphreys, G.W.; Hodsoll, J.; Olivers, C.N.L.; Yoon, E.Y.

    2006-01-01

    We argue that cognitive neuroscience can contribute not only information about the neural localization of processes underlying visual search, but also information about the functional nature of these processes. First we present an overview of recent work on whether search for form - colour

  13. Long-Term Priming of Visual Search Prevails against the Passage of Time and Counteracting Instructions

    Science.gov (United States)

    Kruijne, Wouter; Meeter, Martijn

    2016-01-01

    Studies on "intertrial priming" have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change--so-called (short-term) intertrial priming. These effects also occur at longer time scales: When 1 of…

  14. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but the effect of uniform linear display motion on search performance for tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants completed two search tasks that differed in target-distractor shape representation under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from the distractors in that a gap with a linear contour was added to the target while the corresponding part of the distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display undergoes uniform linear motion.

  15. Long-term priming of visual search prevails against the passage of time and counteracting instructions

    NARCIS (Netherlands)

    Kruijne, W.; Meeter, M.

    2016-01-01

    Studies on intertrial priming have shown that in visual search experiments, the preceding trial automatically affects search performance: facilitating it when the target features repeat and giving rise to switch costs when they change-so-called (short-term) intertrial priming. These effects also

  16. Contextual Cueing in Multiconjunction Visual Search Is Dependent on Color- and Configuration-Based Intertrial Contingencies

    Science.gov (United States)

    Geyer, Thomas; Shi, Zhuanghua; Muller, Hermann J.

    2010-01-01

    Three experiments examined memory-based guidance of visual search using a modified version of the contextual-cueing paradigm (Jiang & Chun, 2001). The target, if present, was a conjunction of color and orientation, with target (and distractor) features randomly varying across trials (multiconjunction search). Under these conditions, reaction times…

  17. Facilitation and inhibition of visual display search processes through use of colour

    NARCIS (Netherlands)

    Nes, van F.L.; Juola, J.F.; Moonen, R.J.A.M.

    1987-01-01

    The effect of colour differences on visual search of videotex displays has been investigated in several experiments, including one with accurate measurements of eye movements. Subjects had to search for specific target words on display pages with normal text in one, two or four colours. The

  18. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    Science.gov (United States)

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  19. Pip and pop : Non-spatial auditory signals improve spatial visual search

    NARCIS (Netherlands)

    Burg, E. van der; Olivers, C.N.L.; Bronkhorst, A.W.; Theeuwes, J.

    2008-01-01

    Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though

  20. Central and Peripheral Vision Loss Differentially Affects Contextual Cueing in Visual Search

    Science.gov (United States)

    Geringswald, Franziska; Pollmann, Stefan

    2015-01-01

    Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether central vision loss, simulated with gaze-contingent displays, prevents the incidental…

  1. The role of space and time in object-based visual search

    NARCIS (Netherlands)

    Schreij, D.B.B.; Olivers, C.N.L.

    2013-01-01

    Recently we have provided evidence that observers more readily select a target from a visual search display if the motion trajectory of the display object suggests that the observer has dealt with it before. Here we test the prediction that this object-based memory effect on search breaks down if

  2. Spatial partitions systematize visual search and enhance target memory.

    Science.gov (United States)

    Solman, Grayden J F; Kingstone, Alan

    2017-02-01

    Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment-like buildings, rooms, and furniture-can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.

  3. The guidance of spatial attention during visual search for colour combinations and colour configurations

    OpenAIRE

    Berggren, Nick; Eimer, Martin

    2016-01-01

    Representations of target-defining features (attentional templates) guide the selection of target objects in visual search. We used behavioural and electrophysiological measures to investigate how such search templates control the allocation of attention in search tasks where targets are defined by the combination of two colours or by a specific spatial configuration of these colours. Target displays were preceded by spatially uninformative cue displays that contained items in one or both tar...

  4. Peripheral vision of youths with low vision: motion perception, crowding, and visual search.

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B; Lusk, Kelly E; Corn, Anne L; Lappin, Joseph S

    2012-08-24

    Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10-17) and low vision (n = 24, ages 9-18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function.

  5. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    Science.gov (United States)

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  6. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    OpenAIRE

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the s...

  7. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    Science.gov (United States)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

    In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching with the weighting strategy used in compact descriptors for visual search (CDVS). Finally, the global descriptor matching score and the local descriptor similarity score are summed to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
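
    The paper's scoring details are not reproduced in this record; the Python sketch below only illustrates the general pattern of tf-idf weighted histogram matching combined with a global-descriptor score and a weighted rerank. The cosine similarities and the fixed combining weights are simplified stand-ins for the learned zone weights, and all names and data are hypothetical.

    ```python
    import numpy as np

    def tf_idf_weights(word_histograms):
        """Inverse document frequency over a database of visual-word histograms."""
        df = (np.asarray(word_histograms) > 0).sum(axis=0)
        n_docs = len(word_histograms)
        return np.log((n_docs + 1) / (df + 1)) + 1.0

    def weighted_histogram_score(query_hist, db_hist, idf):
        """Cosine similarity between tf-idf weighted visual-word histograms."""
        q, d = query_hist * idf, db_hist * idf
        return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12))

    def rerank(query_hist, query_global, db_hists, db_globals, weights=(0.6, 0.4)):
        """Combine local (histogram) and global descriptor scores with fixed weights
        and return database indices ordered from best to worst match."""
        idf = tf_idf_weights(db_hists)
        w_local, w_global = weights
        scores = []
        for hist, glob in zip(db_hists, db_globals):
            local_score = weighted_histogram_score(query_hist, hist, idf)
            global_score = float(query_global @ glob /
                                 (np.linalg.norm(query_global) * np.linalg.norm(glob) + 1e-12))
            scores.append(w_local * local_score + w_global * global_score)
        return np.argsort(scores)[::-1]

    # Toy example with random histograms and global descriptors.
    rng = np.random.default_rng(1)
    db_hists = rng.integers(0, 5, size=(6, 32)).astype(float)
    db_globals = rng.normal(size=(6, 16))
    query_hist, query_global = db_hists[3] + rng.random(32), db_globals[3]
    print(rerank(query_hist, query_global, db_hists, db_globals))  # index 3 should rank first
    ```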

  8. Episodic retrieval and feature facilitation in intertrial priming of visual search

    DEFF Research Database (Denmark)

    Asgeirsson, Arni Gunnar; Kristjánsson, Árni

    2011-01-01

    Huang, Holcombe, and Pashler (Memory & Cognition, 32, 12–20, 2004) found that priming from repetition of different features of a target in a visual search task resulted in significant response time (RT) reductions when both target brightness and size were repeated. But when only one feature was repeated and the other changed, RTs were longer than when neither feature was repeated. From this, they argued that priming in visual search reflected episodic retrieval of memory traces, rather than facilitation of repeated features. We tested different variations of the search task...

  9. Effect of marihuana and alcohol on visual search performance

    Science.gov (United States)

    1976-10-01

    Two experiments were performed to determine the effects of alcohol and marihuana on visual scanning patterns in a simulated driving situation. In the first experiment 27 male heavy drinkers were divided into 3 groups of 9, defined by three blood alco...

  10. Optimization of interactive visual-similarity-based search

    NARCIS (Netherlands)

    Nguyen, G.P.; Worring, M.

    2008-01-01

    At one end of the spectrum, research in interactive content-based retrieval concentrates on machine learning methods for effective use of relevance feedback. On the other end, the information visualization community focuses on effective methods for conveying information to the user. What is lacking

  11. A String Search Marketing Application Using Visual Programming

    Science.gov (United States)

    Chin, Jerry M.; Chin, Mary H.; Van Landuyt, Cathryn

    2013-01-01

    This paper demonstrates the use of programming software that provides the student programmer with visual cues for constructing the code for a programming assignment. This method does not disregard or minimize the syntax or the required logical constructs. The student can concentrate more on the logic and less on the language itself.

  12. Efficient visual search from synchronized auditory signals requires transient audiovisual events.

    Directory of Open Access Journals (Sweden)

    Erik Van der Burg

    BACKGROUND: A prevailing view is that audiovisual integration requires temporally coincident signals. However, a recent study failed to find any evidence for audiovisual integration in visual search even when using synchronized audiovisual events. An important question is what information is critical to observe audiovisual integration. METHODOLOGY/PRINCIPAL FINDINGS: Here we demonstrate that temporal coincidence (i.e., synchrony) of auditory and visual components can trigger audiovisual interaction in cluttered displays and consequently produce very fast and efficient target identification. In visual search experiments, subjects found a modulating visual target vastly more efficiently when it was paired with a synchronous auditory signal. By manipulating the kind of temporal modulation (sine wave vs. square wave vs. difference wave; harmonic sine-wave synthesis; gradient of onset/offset ramps), we show that abrupt visual events are required for this search efficiency to occur, and that sinusoidal audiovisual modulations do not support efficient search. CONCLUSIONS/SIGNIFICANCE: Thus, audiovisual temporal alignment will only lead to benefits in visual search if the changes in the component signals are both synchronized and transient. We propose that transient signals are necessary in synchrony-driven binding to avoid spurious interactions with unrelated signals when these occur close together in time.

  13. Multisensory brand search: How the meaning of sounds guides consumers' visual attention.

    Science.gov (United States)

    Knoeferle, Klemens M; Knoeferle, Pia; Velasco, Carlos; Spence, Charles

    2016-06-01

    Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short-term learning in establishing audiovisual semantic associations. The facilitation effect was stronger when searching complex rather than simple visual displays, thus suggesting a modulatory role of perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. Visual narratives: free-hand sketch for visual search and navigation of video.

    OpenAIRE

    James, Stuart

    2016-01-01

    Humans have an innate ability to communicate visually; the earliest forms of communication were cave drawings, and children can communicate visual descriptions of scenes through drawings well before they can write. Drawings and sketches offer an intuitive and efficient means for communicating visual concepts. Today, society faces a deluge of digital visual content driven by a surge in the generation of video on social media and the online availability of video archives. Mobile devices are...

  15. Rapid resumption of interrupted visual search. New insights on the interaction between vision and memory.

    Science.gov (United States)

    Lleras, Alejandro; Rensink, Ronald A; Enns, James T

    2005-09-01

    A modified visual search task demonstrates that humans are very good at resuming a search after it has been momentarily interrupted. This is shown by exceptionally rapid response time to a display that reappears after a brief interruption, even when an entirely different visual display is seen during the interruption and two different visual searches are performed simultaneously. This rapid resumption depends on the stability of the visual scene and is not due to display or response anticipations. These results are consistent with the existence of an iterative hypothesis-testing mechanism that compares information stored in short-term memory (the perceptual hypothesis) with information about the display (the sensory pattern). In this view, rapid resumption occurs because a hypothesis based on a previous glance of the scene can be tested very rapidly in a subsequent glance, given that the initial hypothesis-generation step has already been performed.

  16. Climate and colored walls: in search of visual comfort

    Science.gov (United States)

    Arrarte-Grau, Malvina

    2002-06-01

    The quality of natural light, the landscape surroundings, and the techniques of construction are important factors in the selection of architectural colors. Observation of exterior walls in differentiated climates allows the recognition of particularities in the use of color which satisfy the need for visual comfort. At a distance of 2000 kilometers along the coast of Peru, Lima and Mancora, at 12° and 4° respectively, are well defined by their climatic characteristics: in Mancora sunlight causes high reflection, while in Lima overcast sky and high humidity cause glare. The study of building color effects at these locations serves to illustrate that color values may be controlled in order to achieve visual comfort and contribute to color identity.

  17. Playing shooter and driving videogames improves top-down guidance in visual search.

    Science.gov (United States)

    Wu, Sijing; Spence, Ian

    2013-05-01

    Playing action videogames is known to improve visual spatial attention and related skills. Here, we showed that playing action videogames also improves classic visual search, as well as the ability to locate targets in a dual search that mimics certain aspects of an action videogame. In Experiment 1A, first-person shooter (FPS) videogame players were faster than nonplayers in both feature search and conjunction search, and in Experiment 1B, they were faster and more accurate in a peripheral search and identification task while simultaneously performing a central search. In Experiment 2, we showed that 10 h of play could improve the performance of nonplayers on each of these tasks. Three different genres of videogames were used for training: two action games and a 3-D puzzle game. Participants who played an action game (either an FPS or a driving game) achieved greater gains on all search tasks than did those who trained using the puzzle game. Feature searches were faster after playing an action videogame, suggesting that players developed a better target template to guide search in a top-down manner. The results of the dual search suggest that, in addition to enhancing the ability to divide attention, playing an action game improves the top-down guidance of attention to possible target locations. The results have practical implications for the development of training tools to improve perceptual and cognitive skills.

  18. Functional MRI mapping of visual function and selective attention for performance assessment and presurgical planning using conjunctive visual search.

    Science.gov (United States)

    Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil

    2014-03-01

    Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.

  19. Acute exercise and aerobic fitness influence selective attention during visual search.

    Science.gov (United States)

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.

  20. Acute exercise and aerobic fitness influence selective attention during visual search

    Science.gov (United States)

    Bullock, Tom; Giesbrecht, Barry

    2014-01-01

Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094

  1. Acute exercise and aerobic fitness influence selective attention during visual search

    Directory of Open Access Journals (Sweden)

    Tom eBullock

    2014-11-01

Successful goal-directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-minute intervals over a 2 hour 16 minute test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals who were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention.

  2. Motivation and short-term memory in visual search: Attention's accelerator revisited.

    Science.gov (United States)

    Schneider, Daniel; Bonmassar, Claudia; Hickey, Clayton

    2018-05-01

    A cue indicating the possibility of cash reward will cause participants to perform memory-based visual search more efficiently. A recent study has suggested that this performance benefit might reflect the use of multiple memory systems: when needed, participants may maintain the to-be-remembered object in both long-term and short-term visual memory, with this redundancy benefitting target identification during search (Reinhart, McClenahan & Woodman, 2016). Here we test this compelling hypothesis. We had participants complete a memory-based visual search task involving a reward cue that either preceded presentation of the to-be-remembered target (pre-cue) or followed it (retro-cue). Following earlier work, we tracked memory representation using two components of the event-related potential (ERP): the contralateral delay activity (CDA), reflecting short-term visual memory, and the anterior P170, reflecting long-term storage. We additionally tracked attentional preparation and deployment in the contingent negative variation (CNV) and N2pc, respectively. Results show that only the reward pre-cue impacted our ERP indices of memory. However, both types of cue elicited a robust CNV, reflecting an influence on task preparation, both had equivalent impact on deployment of attention to the target, as indexed in the N2pc, and both had equivalent impact on visual search behavior. Reward prospect thus has an influence on memory-guided visual search, but this does not appear to be necessarily mediated by a change in the visual memory representations indexed by CDA. Our results demonstrate that the impact of motivation on search is not a simple product of improved memory for target templates. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Distractor dwelling, skipping, and revisiting determine target absent performance in difficult visual search

    Directory of Open Access Journals (Sweden)

    Gernot Horstmann

    2016-08-01

Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences.

  4. Distractor Dwelling, Skipping, and Revisiting Determine Target Absent Performance in Difficult Visual Search

    Science.gov (United States)

    Horstmann, Gernot; Herwig, Arvid; Becker, Stefanie I.

    2016-01-01

Some targets in visual search are more difficult to find than others. In particular, a target that is similar to the distractors is more difficult to find than a target that is dissimilar to the distractors. Efficiency differences between easy and difficult searches are manifest not only in target-present trials but also in target-absent trials. In fact, even physically identical displays are searched through with different efficiency depending on the searched-for target. Here, we monitored eye movements in search for a target similar to the distractors (difficult search) versus a target dissimilar to the distractors (easy search). We aimed to examine three hypotheses concerning the causes of differential search efficiencies in target-absent trials: (a) distractor dwelling, (b) distractor skipping, and (c) distractor revisiting. Reaction times increased with target similarity, which is consistent with existing theories and replicates earlier results. Eye movement data indicated guidance in target trials, even though search was very slow. Dwelling, skipping, and revisiting contributed to low search efficiency in difficult search, with dwelling being the strongest factor. It is argued that differences in dwell time account for a large amount of total search time differences. PMID:27574510

  5. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    Science.gov (United States)

    Peltier, Chad; Becker, Mark W

    2017-05-01

Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response, to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
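
    The "informed and statistically driven guess" described above can be illustrated with a simple Bayesian calculation (a minimal sketch under stated assumptions, not the authors' multiple decision model): combine the prevalence rate with the fraction of the display inspected before quitting, assuming the target, if present, is equally likely to be at any item and that inspected items are identified without error. The function name and example values below are illustrative only.

```python
def p_target_present(prevalence: float, fraction_inspected: float) -> float:
    """Posterior probability that a target is present, given that the observer
    has inspected `fraction_inspected` of the display without finding it.

    Illustrative assumptions: the target, if present, is equally likely to be
    at any location, and inspected items are identified without error, so
    P(not found | present) = 1 - fraction_inspected.
    """
    p_not_found_given_present = 1.0 - fraction_inspected   # target hides among uninspected items
    numerator = prevalence * p_not_found_given_present      # P(present and not yet found)
    denominator = numerator + (1.0 - prevalence)            # target-absent trials always yield "not found"
    return numerator / denominator


# Quitting after inspecting half the display still favours a "present" guess
# at 90% prevalence, but not at 10% prevalence.
print(round(p_target_present(0.9, 0.5), 2))   # ~0.82
print(round(p_target_present(0.1, 0.5), 2))   # ~0.05
```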

  6. Reward association facilitates distractor suppression in human visual search.

    Science.gov (United States)

    Gong, Mengyuan; Yang, Feitong; Li, Sheng

    2016-04-01

    Although valuable objects are attractive in nature, people often encounter situations where they would prefer to avoid such distraction while focusing on the task goal. Contrary to the typical effect of attentional capture by a reward-associated item, we provide evidence for a facilitation effect derived from the active suppression of a high reward-associated stimulus when cuing its identity as distractor before the display of search arrays. Selection of the target is shown to be significantly faster when the distractors were in high reward-associated colour than those in low reward-associated or non-rewarded colours. This behavioural reward effect was associated with two neural signatures before the onset of the search display: the increased frontal theta oscillation and the strengthened top-down modulation from frontal to anterior temporal regions. The former suggests an enhanced working memory representation for the reward-associated stimulus and the increased need for cognitive control to override Pavlovian bias, whereas the latter indicates that the boost of inhibitory control is realized through a frontal top-down mechanism. These results suggest a mechanism in which the enhanced working memory representation of a reward-associated feature is integrated with task demands to modify attentional priority during active distractor suppression and benefit behavioural performance. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Working memory load predicts visual search efficiency: Evidence from a novel pupillary response paradigm.

    Science.gov (United States)

    Attar, Nada; Schneps, Matthew H; Pomplun, Marc

    2016-10-01

    An observer's pupil dilates and constricts in response to variables such as ambient and focal luminance, cognitive effort, the emotional stimulus content, and working memory load. The pupil's memory load response is of particular interest, as it might be used for estimating observers' memory load while they are performing a complex task, without adding an interruptive and confounding memory test to the protocol. One important task in which working memory's involvement is still being debated is visual search, and indeed a previous experiment by Porter, Troscianko, and Gilchrist (Quarterly Journal of Experimental Psychology, 60, 211-229, 2007) analyzed observers' pupil sizes during search to study this issue. These authors found that pupil size increased over the course of the search, and they attributed this finding to accumulating working memory load. However, since the pupil response is slow and does not depend on memory load alone, this conclusion is rather speculative. In the present study, we estimated working memory load in visual search during the presentation of intermittent fixation screens, thought to induce a low, stable level of arousal and cognitive effort. Using standard visual search and control tasks, we showed that this paradigm reduces the influence of non-memory-related factors on pupil size. Furthermore, we found an early increase in working memory load to be associated with more efficient search, indicating a significant role of working memory in the search process.

  8. Running the figure to the ground: figure-ground segmentation during visual search.

    Science.gov (United States)

    Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel

    2014-04-01

    We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Tactile search for change has less memory than visual search for change.

    Science.gov (United States)

    Yoshida, Takako; Yamaguchi, Ayumi; Tsutsui, Hideomi; Wake, Tenji

    2015-05-01

    Haptic perception of a 2D image is thought to make heavy demands on working memory. During active exploration, humans need to store the latest local sensory information and integrate it with kinesthetic information from hand and finger locations in order to generate a coherent perception. This tactile integration has not been studied as extensively as visual shape integration. In the current study, we compared working-memory capacity for tactile exploration to that of visual exploration as measured in change-detection tasks. We found smaller memory capacity during tactile exploration (approximately 1 item) compared with visual exploration (2-10 items). These differences generalized to position memory and could not be attributed to insufficient stimulus-exposure durations, acuity differences between modalities, or uncertainty over the position of items. This low capacity for tactile memory suggests that the haptic system is almost amnesic when outside the fingertips and that there is little or no cross-position integration.

  10. Abnormal early brain responses during visual search are evident in schizophrenia but not bipolar affective disorder.

    Science.gov (United States)

    VanMeerten, Nicolaas J; Dubke, Rachel E; Stanwyck, John J; Kang, Seung Suk; Sponheim, Scott R

    2016-01-01

    People with schizophrenia show deficits in processing visual stimuli but neural abnormalities underlying the deficits are unclear and it is unknown whether such functional brain abnormalities are present in other severe mental disorders or in individuals who carry genetic liability for schizophrenia. To better characterize brain responses underlying visual search deficits and test their specificity to schizophrenia we gathered behavioral and electrophysiological responses during visual search (i.e., Span of Apprehension [SOA] task) from 38 people with schizophrenia, 31 people with bipolar disorder, 58 biological relatives of people with schizophrenia, 37 biological relatives of people with bipolar disorder, and 65 non-psychiatric control participants. Through subtracting neural responses associated with purely sensory aspects of the stimuli we found that people with schizophrenia exhibited reduced early posterior task-related neural responses (i.e., Span Endogenous Negativity [SEN]) while other groups showed normative responses. People with schizophrenia exhibited longer reaction times than controls during visual search but nearly identical accuracy. Those individuals with schizophrenia who had larger SENs performed more efficiently (i.e., shorter reaction times) on the SOA task suggesting that modulation of early visual cortical responses facilitated their visual search. People with schizophrenia also exhibited a diminished P300 response compared to other groups. Unaffected first-degree relatives of people with bipolar disorder and schizophrenia showed an amplified N1 response over posterior brain regions in comparison to other groups. Diminished early posterior brain responses are associated with impaired visual search in schizophrenia and appear to be specifically associated with the neuropathology of schizophrenia. Published by Elsevier B.V.

  11. Assessment of brain damage in a geriatric population through use of a visual-searching task.

    Science.gov (United States)

    Turbiner, M; Derman, R M

    1980-04-01

    This study was designed to assess the discriminative capacity of a visual-searching task for brain damage, as described by Goldstein and Kyc (1978), for 10 hospitalized male, brain-damaged patients, 10 hospitalized male schizophrenic patients, and 10 normal subjects in a control group, all of whom were approximately 65 yr. old. The derived data indicated, at a statistically significant level, that the visual-searching task was effective in successfully classifying 80% of the brain-damaged sample when compared to the schizophrenic patients and discriminating 90% of the brain-damaged patients from normal subjects.

  12. Memory and visual search in naturalistic 2D and 3D environments.

    Science.gov (United States)

    Li, Chia-Ling; Aivar, M Pilar; Kit, Dmitry M; Tong, Matthew H; Hayhoe, Mary M

    2016-06-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.

  13. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Directory of Open Access Journals (Sweden)

    Dmitry Kit

Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  14. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Science.gov (United States)

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  15. Priming of pop-out modulates attentional target selection in visual search: Behavioural and electrophysiological evidence

    OpenAIRE

    Eimer, Martin; Kiss, Monika; Cheung, Theodore

    2009-01-01

Previous behavioural studies have shown that the repetition of target or distractor features across trials speeds pop-out visual search. We obtained behavioural and event-related brain potential (ERP) measures in two experiments where participants searched for a colour singleton target among homogeneously coloured distractors. An ERP marker of spatially selective attention (N2pc component) was delayed when either target or distractor colours were swapped across successive trials, demonstrating ...

  16. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    Science.gov (United States)

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface: (1) using a formal ontology to help users build domain-specific knowledge and vocabulary, and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multi-stage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multi-stage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts.
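
    As a rough illustration of the two strategies described above, the sketch below expands a user query with terms from a small hand-made ontology fragment and then triages the returned citations in stages by requiring more of the expanded terms to match. The ontology fragment, function names, and citation strings are toy placeholders for illustration; they are not the OVERT-MED implementation or the actual HPO.

```python
from typing import Dict, List, Set

# Toy ontology fragment: each term maps to its narrower (child) terms.
# A real system would load a formal ontology such as the HPO instead.
ONTOLOGY: Dict[str, List[str]] = {
    "abnormal eye movement": ["nystagmus", "saccadic intrusion"],
    "nystagmus": ["horizontal nystagmus"],
}

def expand_query(term: str) -> Set[str]:
    """Collect the query term plus all of its descendants in the ontology."""
    terms, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in terms:
            terms.add(t)
            stack.extend(ONTOLOGY.get(t, []))
    return terms

def triage(citations: List[str], query: str, min_hits: int) -> List[str]:
    """Keep citations whose text mentions at least `min_hits` expanded terms
    (naive substring matching, purely for illustration)."""
    vocab = expand_query(query)
    return [c for c in citations
            if sum(term in c.lower() for term in vocab) >= min_hits]

citations = [
    "Report of horizontal nystagmus with congenital cataract",
    "Saccadic intrusion observed during routine exam",
    "Visual search training in healthy adults",
]
# Stage 1: broad recall; Stage 2: stricter filter for closer reading.
print(triage(citations, "abnormal eye movement", min_hits=1))  # two citations survive
print(triage(citations, "abnormal eye movement", min_hits=2))  # only the first survives
```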

  17. Memory for found targets interferes with subsequent performance in multiple-target visual search.

    Science.gov (United States)

    Cain, Matthew S; Mitroff, Stephen R

    2013-10-01

    Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  18. Stronger interference from distractors in the right hemifield during visual search.

    Science.gov (United States)

    Carlei, Christophe; Kerzel, Dirk

    2018-03-01

    The orientation-bias hypothesis states that there is a bias to attend to the right visual hemifield (RVF) when there is spatial competition between stimuli in the left and right hemifield [Pollmann, S. (1996). A pop-out induced extinction-like phenomenon in neurologically intact subjects. Neuropsychologia, 34(5), 413-425. doi: 10.1016/0028-3932(95)00125-5 ]. In support of this hypothesis, stronger interference was reported for RVF distractors with contralateral targets. In contrast, previous studies using rapid serial visual presentation (RSVP) found stronger interference from distractors in the left visual hemifield (LVF). We used the additional singleton paradigm to test whether this discrepancy was due to the different distractor features that were employed (colour vs. orientation). Interference from the colour distractor with contralateral targets was larger in the RVF than in the LVF. However, the asymmetrical interference disappeared when observers had to search for an inconspicuous colour target instead of the inconspicuous shape target. We suggest that the LVF orienting-bias is limited to situations where search is driven by bottom-up saliency (singleton search) instead of top-down search goals (feature search). In contrast, analysis of the literature suggests the opposite for the LVF bias in RSVP tasks. Thus, the attentional asymmetry may depend on whether the task involves temporal or spatial competition, and whether search is based on bottom-up or top-down signals.

  19. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    Science.gov (United States)

    Sung, Kyongje; Gordon, Barry

    2018-01-01

Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.

  20. Explicit awareness supports conditional visual search in the retrieval guidance paradigm.

    Science.gov (United States)

    Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P

    2014-01-01

In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented and performance in this task was associated with qualitative differences in search behavior such that participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a similar pattern of results to Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. In the General discussion section we explore how a recent computational model of

  1. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    Science.gov (United States)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspect of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspect of this area. Three visual search tasks of different difficulty were carried out: the "easy feature task," the "hard feature task," and the "conjunction task." To investigate the temporal aspect of the PPC involved in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. The magnetic stimulation was applied over the right PPC or the left PPC with a figure-eight coil. The results show that the reaction times of the hard feature task are longer than those of the easy feature task. At SOA = 150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We consider that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: the magnetic stimulation to the right PPC disturbed the processing of the visual search, whereas the magnetic stimulation to the left PPC had no effect on the processing of the visual search.

  2. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task

    Directory of Open Access Journals (Sweden)

    Jingyi S. Chia

    2017-06-01

The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made, and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by a significantly greater number of fixations on more areas of interest per trial than that of the less skilled players. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance), however, did not differ significantly between groups, which has implications for the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed, especially within the skilled players. This underscores the need for not just group-level analyses but also individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide insight into the possible visual search strategies as players serve in net-barrier games. Moreover, this study highlighted an important aspect of

  3. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  4. Mental workload while driving: effects on visual search, discrimination, and decision making.

    Science.gov (United States)

    Recarte, Miguel A; Nunes, Luis M

    2003-06-01

The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.

  5. Posterior α EEG Dynamics Dissociate Current from Future Goals in Working Memory-Guided Visual Search.

    Science.gov (United States)

    de Vries, Ingmar E J; van Driel, Joram; Olivers, Christian N L

    2017-02-08

    Current models of visual search assume that search is guided by an active visual working memory representation of what we are currently looking for. This attentional template for currently relevant stimuli can be dissociated from accessory memory representations that are only needed prospectively, for a future task, and that should be prevented from guiding current attention. However, it remains unclear what electrophysiological mechanisms dissociate currently relevant (serving upcoming selection) from prospectively relevant memories (serving future selection). We measured EEG of 20 human subjects while they performed two consecutive visual search tasks. Before the search tasks, a cue instructed observers which item to look for first (current template) and which second (prospective template). During the delay leading up to the first search display, we found clear suppression of α band (8-14 Hz) activity in regions contralateral to remembered items, comprising both local power and interregional phase synchronization within a posterior parietal network. Importantly, these lateralization effects were stronger when the memory item was currently relevant (i.e., for the first search) compared with when it was prospectively relevant (i.e., for the second search), consistent with current templates being prioritized over future templates. In contrast, event-related potential analysis revealed that the contralateral delay activity was similar for all conditions, suggesting no difference in storage. Together, these findings support the idea that posterior α oscillations represent a state of increased processing or excitability in task-relevant cortical regions, and reflect enhanced cortical prioritization of memory representations that serve as a current selection filter. SIGNIFICANCE STATEMENT Our days are filled with looking for relevant objects while ignoring irrelevant visual information. Such visual search activity is thought to be driven by current goals activated in

  6. Where perception meets memory: a review of repetition priming in visual search tasks.

    Science.gov (United States)

    Kristjánsson, Arni; Campana, Gianluca

    2010-01-01

What we have recently seen and attended to strongly influences how we subsequently allocate visual attention. A clear example is how repeated presentation of an object's features or location in visual search tasks facilitates subsequent detection or identification of that item, a phenomenon known as priming. Here, we review a large body of results from priming studies that suggest that a short-term implicit memory system guides our attention to recently viewed items. The nature of this memory system and the processing level at which visual priming occurs are still debated. Priming might be due to activity modulations of low-level areas coding simple stimulus characteristics or to higher level episodic memory representations of whole objects or visual scenes. Indeed, recent evidence indicates that only minor changes to the stimuli used in priming studies may alter the processing level at which priming occurs. We also review recent behavioral, neuropsychological, and neurophysiological evidence that indicates that the priming patterns are reflected in activity modulations at multiple sites along the visual pathways. We furthermore suggest that studies of priming in visual search may potentially shed important light on the nature of cortical visual representations. Our conclusion is that priming occurs at many different levels of the perceptual hierarchy, reflecting activity modulations ranging from lower to higher levels, depending on the stimulus, task, and context; in fact, priming occurs at the neural loci that are involved in the analysis of the stimuli for which priming effects are seen.

  7. Implicit short- and long-term memory direct our gaze in visual search.

    Science.gov (United States)

    Kruijne, Wouter; Meeter, Martijn

    2016-04-01

Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, a target that is presented more often than others may facilitate search, even long after that bias is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both long- and short-term priming effects. Long-term priming persisted long after the bias was no longer present, and was again found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.

  8. Visual search in school-aged children with unilateral brain lesions

    NARCIS (Netherlands)

    Netelenbos, J.B.; de Rooij, L.

    2004-01-01

    In this preliminary study, visual search for targets within and beyond the initial field of view was investigated in seven school-aged children (five females, two males; mean age at testing 8 years 10 months, SD 1 year 3 months; range 6 to 10 years) with various acquired, postnatal, focal brain

  9. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search.

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  10. The function of visual search and memory in sequential looking tasks

    NARCIS (Netherlands)

    J. Epelboim (Julie); R.M. Steinman (Robert); E. Kowler (Eileen); M. Edwards (Mark); Z. Pizlo (Zygmunt); D.W. Erkelens (Dirk Willem); H. Collewijn (Han)

    1995-01-01

Eye and head movements were recorded as unrestrained subjects tapped or only looked at nearby targets. Scanning patterns were the same in both tasks: subjects looked at each target before tapping it; visual search had similar speeds and gaze-shift accuracies. Looking, however, took longer

  11. The effects of link format and screen location on visual search of web pages.

    Science.gov (United States)

    Ling, Jonathan; Van Schaik, Paul

    2004-06-22

    Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

  12. What Are the Shapes of Response Time Distributions in Visual Search?

    Science.gov (United States)

    Palmer, Evan M.; Horowitz, Todd S.; Torralba, Antonio; Wolfe, Jeremy M.

    2011-01-01

    Many visual search experiments measure response time (RT) as their primary dependent variable. Analyses typically focus on mean (or median) RT. However, given enough data, the RT distribution can be a rich source of information. For this paper, we collected about 500 trials per cell per observer for both target-present and target-absent displays…

  13. VisualRank: applying PageRank to large-scale image search.

    Science.gov (United States)

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text-search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large scale deployment in commercial search engines.
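
    The ranking step described above amounts to running PageRank over a graph whose edges are visual similarities between images rather than hyperlinks; the images with the highest stationary probability are the "authorities". Below is a minimal power-iteration sketch of that idea; the similarity matrix, damping factor, and convergence settings are illustrative placeholders rather than values from the paper.

```python
import numpy as np

def visual_rank(similarity: np.ndarray, damping: float = 0.85,
                iters: int = 100, tol: float = 1e-9) -> np.ndarray:
    """Power iteration for PageRank on a visual-similarity graph.

    `similarity[i, j]` is the non-negative visual similarity between images
    i and j; each row is normalized to form a stochastic transition matrix.
    """
    n = similarity.shape[0]
    row_sums = similarity.sum(axis=1, keepdims=True)
    # Rows with no similarity mass fall back to a uniform transition.
    transition = np.where(row_sums > 0,
                          similarity / np.maximum(row_sums, 1e-12),
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new_rank = (1 - damping) / n + damping * (rank @ transition)
        delta = np.abs(new_rank - rank).sum()
        rank = new_rank
        if delta < tol:
            break
    return rank

# Toy example: image 0 is visually similar to all the others, so it receives
# the highest score and would be returned first as the "authority" image.
sim = np.array([[0.0, 0.9, 0.8, 0.7],
                [0.9, 0.0, 0.1, 0.0],
                [0.8, 0.1, 0.0, 0.1],
                [0.7, 0.0, 0.1, 0.0]])
print(visual_rank(sim).round(3))
```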

  14. An individual differences approach to multiple-target visual search errors: How search errors relate to different characteristics of attention.

    Science.gov (United States)

    Adamo, Stephen H; Cain, Matthew S; Mitroff, Stephen R

    2017-12-01

A persistent problem in visual search is that searchers are more likely to miss a target if they have already found another in the same display. This phenomenon, the Subsequent Search Miss (SSM) effect, has remained despite being a known issue for decades. Increasingly, evidence supports a resource depletion account of SSM errors-a previously detected target consumes attentional resources leaving fewer resources available for the processing of a second target. However, "attention" is broadly defined and is composed of many different characteristics, leaving considerable uncertainty about how attention affects second-target detection. The goal of the current study was to identify which attentional characteristics (i.e., selection, limited capacity, modulation, and vigilance) related to second-target misses. The current study compared second-target misses to an attentional blink task and a vigilance task, which both have established measures that were used to operationally define each of four attentional characteristics. Second-target misses in the multiple-target search were correlated with (1) a measure of the time it took for the second target to recover from the blink in the attentional blink task (i.e., modulation), and (2) target sensitivity (d') in the vigilance task (i.e., vigilance). Participants with longer recovery and poorer vigilance had more second-target misses in the multiple-target visual search task. The results add further support to a resource depletion account of SSM errors and highlight that worse modulation and poor vigilance reflect a deficit in attentional resources that can account for SSM errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Cultural differences in attention: Eye movement evidence from a comparative visual search task.

    Science.gov (United States)

    Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D

    2017-10-01

    Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.
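
    Scan-path similarity between participants can be quantified in several ways; one common approach (not necessarily the metric used by the authors) codes each fixation by the scene region it lands in and compares the resulting strings with an edit distance. A minimal sketch with hypothetical region sequences:

        def edit_distance(a, b):
            """Levenshtein distance between two region-label strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,               # deletion
                                    curr[j - 1] + 1,           # insertion
                                    prev[j - 1] + (ca != cb))) # substitution
                prev = curr
            return prev[-1]

        def scanpath_similarity(a, b):
            """Normalised similarity in [0, 1]; 1 means identical region sequences."""
            return 1.0 - edit_distance(a, b) / max(len(a), len(b))

        # Two hypothetical fixation sequences coded by scene region (A, B, C, ...).
        print(scanpath_similarity("ABCCDE", "ABDCE"))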

  16. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    Science.gov (United States)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and

  17. Neural basis of feature-based contextual effects on visual search behavior

    Directory of Open Access Journals (Sweden)

    Kelly eShen

    2012-01-01

    Searching for a visual object is known to be adaptable to context, and it is thought to result from the selection of neural representations distributed on a visual salience map, wherein stimulus-driven and goal-directed signals are combined. Here we investigated the neural basis of this adaptability by recording superior colliculus (SC) neurons while three female rhesus monkeys (Macaca mulatta) searched with saccadic eye movements for a target presented in an array of visual stimuli whose feature composition varied from trial to trial. We found that sensory-motor activity associated with distracters was enhanced or suppressed depending on the search array composition and that it corresponded to the monkey's search strategy, as assessed by the distribution of the occasional errant saccades. This feature-related modulation occurred independently from the saccade goal and facilitated the process of saccade target selection. We also observed feature-related enhancement in the activity associated with distracters that had been the search target during the previous session. Consistent with recurrent processing, both feature-related neuronal modulations occurred more than 60 ms after the onset of the visually evoked responses, and their near coincidence with the time of saccade target selection suggests that they are integral to this process. These results suggest that SC neuronal activity is shaped by the visual context as dictated by both stimulus-driven and goal-directed signals. Given the close proximity of the SC to the motor circuit, our findings suggest a direct link between perception and action and no need for distinct salience and motor maps.

  18. Modulation of neuronal responses during covert search for visual feature conjunctions.

    Science.gov (United States)

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  19. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  20. Differential effects of parietal and frontal inactivations on reaction times distributions in a visual search task

    Directory of Open Access Journals (Sweden)

    Claire eWardak

    2012-06-01

    The posterior parietal cortex participates in numerous cognitive functions, from perceptual to attentional and decisional processes. However, the same functions have also been attributed to the frontal cortex. We previously conducted a series of reversible inactivations of the lateral intraparietal area (LIP) and of the frontal eye field (FEF) in the monkey, which showed impairments in covert visual search performance, characterized mainly by an increase in the mean reaction time (RT) necessary to detect a contralesional target. Only subtle differences were observed between the inactivation effects in both areas. In particular, the magnitude of the deficit was dependent on search task difficulty for LIP, but not for FEF. In the present study, we re-examine these data in order to try to dissociate the specific involvement of these two regions, by considering the entire RT distribution instead of mean RT. We use the LATER model to help us interpret the effects of the inactivations with regard to information accumulation rate and decision processes. We show that: (1) different search strategies can be used by monkeys to perform visual search, either by processing the visual scene in parallel, or by combining parallel and serial processes; (2) LIP and FEF inactivations have very different effects on the RT distributions in the two monkeys. Although our results are not conclusive with regard to the exact functional mechanisms affected by the inactivations, the effects we observe on RT distributions could be accounted for by an involvement of LIP in saliency representation or decision-making, and an involvement of FEF in attentional shifts and perception. Finally, we observe that the use of the LATER model is limited in the context of a visual search as it cannot fit all the behavioural strategies encountered. We propose that the diversity in search strategies observed in our monkeys also exists in individual human subjects and should be considered in future
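
    In the LATER model, a decision signal rises linearly from a starting level to a threshold at a rate that varies across trials as a Gaussian, so the reciprocal of latency is approximately normally distributed. The sketch below simulates this with illustrative parameter values; they are not the fitted values from the inactivation data.

        import numpy as np

        def simulate_later(mu_rate=5.0, sigma_rate=1.0, threshold=1.0,
                           n_trials=10000, seed=0):
            """LATER: the rate of rise r is drawn from Normal(mu_rate, sigma_rate)
            on each trial and latency = threshold / r. Non-positive rates are
            discarded here for simplicity."""
            rng = np.random.default_rng(seed)
            rates = rng.normal(mu_rate, sigma_rate, n_trials)
            return threshold / rates[rates > 0]

        # A lower accumulation rate (e.g., after an inactivation) shifts the whole
        # distribution of reciprocal latencies, visible as a change in its mean.
        for mu in (5.0, 4.0):
            latencies = simulate_later(mu_rate=mu)
            print(mu, latencies.mean().round(3), (1.0 / latencies).mean().round(3))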

  1. Footprints: A Visual Search Tool that Supports Discovery and Coverage Tracking.

    Science.gov (United States)

    Isaacs, Ellen; Domico, Kelly; Ahern, Shane; Bart, Eugene; Singhal, Mudita

    2014-12-01

    Searching a large document collection to learn about a broad subject involves the iterative process of figuring out what to ask, filtering the results, identifying useful documents, and deciding when one has covered enough material to stop searching. We are calling this activity "discoverage," discovery of relevant material and tracking coverage of that material. We built a visual analytic tool called Footprints that uses multiple coordinated visualizations to help users navigate through the discoverage process. To support discovery, Footprints displays topics extracted from documents that provide an overview of the search space and are used to construct searches visuospatially. Footprints allows users to triage their search results by assigning a status to each document (To Read, Read, Useful), and those status markings are shown on interactive histograms depicting the user's coverage through the documents across dates, sources, and topics. Coverage histograms help users notice biases in their search and fill any gaps in their analytic process. To create Footprints, we used a highly iterative, user-centered approach in which we conducted many evaluations during both the design and implementation stages and continually modified the design in response to feedback.

  2. Memory under pressure: secondary-task effects on contextual cueing of visual search.

    Science.gov (United States)

    Annac, Efsun; Manginelli, Angela A; Pollmann, Stefan; Shi, Zhuanghua; Müller, Hermann J; Geyer, Thomas

    2013-11-04

    Repeated display configurations improve visual search. Recently, the question has arisen whether this contextual cueing effect (Chun & Jiang, 1998) is itself mediated by attention, both in terms of selectivity and processing resources deployed. While it is accepted that selective attention modulates contextual cueing (Jiang & Leung, 2005), there is an ongoing debate whether the cueing effect is affected by a secondary working memory (WM) task, specifically at which stage WM influences the cueing effect: the acquisition of configural associations (e.g., Travis, Mattingley, & Dux, 2013) versus the expression of learned associations (e.g., Manginelli, Langer, Klose, & Pollmann, 2013). The present study re-investigated this issue. Observers performed a visual search in combination with a spatial WM task. The latter was applied on either early or late search trials--so as to examine whether WM load hampers the acquisition of or retrieval from contextual memory. Additionally, the WM and search tasks were performed either temporally in parallel or in succession--so as to permit the effects of spatial WM load to be dissociated from those of executive load. The secondary WM task was found to affect cueing in late, but not early, experimental trials--though only when the search and WM tasks were performed in parallel. This pattern suggests that contextual cueing involves a spatial WM resource, with spatial WM providing a workspace linking the current search array with configural long-term memory; as a result, occupying this workspace by a secondary WM task hampers the expression of learned configural associations.

  3. A Lifelog Browser for Visualization and Search of Mobile Everyday-Life

    Directory of Open Access Journals (Sweden)

    Keum-Sung Hwang

    2014-01-01

    Mobile devices can now handle a great deal of information thanks to the convergence of diverse functionalities. Mobile environments have already shown great potential for providing customized services to users because they can record meaningful and private information continually over long periods of time. Research on understanding, searching, and summarizing people's everyday lives has received increasing attention in recent years due to this digital convergence. In this paper, we propose a mobile life browser, which visualizes and searches a person's mobile life based on the contents and context of lifelog data. The mobile life browser supports effective search of the personal information collected on a user's mobile device and offers a concept-based searching method built on concept networks and Bayesian networks. In the experiments, we collected real mobile log data from three users over one month and visualized the users' mobile lives with the developed mobile life browser. Tests on searching tasks confirmed that the results of the proposed concept-based searching method are promising.

  4. Rare, but obviously there: effects of target frequency and salience on visual search accuracy.

    Science.gov (United States)

    Biggs, Adam T; Adamo, Stephen H; Mitroff, Stephen R

    2014-10-01

    Accuracy can be extremely important for many visual search tasks. However, numerous factors work to undermine successful search. Several negative influences on search have been well studied, yet one potentially influential factor has gone almost entirely unexplored-namely, how is search performance affected by the likelihood that a specific target might appear? A recent study demonstrated that when specific targets appear infrequently (i.e., once in every thousand trials) they were, on average, not often found. Even so, some infrequently appearing targets were actually found quite often, suggesting that the targets' frequency is not the only factor at play. Here, we investigated whether salience (i.e., the extent to which an item stands out during search) could explain why some infrequent targets are easily found whereas others are almost never found. Using the mobile application Airport Scanner, we assessed how individual target frequency and salience interacted in a visual search task that included a wide array of targets and millions of trials. Target frequency and salience were both significant predictors of search accuracy, although target frequency explained more of the accuracy variance. Further, when examining only the rarest target items (those that appeared on less than 0.15% of all trials), there was a significant relationship between salience and accuracy such that less salient items were less likely to be found. Beyond implications for search theory, these data suggest significant vulnerability for real-world searches that involve targets that are both infrequent and hard-to-spot. Copyright © 2014 Elsevier B.V. All rights reserved.
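
    One simple way to ask whether target frequency and salience jointly predict hit probability is a logistic regression over single trials. The sketch below uses simulated data with assumed coefficients purely to illustrate the analysis; it is not the Airport Scanner dataset or the authors' exact model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n_trials = 5000
        log_frequency = rng.uniform(-7.0, -1.0, n_trials)  # log of target appearance rate
        salience = rng.uniform(0.0, 1.0, n_trials)         # how much the target stands out

        # Simulate hits/misses in which both predictors raise the hit probability.
        p_hit = 1.0 / (1.0 + np.exp(-(1.0 + 0.6 * log_frequency + 1.5 * salience)))
        hit = rng.binomial(1, p_hit)

        X = np.column_stack([log_frequency, salience])
        model = LogisticRegression().fit(X, hit)
        print(model.coef_, model.intercept_)  # both coefficients recovered as positive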

  5. Evaluation of a prototype search and visualization system for exploring scientific communities.

    Science.gov (United States)

    Bales, Michael E; Kaufman, David R; Johnson, Stephen B

    2009-11-14

    Searches of bibliographic databases generate lists of articles but do little to reveal connections between authors, institutions, and grants. As a result, search results cannot be fully leveraged. To address this problem we developed Sciologer, a prototype search and visualization system. Sciologer presents the results of any PubMed query as an interactive network diagram of the above elements. We conducted a cognitive evaluation with six neuroscience and six obesity researchers. Researchers used the system effectively. They used geographic, color, and shape metaphors to describe community structure and made accurate inferences pertaining to a) collaboration among research groups; b) prominence of individual researchers; and c) differentiation of expertise. The tool confirmed certain beliefs, disconfirmed others, and extended their understanding of their own discipline. The majority indicated the system offered information of value beyond a traditional PubMed search and that they would use the tool if available.

  6. Age differences in visual search for compound patterns: long- versus short-range grouping.

    Science.gov (United States)

    Burack, J A; Enns, J T; Iarocci, G; Randolph, B

    2000-11-01

    Visual search for compound patterns was examined in observers aged 6, 8, 10, and 22 years. The main question was whether age-related improvement in search rate (response time slope over number of items) was different for patterns defined by short- versus long-range spatial relations. Perceptual access to each type of relation was varied by using elements of same contrast (easy to access) or mixed contrast (hard to access). The results showed large improvements with age in search rate for long-range targets; search rate for short-range targets was fairly constant across age. This pattern held regardless of whether perceptual access to a target was easy or hard, supporting the hypothesis that different processes are involved in perceptual grouping at these two levels. The results also point to important links between ontogenic and microgenic change in perception (H. Werner, 1948, 1957).

  7. Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.

    Science.gov (United States)

    Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru

    2015-01-01

    Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.

  8. Learning where to look: electrophysiological and behavioral indices of visual search in young and old subjects.

    Science.gov (United States)

    Looren de Jong, H; Kok, A; Woestenburg, J C; Logman, C J; Van Rooy, J C

    1988-06-01

    The present investigation explores the way young and elderly subjects use regularities in target location in a visual display to guide search for targets. Although both young and old subjects show efficient use of search strategies, slight but reliable differences in reaction times suggest decreased ability in the elderly to use complex cues. Event-related potentials were very different for the young and the old. In the young, P3 amplitudes were larger on trials where the rule that governed the location of the target became evident; this was interpreted as an effect of memory updating. Enhanced positive Slow Wave amplitude indicated uncertainty in random search conditions. Elderly subjects' P3 and SW, however, seemed unrelated to behavioral performance, and they showed a large negative Slow Wave at central and parietal sites to randomly located targets. The latter finding was tentatively interpreted as a sign of increased effort in the elderly to allocate attention in visual space. This pattern of behavioral and ERP results suggests that age-related differences in search tasks can be understood in terms of changes in the strategy of allocating visual attention.

  9. The eye movements of dyslexic children during reading and visual search: impact of the visual attention span.

    Science.gov (United States)

    Prado, Chloé; Dubois, Matthieu; Valdois, Sylviane

    2007-09-01

    The eye movements of 14 French dyslexic children with a reduced VA span and 14 normal readers were compared in two tasks: visual search and text reading. The dyslexic participants made a higher number of rightward fixations in reading only. They simultaneously processed the same low number of letters in both tasks, whereas normal readers processed far more letters in reading. Importantly, the children's VA span abilities related to the number of letters simultaneously processed in reading. The atypical eye movements of some dyslexic readers in reading thus appear to reflect difficulties in increasing their VA span to meet the task demands.

  10. Visual search and urban driving under the influence of marijuana and alcohol.

    Science.gov (United States)

    Lamers, C. T. J.; Ramaekers, J. G.

    2001-07-01

    The purpose of the present study was to assess the effects of low doses of marijuana and alcohol, and their combination, on visual search at intersections and on general driving proficiency in the City Driving Test. Sixteen recreational users of alcohol and marijuana (eight males and eight females) were treated with these substances or placebo according to a balanced, 4-way, cross-over, observer- and subject-blind design. On separate evenings, subjects received weight-calibrated doses of THC, alcohol or placebo in each of the following treatment conditions: alcohol placebo + THC placebo, alcohol + THC placebo, THC 100 μg/kg + alcohol placebo, THC 100 μg/kg + alcohol. Alcohol doses administered were sufficient for achieving a blood alcohol concentration (BAC) of about 0.05 g/dl. Initial drinking preceded smoking by one hour. The City Driving Test commenced 15 minutes after smoking and lasted 45 minutes. The test was conducted over a fixed route within the city limits of Maastricht. An eye movement recording system was mounted on each subject's head for providing relative frequency measures of appropriate visual search at intersections. General driving quality was rated by a licensed driving instructor on a shortened version of the Royal Dutch Tourist Association's Driving Proficiency Test. After placebo treatment subjects searched for traffic approaching from side streets on the right in 84% of all cases. Visual search frequency in these subjects did not change when they were treated with alcohol or marijuana alone. However, when treated with the combination of alcohol and marijuana, the frequency of visual search dropped by 3%. Performance as rated on the Driving Proficiency Scale did not differ between treatments. It was concluded that the effects of low doses of THC (100 μg/kg) and alcohol (BAC < 0.05 g/dl) on higher-level driving skills as measured in the present study are minimal. Copyright 2001 John Wiley & Sons, Ltd.

  11. Adding a Visualization Feature to Web Search Engines: It’s Time

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.

    2008-11-11

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  12. Visual search for emotional expressions: Effect of stimulus set on anger and happiness superiority.

    Science.gov (United States)

    Savage, Ruth A; Becker, Stefanie I; Lipp, Ottmar V

    2016-01-01

    Prior reports of preferential detection of emotional expressions in visual search have yielded inconsistent results, even for face stimuli that avoid obvious expression-related perceptual confounds. The current study investigated inconsistent reports of anger and happiness superiority effects using face stimuli drawn from the same database. Experiment 1 excluded procedural differences as a potential factor, replicating a happiness superiority effect in a procedure that previously yielded an anger superiority effect. Experiments 2a and 2b confirmed that image colour or poser gender did not account for prior inconsistent findings. Experiments 3a and 3b identified stimulus set as the critical variable, revealing happiness or anger superiority effects for two partially overlapping sets of face stimuli. The current results highlight the critical role of stimulus selection for the observation of happiness or anger superiority effects in visual search even for face stimuli that avoid obvious expression related perceptual confounds and are drawn from a single database.

  13. Visual Search and Target Cueing: A Comparison of Head-Mounted Versus Hand-Held Displays on the Allocation of Visual Attention

    National Research Council Canada - National Science Library

    Yeh, Michelle; Wickens, Christopher D

    1998-01-01

    We conducted a study to examine the effects of target cueing and conformality with a hand-held or head-mounted display to determine their effects on visual search tasks requiring focused and divided attention...

  14. From foreground to background: how task-neutral context influences contextual cueing of visual search

    Directory of Open Access Journals (Sweden)

    Xuelian eZang

    2016-06-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang & Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.

  15. Do People Take Stimulus Correlations into Account in Visual Search (Open Source)

    Science.gov (United States)

    2016-03-10

    Citation: Bhardwaj M, van den Berg R, Ma WJ, Josić K (2016) Do People Take Stimulus Correlations into Account in Visual Search? PLoS ONE, doi:10.1371/journal.pone.0149402. Only fragments of the article text are preserved in this record; the recoverable portion notes that different values of ρ, larger set sizes, and more extensive training could shed more light on how exactly people misestimate stimulus correlations in visual search.

  16. Sleep-effects on implicit and explicit memory in repeated visual search.

    Science.gov (United States)

    Geyer, Thomas; Mueller, Hermann J; Assumpcao, Leonardo; Gais, Steffen

    2013-01-01

    In repeated visual search tasks, facilitation of reaction times (RTs) due to repetition of the spatial arrangement of items occurs independently of RT facilitation due to improvements in general task performance. Whereas the latter represents typical procedural learning, the former is a kind of implicit memory that depends on the medial temporal lobe (MTL) memory system and is impaired in patients with amnesia. A third type of memory that develops during visual search is the observers' explicit knowledge of repeated displays. Here, we used a visual search task to investigate whether procedural memory, implicit contextual cueing, and explicit knowledge of repeated configurations, which all arise independently from the same set of stimuli, are influenced by sleep. Observers participated in two experimental sessions, separated by either a nap or a controlled rest period. In each of the two sessions, they performed a visual search task in combination with an explicit recognition task. We found that (1) across sessions, MTL-independent procedural learning was more pronounced for the nap than rest group. This confirms earlier findings, albeit from different motor and perceptual tasks, showing that procedural memory can benefit from sleep. (2) Likewise, the sleep group compared with the rest group showed enhanced context-dependent configural learning in the second session. This is a novel finding, indicating that the MTL-dependent, implicit memory underlying contextual cueing is also sleep-dependent. (3) By contrast, sleep and wake groups displayed equivalent improvements in explicit recognition memory in the second session. Overall, the current study shows that sleep affects MTL-dependent as well as MTL-independent memory, but it affects different, albeit simultaneously acquired, forms of MTL-dependent memory differentially.

  17. From Foreground to Background: How Task-Neutral Context Influences Contextual Cueing of Visual Search

    Science.gov (United States)

    Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J.; Shi, Zhuanghua

    2016-01-01

    Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor ‘L’s and a target ‘T’, was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search. PMID:27375530

  18. Object integration requires attention: visual search for Kanizsa figures in parietal extinction

    OpenAIRE

    Gögler, N.; Finke, K.; Keller, I.; Müller, Hermann J.; Conci, M.

    2016-01-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective att...

  19. Age-Related Changes in Selective Attention and Perceptual Load During Visual Search

    OpenAIRE

    Madden, David J.; Langley, Linda K.

    2003-01-01

    Three visual search experiments were conducted to test the hypothesis that age differences in selective attention vary as a function of perceptual load (E. A. Maylor & N. Lavie, 1998). Under resource-limited conditions (Experiments 1 and 2), the distraction from irrelevant display items generally decreased as display size (perceptual load) increased. This perceptual load effect was similar for younger and older adults, contrary to the findings of Maylor and Lavie. Distraction at low perceptua...

  20. I can see what you are saying: Auditory labels reduce visual search times.

    Science.gov (United States)

    Cho, Kit W

    2016-10-01

    The present study explored the self-directed-speech effect, the finding that relative to silent reading of a label (e.g., DOG), saying it aloud reduces visual search reaction times (RTs) for locating a target picture among distractors. Experiment 1 examined whether this effect is due to a confound in the differences in the number of cues in self-directed speech (two) vs. silent reading (one) and tested whether self-articulation is required for the effect. The results showed that self-articulation is not required and that merely hearing the auditory label reduces visual search RTs relative to silent reading. This finding also rules out the number of cues confound. Experiment 2 examined whether hearing an auditory label activates more prototypical features of the label's referent and whether the auditory-label benefit is moderated by the target's imagery concordance (the degree to which the target picture matches the mental picture that is activated by a written label for the target). When the target imagery concordance was high, RTs following the presentation of a high prototypicality picture or auditory cue were comparable and shorter than RTs following a visual label or low prototypicality picture cue. However, when the target imagery concordance was low, RTs following an auditory cue were shorter than the comparable RTs following the picture cues and visual-label cue. The results suggest that an auditory label activates both prototypical and atypical features of a concept and can facilitate visual search RTs even when compared to picture primes. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Dynamic spatial coding within the dorsal frontoparietal network during a visual search task.

    Directory of Open Access Journals (Sweden)

    Wieland H Sommer

    To what extent are the left and right visual hemifields spatially coded in the dorsal frontoparietal attention network? In many experiments with neglect patients, the left hemisphere shows a contralateral hemifield preference, whereas the right hemisphere represents both hemifields. This pattern of spatial coding is often used to explain the right-hemispheric dominance of lesions causing hemispatial neglect. However, pathophysiological mechanisms of hemispatial neglect are controversial because recent experiments on healthy subjects produced conflicting results regarding the spatial coding of visual hemifields. We used an fMRI paradigm that allowed us to distinguish two attentional subprocesses during a visual search task. Either within the left or right hemifield, subjects first attended to stationary locations (spatial orienting) and then shifted their attentional focus to search for a target line. Dynamic changes in spatial coding of the left and right hemifields were observed within subregions of the dorsal frontoparietal network: During stationary spatial orienting, we found the well-known spatial pattern described above, with a bilateral hemifield representation in the right hemisphere and a contralateral preference in the left hemisphere. However, during search, the right hemisphere had a contralateral preference and the left hemisphere equally represented both hemifields. This finding leads to novel perspectives regarding models of visuospatial attention and hemispatial neglect.

  2. Expectation violations in sensorimotor sequences: shifting from LTM-based attentional selection to visual search.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2015-03-01

    Long-term memory (LTM) delivers important control signals for attentional selection. LTM expectations have an important role in guiding the task-driven sequence of covert attention and gaze shifts, especially in well-practiced multistep sensorimotor actions. What happens when LTM expectations are disconfirmed? Does a sensory-based visual-search mode of attentional selection replace the LTM-based mode? What happens when prior LTM expectations become valid again? We investigated these questions in a computerized version of the number-connection test. Participants clicked on spatially distributed numbered shapes in ascending order while gaze was recorded. Sixty trials were performed with a constant spatial arrangement. In 20 consecutive trials, either numbers, shapes, both, or no features switched position. In 20 reversion trials, participants worked on the original arrangement. Only the sequence-affecting number switches elicited slower clicking, visual search-like scanning, and lower eye-hand synchrony. The effects were neither limited to the exchanged numbers nor to the corresponding actions. Thus, expectation violations in a well-learned sensorimotor sequence cause a regression from LTM-based attentional selection to visual search beyond deviant-related actions and locations. Effects lasted for several trials and reappeared during reversion. © 2015 New York Academy of Sciences.

  3. Fractal analysis of visual search activity for mass detection during mammographic screening.

    Science.gov (United States)

    Alamudun, Folami; Yoon, Hong-Jun; Hudson, Kathleen B; Morin-Ducote, Garnetta; Hammond, Tracy; Tourassi, Georgia D

    2017-03-01

    The objective of this study was to assess the complexity of human visual search activity during mammographic screening using fractal analysis and to investigate its relationship with case and reader characteristics. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) from 10 readers (three board certified radiologists and seven Radiology residents), formed the corpus for this study. The fractal dimension of the readers' visual scanning pattern was computed with the Minkowski-Bouligand box-counting method and used as a measure of gaze complexity. Individual factor and group-based interaction ANOVA analysis was performed to study the association between fractal dimension, case pathology, breast density, and reader experience level. The consistency of the observed trends depending on gaze data representation was also examined. Case pathology, breast density, reader experience level, and individual reader differences are all independent predictors of the complexity of visual scanning pattern when screening for breast cancer. No higher order effects were found to be significant. Fractal characterization of visual search behavior during mammographic screening is dependent on case properties and image reader characteristics. © 2017 American Association of Physicists in Medicine.
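
    The Minkowski-Bouligand (box-counting) dimension of a scanpath is estimated by overlaying grids of decreasing cell size, counting the cells that contain at least one gaze sample, and taking the slope of log(count) against log(grid size). The sketch below illustrates the computation on simulated gaze coordinates; the grid scales and data are assumptions for the example, not the study's processing pipeline.

        import numpy as np

        def box_counting_dimension(points, grid_sizes=(2, 4, 8, 16, 32, 64)):
            """Estimate the Minkowski-Bouligand dimension of a 2-D point set."""
            pts = (points - points.min(axis=0)) / np.ptp(points, axis=0)
            counts = []
            for s in grid_sizes:
                # Assign each point to a cell of an s x s grid and count occupied cells.
                cells = np.floor(pts * s).clip(max=s - 1).astype(int)
                counts.append(len(set(map(tuple, cells))))
            slope, _ = np.polyfit(np.log(grid_sizes), np.log(counts), 1)
            return slope

        # Simulated gaze samples; a space-filling scan approaches dimension 2,
        # whereas a sparse, repetitive scanpath yields a lower value.
        rng = np.random.default_rng(1)
        gaze_xy = rng.random((2000, 2))
        print(box_counting_dimension(gaze_xy))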

  4. Visual working memory supports the inhibition of previously processed information: evidence from preview search.

    Science.gov (United States)

    Al-Aidroos, Naseem; Emrich, Stephen M; Ferber, Susanne; Pratt, Jay

    2012-06-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search. We evaluated this proposal by testing three predictions. First, Experiments 1 and 2 demonstrate that preview inhibition is more effective when the number of previewed distractors is below VWM capacity than above; an effect that can only be observed at small preview set sizes (Experiment 2A) and when observers are allowed to move their eyes freely (Experiment 2B). Second, Experiment 3 shows that, when quantified as the number of inhibited distractors, the magnitude of the preview effect is stable across different search difficulties. Third, Experiment 4 demonstrates that individual differences in preview inhibition are correlated with individual differences in VWM capacity. These findings provide converging evidence that VWM supports the inhibition of previewed distractors. More generally, these findings demonstrate how VWM contributes to the efficiency of human visual information processing--VWM prioritizes new information by inhibiting old information from being reselected for attention.

  5. Modeling the effect of selection history on pop-out visual search.

    Directory of Open Access Journals (Sweden)

    Yuan-Chi Tseng

    While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals, and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision-making task.
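
    In the diffusion framework used here, a history-driven bias is most naturally expressed as a shift of the starting point of evidence accumulation toward one response boundary. The following Euler-scheme simulation of a two-boundary diffusion process is a minimal illustration with assumed parameter values, not the fits reported in the study.

        import numpy as np

        def simulate_diffusion(drift, boundary, start_bias, non_decision=0.2,
                               dt=0.001, noise_sd=1.0, n_trials=500, seed=2):
            """Euler simulation of a two-boundary diffusion process. start_bias is
            the starting point as a fraction of the boundary separation
            (0.5 = unbiased). Returns choices (1 = upper boundary) and RTs in seconds."""
            rng = np.random.default_rng(seed)
            choices = np.empty(n_trials, dtype=int)
            rts = np.empty(n_trials)
            for trial in range(n_trials):
                x, t = start_bias * boundary, 0.0
                while 0.0 < x < boundary:
                    x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                choices[trial] = int(x >= boundary)
                rts[trial] = t + non_decision
            return choices, rts

        # A starting point biased toward the upper boundary (e.g., toward the colour
        # that was the target on recent trials) gives more and faster upper responses.
        for bias in (0.5, 0.65):
            choices, rts = simulate_diffusion(drift=1.0, boundary=1.5, start_bias=bias)
            print(bias, choices.mean().round(2), rts[choices == 1].mean().round(3))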

  6. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    Science.gov (United States)

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.
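
    Equal error rate (EER) and Rank-1 identification rate (Rank-1 IR) are the two performance measures used above. A minimal sketch of how each can be computed from similarity scores follows; the simulated score distributions are placeholders, not the eye-movement features from the datasets described.

        import numpy as np

        def equal_error_rate(genuine, impostor):
            """EER: error rate at the threshold where the false-acceptance rate
            (impostors accepted) equals the false-rejection rate (genuine rejected)."""
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            far = np.array([(impostor >= t).mean() for t in thresholds])
            frr = np.array([(genuine < t).mean() for t in thresholds])
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2.0

        def rank1_identification_rate(scores, true_ids):
            """scores[i, j] = similarity of probe i to enrolled identity j;
            Rank-1 IR is the fraction of probes whose best match is the true identity."""
            return (scores.argmax(axis=1) == true_ids).mean()

        rng = np.random.default_rng(3)
        genuine = rng.normal(1.0, 0.5, 500)     # same-person comparison scores
        impostor = rng.normal(0.0, 0.5, 500)    # different-person comparison scores
        print(equal_error_rate(genuine, impostor))

        scores = rng.normal(0.0, 1.0, (100, 20))
        true_ids = rng.integers(0, 20, 100)
        scores[np.arange(100), true_ids] += 2.0  # make the true identity score higher
        print(rank1_identification_rate(scores, true_ids))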

  7. Modeling visual search using three-parameter probability functions in a hierarchical Bayesian framework.

    Science.gov (United States)

    Lin, Yi-Shin; Heinke, Dietmar; Humphreys, Glyn W

    2015-04-01

    In this study, we applied Bayesian-based distributional analyses to examine the shapes of response time (RT) distributions in three visual search paradigms, which varied in task difficulty. In further analyses we investigated two common observations in visual search-the effects of display size and of variations in search efficiency across different task conditions-following a design that had been used in previous studies (Palmer, Horowitz, Torralba, & Wolfe, Journal of Experimental Psychology: Human Perception and Performance, 37, 58-71, 2011; Wolfe, Palmer, & Horowitz, Vision Research, 50, 1304-1311, 2010) in which parameters of the response distributions were measured. Our study showed that the distributional parameters in an experimental condition can be reliably estimated by moderate sample sizes when Monte Carlo simulation techniques are applied. More importantly, by analyzing trial RTs, we were able to extract paradigm-dependent shape changes in the RT distributions that could be accounted for by using the EZ2 diffusion model. The study showed that Bayesian-based RT distribution analyses can provide an important means to investigate the underlying cognitive processes in search, including stimulus grouping and the bottom-up guidance of attention.
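
    One widely used three-parameter description of RT distributions is the ex-Gaussian (mu, sigma, tau). The sketch below fits it by maximum likelihood to simulated data; this is only an illustration of distributional analysis and is not the hierarchical Bayesian procedure, nor necessarily the same three-parameter family, used in the study.

        import numpy as np
        from scipy.stats import exponnorm

        # Simulate RTs (in seconds) from an ex-Gaussian with parameters mu, sigma, tau.
        mu, sigma, tau = 0.45, 0.05, 0.15
        rng = np.random.default_rng(4)
        rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

        # scipy parameterises the ex-Gaussian as exponnorm(K, loc, scale) with
        # K = tau / sigma, loc = mu and scale = sigma, so tau is recovered as K * scale.
        K, loc, scale = exponnorm.fit(rts)
        print(loc, scale, K * scale)  # estimates of mu, sigma, tau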

  8. Visual search and emotion: how children with autism spectrum disorders scan emotional scenes.

    Science.gov (United States)

    Maccari, Lisa; Pasini, Augusto; Caroli, Emanuela; Rosa, Caterina; Marotta, Andrea; Martella, Diana; Fuentes, Luis J; Casagrande, Maria

    2014-11-01

    This study assessed visual search abilities, tested through the flicker task, in children diagnosed with autism spectrum disorders (ASDs). Twenty-two children diagnosed with ASD and 22 matched typically developing (TD) children were told to detect changes in objects of central interest or objects of marginal interest (MI) embedded in either emotion-laden (positive or negative) or neutral real-world pictures. The results showed that emotion-laden pictures equally interfered with performance of both ASD and TD children, slowing down reaction times compared with neutral pictures. Children with ASD were faster than TD children, particularly in detecting changes in MI objects, the most difficult condition. However, their performance was less accurate than performance of TD children just when the pictures were negative. These findings suggest that children with ASD have better visual search abilities than TD children only when the search is particularly difficult and requires strong serial search strategies. The emotional-social impairment that is usually considered as a typical feature of ASD seems to be limited to processing of negative emotional information.

  9. Evidence for negative feature guidance in visual search is explained by spatial recoding.

    Science.gov (United States)

    Beck, Valerie M; Hollingworth, Andrew

    2015-10-01

    Theories of attention and visual search explain how attention is guided toward objects with known target features. But can attention be directed away from objects with a feature known to be associated only with distractors? Most studies have found that the demand to maintain the to-be-avoided feature in visual working memory biases attention toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be configured to selectively avoid objects that match a cued distractor color, and they reported evidence that this type of negative cue generates search benefits. However, the colors of the search array items in Arita et al. (2012) were segregated by hemifield (e.g., blue items on the left, red on the right), which allowed for a strategy of translating the feature-cue information into a simple spatial template (e.g., avoid right, or attend left). In the present study, we replicated the negative cue benefit using the Arita et al. (2012) method (albeit within a subset of participants who reliably used the color cues to guide attention). Then, we eliminated the benefit by using search arrays that could not be grouped by hemifield. Our results suggest that feature-guided avoidance is implemented only indirectly, in this case by translating feature-cue information into a spatial template. (c) 2015 APA, all rights reserved.

  10. Orientation is different: Interaction between contour integration and feature contrasts in visual search.

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei; Zhaoping, Li

    2013-09-10

    Salient items usually capture attention and are beneficial to visual search. Jingling and Tseng (2013), nevertheless, have discovered that a salient collinear column can impair local visual search. The display used in that study had 21 rows and 27 columns of bars, all uniformly horizontal (or vertical) except for one column of bars orthogonally oriented to all other bars, making this unique column of collinear (or noncollinear) bars salient in the display. Observers discriminated an oblique target bar superimposed on one of the bars either in the salient column or in the background. Interestingly, responses were slower for a target in a salient collinear column than in the background. This opens a theoretical question of how contour integration interacts with salience computation, which is addressed here by an examination of how salience modulated the search impairment from the collinear column. We show that the collinear column needs to have a high orientation contrast with its neighbors to exert search interference. A collinear column of high contrast in color or luminance did not produce the same impairment. Our results show that orientation-defined salience interacted with collinear contour differently from other feature dimensions, which is consistent with the neuronal properties in V1.

  11. Visual search for conjunctions of physical and numerical size shows that they are processed independently.

    Science.gov (United States)

    Sobel, Kenith V; Puri, Amrita M; Faulkenberry, Thomas J; Dague, Taylor D

    2017-03-01

    The size congruity effect refers to the interaction between numerical magnitude and physical digit size in a symbolic comparison task. Though this effect is well established in the typical 2-item scenario, the mechanisms at the root of the interference remain unclear. Two competing explanations have emerged in the literature: an early interaction model and a late interaction model. In the present study, we used visual conjunction search to test competing predictions from these 2 models. Participants searched for targets that were defined by a conjunction of physical and numerical size. Some distractors shared the target's physical size, and the remaining distractors shared the target's numerical size. We held the total number of search items fixed and manipulated the ratio of the 2 distractor set sizes. The results from 3 experiments converge on the conclusion that numerical magnitude is not a guiding feature for visual search, and that physical and numerical magnitude are processed independently, which supports a late interaction model of the size congruity effect. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Contextual cueing of pop-out visual search: when context guides the deployment of attention.

    Science.gov (United States)

    Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J

    2010-05-01

    Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection in such tasks. The results showed that singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature contrast signals at the overall-salience computation stage.

  13. Visual search and spatial attention: ERPs in focussed and divided attention conditions.

    Science.gov (United States)

    Wijers, A A; Okita, T; Mulder, G; Mulder, L J; Lorist, M M; Poiesz, R; Scheffers, M K

    1987-08-01

    ERPs and performance were measured in divided and focussed attention visual search tasks. In focussed attention tasks, to-be-attended and to-be-ignored letters were presented simultaneously. We varied display load, mapping conditions and display size. RT, P3b-latency and negativity in the ERP associated with controlled search all increased with display load. Each of these measures showed selectivity of controlled search, in that they decreased with focussing of attention. An occipital N230, on the other hand, was not sensitive to focussing of attention, but was primarily affected by display load. ERPs to both attended and unattended targets in focussed attention conditions showed an N2 compared to nontargets, suggesting that both automatic and controlled letter classifications are possible. These effects were not affected by display size. Consistent mapping resulted in shorter RT and P3b-latency in divided attention conditions, compared to varied mapping conditions, but had no effect in focussed attention conditions.

  14. The footprints of visual attention during search with 100% valid and 100% invalid cues.

    Science.gov (United States)

    Eckstein, Miguel P; Pham, Binh T; Shimozaki, Steven S

    2004-06-01

    Human performance during visual search typically improves when spatial cues indicate the possible target locations. In many instances, the performance improvement is quantitatively predicted by a Bayesian or quasi-Bayesian observer in which visual attention simply selects the information at the cued locations without changing the quality of processing or sensitivity and ignores the information at the uncued locations. Aside from the generally good agreement between the effect of the cue on model and human performance, there has been little independent confirmation that humans are effectively selecting the relevant information. In this study, we used the classification image technique to assess the effectiveness of spatial cues in the attentional selection of relevant locations and suppression of irrelevant locations indicated by spatial cues. Observers searched for a bright target among dimmer distractors that might appear (with 50% probability) in one of eight locations in visual white noise. The possible target location was indicated using a 100% valid box cue or seven 100% invalid box cues in which the only potential target location was uncued. For both conditions, we found statistically significant perceptual templates shaped as differences of Gaussians at the relevant locations with no perceptual templates at the irrelevant locations. We did not find statistically significant differences between the shapes of the inferred perceptual templates for the 100% valid and 100% invalid cue conditions. The results confirm the idea that during search visual attention allows the observer to effectively select relevant information and ignore irrelevant information. The results for the 100% invalid cues condition suggest that the selection process is not drawn automatically to the cue but can be under the observers' voluntary control.
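
    The classification image technique referred to above correlates the externally added noise on each trial with the observer's response. A schematic sketch is given below, assuming yes/no target-present responses and per-trial white-noise fields; the array sizes and the simulated responses are purely illustrative, and in the study separate images were computed for cued and uncued locations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, height, width = 2000, 32, 32

# External white-noise fields added to the display on each trial (illustrative).
noise = rng.normal(0.0, 1.0, size=(n_trials, height, width))
# Simulated "target present" responses; in an experiment these come from the observer.
said_present = rng.integers(0, 2, size=n_trials).astype(bool)

# Classification image: mean noise on "present" responses minus mean noise on
# "absent" responses. Structure in this difference image estimates the perceptual
# template the observer applied at that location.
classification_image = noise[said_present].mean(axis=0) - noise[~said_present].mean(axis=0)
print(classification_image.shape)  # (32, 32)
```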

  15. Visual search in Alzheimer's disease: a deficiency in processing conjunctions of features.

    Science.gov (United States)

    Tales, A; Butler, S R; Fossey, J; Gilchrist, I D; Jones, R W; Troscianko, T

    2002-01-01

    Human vision often needs to encode multiple characteristics of many elements of the visual field, for example their lightness and orientation. The paradigm of visual search allows a quantitative assessment of the function of the underlying mechanisms. It measures the ability to detect a target element among a set of distractor elements. We asked whether Alzheimer's disease (AD) patients are particularly affected in one type of search, where the target is defined by a conjunction of features (orientation and lightness) and where performance depends on some shifting of attention. Two non-conjunction control conditions were employed. The first was a pre-attentive, single-feature, "pop-out" task, detecting a vertical target among horizontal distractors. The second was a single-feature, partly attentive task in which the target element was slightly larger than the distractors (a "size" task). This was chosen to have a similar level of attentional load as the conjunction task (for the control group), but lacked the conjunction of two features. In an experiment, 15 AD patients were compared to age-matched controls. The results suggested that AD patients have a particular impairment in the conjunction task but not in the single-feature size or pre-attentive tasks. This may imply that AD particularly affects those mechanisms which compare across more than one feature type while sparing other systems, and is therefore not simply an 'attention-related' impairment. Additionally, these findings show a double dissociation with previous data on visual search in Parkinson's disease (PD), suggesting a different effect of these diseases on the visual pathway.

  16. Category-based guidance of spatial attention during visual search for feature conjunctions.

    Science.gov (United States)

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search.

  17. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    Science.gov (United States)

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been conducted without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum number of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.
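
    The pop-out versus serial distinction reported here is conventionally quantified by the slope of search time against the number of distractors. A small sketch of that slope estimate follows; the reaction-time values are invented for illustration and are not data from the study.

```python
import numpy as np

def search_slope(set_sizes, mean_rts_ms):
    """Least-squares slope of mean reaction time against set size (ms per item)."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope

set_sizes = [5, 10, 20, 30]
pop_out_rts = [520, 525, 530, 528]   # roughly flat slope: parallel, "pop-out" search
serial_rts = [560, 700, 980, 1260]   # slope grows with set size: serial search

print("pop-out slope (ms/item):", round(search_slope(set_sizes, pop_out_rts), 1))
print("serial slope (ms/item): ", round(search_slope(set_sizes, serial_rts), 1))
```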

  18. The guidance of visual search by shape features and shape configurations.

    Science.gov (United States)

    McCants, Cody W; Berggren, Nick; Eimer, Martin

    2018-03-01

    Representations of target features (attentional templates) guide attentional object selection during visual search. In many search tasks, target objects are defined not by a single feature but by the spatial configuration of their component shapes. We used electrophysiological markers of attentional selection processes to determine whether the guidance of shape configuration search is entirely part-based or sensitive to the spatial relationship between shape features. Participants searched for targets defined by the spatial arrangement of two shape components (e.g., hourglass above circle). N2pc components were triggered not only by targets but also by partially matching distractors with one target shape (e.g., hourglass above hexagon) and by distractors that contained both target shapes in the reverse arrangement (e.g., circle above hourglass), in line with part-based attentional control. Target N2pc components were delayed when a reverse distractor was present on the opposite side of the same display, suggesting that early shape-specific attentional guidance processes could not distinguish between targets and reverse distractors. The control of attention then became sensitive to spatial configuration, which resulted in a stronger attentional bias for target objects relative to reverse and partially matching distractors. Results demonstrate that search for target objects defined by the spatial arrangement of their component shapes is initially controlled in a feature-based fashion but can later be guided by templates for spatial configurations.

  19. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search.

    Science.gov (United States)

    Wahn, Basil; Schwandt, Jessika; Krüger, Matti; Crafa, Daina; Nunnendorf, Vanessa; König, Peter

    2016-06-01

    In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about actions of others in joint tasks. Findings are either applicable to circumstances in which little or no visual information is available or when the visual modality is already taxed with a demanding task.

  20. Visual search behaviour in skeletal radiographs: a cross-speciality study

    International Nuclear Information System (INIS)

    Leong, J.J.H.; Nicolaou, M.; Emery, R.J.; Darzi, A.W.; Yang, G.-Z.

    2007-01-01

    Aim: To determine whether experience improves the consistency of visual search behaviour in fracture identification in plain radiographs, and the effect of specialization. Material and methods: Twenty-five observers consisting of consultant radiologists, consultant orthopaedic surgeons, orthopaedic specialist registrars, orthopaedic senior house officers, and accident and emergency senior house officers examined 33 skeletal radiographs (shoulder, hand, and knee). Eye movement data were collected using a Tobii 1750 eye tracker with levels of diagnostic confidence collected simultaneously. Kullback-Leibler (KL) divergence and Gaussian mixture model fitting of fixation distance-to-fracture were used to calculate the consistency and the relationship between discovery and reflective visual search phases among different observer groups. Results: Total time spent studying the radiograph was not significantly different between the groups. However, the expert groups had a higher number of true positives (p < 0.001) with less dwell time on the fracture site (p < 0.001) and smaller KL distance (r = 0.062, p < 0.001) between trials. The Gaussian mixture model revealed smaller mean squared error in the expert groups in hand radiographs (r = 0.162, p = 0.07); however, the reverse was true in shoulder radiographs (r = -0.287, p < 0.001). The relative duration of the reflective phase decreased as the confidence level increased (r = 0.266, p = 0.074). Conclusions: Expert search behaviour exhibited higher accuracy and consistency whilst using less time fixating on fracture sites. This strategy conforms to the discovery and reflective phases of the global-focal model, where the reflective search may be implicated in the cross-referencing and conspicuity of the target, as well as the level of decision-making process involved. The effect of specialization appears to change the search strategy more than the effect of the length of training.
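
    The consistency measure used here compares fixation distributions across trials with the Kullback-Leibler divergence. A minimal sketch follows, assuming fixations have already been binned into 2-D spatial histograms; the grid size, the smoothing constant and the simulated counts are arbitrary choices, not parameters from the study.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """KL divergence D(P || Q) between two fixation histograms.
    Both maps are flattened, padded with eps and renormalised to probabilities."""
    p = p_counts.astype(float).ravel() + eps
    q = q_counts.astype(float).ravel() + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical 8 x 8 fixation-count maps from two viewings of the same radiograph;
# smaller KL distances indicate more consistent scanning between trials.
rng = np.random.default_rng(1)
trial_a = rng.poisson(2.0, size=(8, 8))
trial_b = rng.poisson(2.0, size=(8, 8))
print("KL distance between trials:", round(kl_divergence(trial_a, trial_b), 3))
```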

  1. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  2. Neural Correlates of Changes in a Visual Search Task due to Cognitive Training in Seniors

    Directory of Open Access Journals (Sweden)

    Nele Wild-Wall

    2012-01-01

    This study aimed to elucidate the underlying neural sources of near transfer after a multidomain cognitive training in older participants in a visual search task. Participants were randomly assigned to a social control, a no-contact control and a training group, receiving a 4-month paper-pencil and PC-based, trainer-guided cognitive intervention. All participants were tested in a before and after session with a conjunction visual search task. Performance and event-related potentials (ERPs) suggest that the cognitive training improved feature processing of the stimuli, which was expressed in an increased rate of target detection compared to the control groups. This was paralleled by enhanced amplitudes of the frontal P2 in the ERP and by higher activation in lingual and parahippocampal brain areas which are discussed to support visual feature processing. Enhanced N1 and N2 potentials in the ERP for nontarget stimuli after cognitive training additionally suggest improved attention and subsequent processing of arrays which were not immediately recognized as targets. Possible test repetition effects were confined to processes of stimulus categorisation as suggested by the P3b potential. The results show neurocognitive plasticity in aging after a broad cognitive training and allow pinpointing the functional loci of effects induced by cognitive training.

  3. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  4. Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents.

    Science.gov (United States)

    Manjaly, Zina M; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E; Brieber, Sarah; Marshall, John C; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R

    2007-03-01

    Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is "weak central coherence", i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may result from a relative amplification of early perceptual processes which boosts processing of local stimulus properties but does not affect processing of global context. This study used functional magnetic resonance imaging (fMRI) in 12 autistic adolescents (9 Asperger and 3 high-functioning autistic patients) and 12 matched controls to help distinguish, on neurophysiological grounds, between these two accounts of EFT performance in autistic patients. Behaviourally, we found autistic individuals to be unimpaired during the EFT while they were significantly worse at performing a closely matched control task with minimal local search requirements. The fMRI results showed that activations specific for the local search aspects of the EFT were left-lateralised in parietal and premotor areas for the control group (as previously demonstrated for adults), whereas for the patients these activations were found in right primary visual cortex and bilateral extrastriate areas. These results suggest that enhanced local processing in early visual areas, as opposed to impaired processing of global context, is characteristic for performance of the EFT by autistic patients.

  5. Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.

    Directory of Open Access Journals (Sweden)

    Philippe Chassy

    Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on the level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: with artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli.

  6. More target features in visual working memory leads to poorer search guidance: Evidence from contralateral delay activity

    OpenAIRE

    Schmidt, Joseph; MacNamara, Annmarie; Proudfit, Greg Hajcak; Zelinsky, Gregory J.

    2014-01-01

    The visual-search literature has assumed that the top-down target representation used to guide search resides in visual working memory (VWM). We directly tested this assumption using contralateral delay activity (CDA) to estimate the VWM load imposed by the target representation. In Experiment 1, observers previewed four photorealistic objects and were cued to remember the two objects appearing to the left or right of central fixation; Experiment 2 was identical except that observers previewe...

  7. More insight into the interplay of response selection and visual attention in dual-tasks: masked visual search and response selection are performed in parallel.

    Science.gov (United States)

    Reimer, Christina B; Schubert, Torsten

    2017-09-15

    Both response selection and visual attention are limited in capacity. According to the central bottleneck model, the response selection processes of two tasks in a dual-task situation are performed sequentially. In conjunction search, visual attention is required to select the items and to bind their features (e.g., color and form), which results in a serial search process. Search time increases as items are added to the search display (i.e., set size effect). When the search display is masked, visual attention deployment is restricted to a brief period of time and target detection decreases as a function of set size. Here, we investigated whether response selection and visual attention (i.e., feature binding) rely on a common or on distinct capacity limitations. In four dual-task experiments, participants completed an auditory Task 1 and a conjunction search Task 2 that were presented with an experimentally modulated temporal interval between them (Stimulus Onset Asynchrony, SOA). In Experiment 1, Task 1 was a two-choice discrimination task and the conjunction search display was not masked. In Experiment 2, the response selection difficulty in Task 1 was increased to a four-choice discrimination and the search task was the same as in Experiment 1. We applied the locus-of-slack method in both experiments to analyze conjunction search time, that is, we compared the set size effects across SOAs. Similar set size effects across SOAs (i.e., additive effects of SOA and set size) would indicate sequential processing of response selection and visual attention. However, a significantly smaller set size effect at short SOA compared to long SOA (i.e., underadditive interaction of SOA and set size) would indicate parallel processing of response selection and visual attention. In both experiments, we found underadditive interactions of SOA and set size. In Experiments 3 and 4, the conjunction search display in Task 2 was masked. Task 1 was the same as in Experiments 1 and 2
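
    The locus-of-slack logic mentioned above boils down to comparing the Task 2 set-size effect at short versus long SOA: an underadditive interaction (a smaller set-size effect at the short SOA) indicates that search overlapped with Task 1 response selection. A rough sketch of that comparison is given below; the SOA levels and reaction times are invented for illustration.

```python
import numpy as np

def set_size_effect(set_sizes, mean_rts_ms):
    """Slope of Task 2 search time against display set size (ms per item)."""
    slope, _intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope

set_sizes = [4, 8, 12]
rt_short_soa = [950, 1010, 1070]   # part of the set-size effect absorbed into the slack
rt_long_soa = [700, 800, 900]      # full set-size effect expressed

effect_short = set_size_effect(set_sizes, rt_short_soa)
effect_long = set_size_effect(set_sizes, rt_long_soa)

# A smaller set-size effect at the short SOA (underadditivity) suggests that
# conjunction search proceeded in parallel with Task 1 response selection.
print("set-size effect, short SOA:", round(effect_short, 1), "ms/item")
print("set-size effect, long SOA: ", round(effect_long, 1), "ms/item")
print("underadditive:", effect_short < effect_long)
```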

  8. Visual search and attention in five-year-old very preterm/very low birth weight children.

    Science.gov (United States)

    Geldof, Christiaan J A; de Kieviet, Jorrit F; Dik, Marjolein; Kok, Joke H; van Wassenaer-Leemhuis, Aleid G; Oosterlaan, Jaap

    2013-12-01

    This study aimed to establish visual search performance and attention functioning in very preterm/very low birth weight (VP/VLBW) children using novel and well established measures, and to study their contribution to intellectual functioning. Visual search and attention network efficiency were assessed in 108 VP/VLBW children and 72 age-matched term controls at 5.5 years corrected age. Visual search performance was investigated with a newly developed paradigm manipulating stimulus density and stimulus organization. Attention functioning was studied using the Attention Network Test (ANT). Intellectual functioning was measured by a short form of the Wechsler Preschool and Primary Scale of Intelligence. Data were analyzed using ANOVAs and multiple regression analyses. Visual search was less efficient in VP/VLBW children as compared to term controls, as indicated by increased search time (0.31 SD, p = .04) and increased error rate (0.36 SD, p = .02). In addition, VP/VLBW children demonstrated poorer executive attention as indicated by lower accuracy for the executive attention measure of the ANT (0.61 SD, p attention measures (0.13 SD, p = .42). Visual search time and error rate, and executive attention, collectively, accounted for 14% explained variance in full scale IQ (R(2) = .14, p attention. Visual attention dysfunctions contributed to intelligence, suggesting the opportunity to improve intellectual functioning by using intervention programs that may enhance attention capacities.

  9. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    Science.gov (United States)

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration.

  10. Hand movement deviations in a visual search task with cross modal cuing

    Directory of Open Access Journals (Sweden)

    Hürol Aslan

    2007-01-01

    The purpose of this study is to demonstrate the cross-modal effects of an auditory organization on a visual search task and to investigate the influence of the level of detail in instructions describing or hinting at the associations between auditory stimuli and the possible locations of a visual target. In addition to measuring the participants’ reaction times, we paid special attention to tracking the hand movements toward the target. According to the results, the auditory stimuli unassociated with the target locations slightly, but significantly, increased the deviation of the hand movement from the path leading to the target location. The increase in the deviation depended on the degree of association between auditory stimuli and target locations, albeit not on the level of detail in the instructions about the task.

  11. The Impact of Concurrent Noise on Visual Search in Children With ADHD.

    Science.gov (United States)

    Allen, Rosemary; Pammer, Kristen

    2015-09-22

    The purpose of this study was to investigate the impact of a concurrent "white noise" stimulus on selective attention in children with ADHD. Participants were 33 children aged 7 to 14 years, who had been previously diagnosed with ADHD. All children completed a computer-based conjunction search task under two noise conditions: a classroom noise condition and a classroom noise + white noise condition. The white noise stimulus consisted of the sound of rain, administered using an iPhone application called Sleep Machine. There were no overall differences between conditions for target detection accuracy, mean reaction time (RT), or reaction time variability (SD). The impact of white noise on visual search depended on children's medication status. White noise may improve task engagement for non-medicated children. White noise may be beneficial for task performance when used as an adjunct to medication.

  12. Attentional Capture by Salient Distractors during Visual Search Is Determined by Temporal Task Demands

    DEFF Research Database (Denmark)

    Kiss, Monika; Grubert, Anna; Petersen, Anders

    2012-01-01

    The question whether attentional capture by salient but taskirrelevant visual stimuli is triggered in a bottom–up fashion or depends on top–down task settings is still unresolved. Strong support for bottom–up capture was obtained in the additional singleton task, in which search arrays were visible...... until response onset. Equally strong evidence for top–down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands on salience-driven attentional capture, we measured ERP indicators...... component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time...

  13. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation.

    Science.gov (United States)

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-05-05

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation.

  14. Detection of emotional faces: salient physical features guide effective visual search.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features, especially the smiling mouth, is responsible for facilitated initial orienting, which thus shortens detection.

  15. The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation

    Science.gov (United States)

    Raffone, Antonino; Srinivasan, Narayanan; van Leeuwen, Cees

    2014-01-01

    Despite the acknowledged relationship between consciousness and attention, theories of the two have mostly been developed separately. Moreover, these theories have independently attempted to explain phenomena in which both are likely to interact, such as the attentional blink (AB) and working memory (WM) consolidation. Here, we make an effort to bridge the gap between, on the one hand, a theory of consciousness based on the notion of global workspace (GW) and, on the other, a synthesis of theories of visual attention. We offer a theory of attention and consciousness (TAC) that provides a unified neurocognitive account of several phenomena associated with visual search, AB and WM consolidation. TAC assumes multiple processing stages between early visual representation and conscious access, and extends the dynamics of the global neuronal workspace model to a visual attentional workspace (VAW). The VAW is controlled by executive routers, higher-order representations of executive operations in the GW, without the need for explicit saliency or priority maps. TAC leads to newly proposed mechanisms for illusory conjunctions, AB, inattentional blindness and WM capacity, and suggests neural correlates of phenomenal consciousness. Finally, the theory reconciles the all-or-none and graded perspectives on conscious representation. PMID:24639586

  16. Performance of brain-damaged, schizophrenic, and normal subjects on a visual searching task.

    Science.gov (United States)

    Goldstein, G; Kyc, F

    1978-06-01

    Goldstein, Rennick, Welch, and Shelly (1973) reported on a visual searching task that generated 94.1% correct classifications when comparing brain-damaged and normal subjects, and 79.4% correct classifications when comparing brain-damaged and psychiatric patients. In the present study, representing a partial cross-validation with some modification of the test procedure, comparisons were made between brain-damaged and schizophrenic, and brain-damaged and normal subjects. There were 92.5% correct classifications for the brain-damaged vs normal comparison, and 82.5% correct classifications for the brain-damaged vs schizophrenic comparison.

  17. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.

  18. Hippocampal gamma-band Synchrony and pupillary responses index memory during visual search.

    Science.gov (United States)

    Montefusco-Siegmund, Rodrigo; Leonard, Timothy K; Hoffman, Kari L

    2017-04-01

    Memory for scenes is supported by the hippocampus, among other interconnected structures, but the neural mechanisms related to this process are not well understood. To assess the role of the hippocampus in memory-guided scene search, we recorded local field potentials and multiunit activity from the hippocampus of macaques as they performed goal-directed search tasks using natural scenes. We additionally measured pupil size during scene presentation, which in humans is modulated by recognition memory. We found that both pupil dilation and search efficiency accompanied scene repetition, thereby indicating memory for scenes. Neural correlates included a brief increase in hippocampal multiunit activity and a sustained synchronization of unit activity to gamma band oscillations (50-70 Hz). The repetition effects on hippocampal gamma synchronization occurred when pupils were most dilated, suggesting an interaction between aroused, attentive processing and hippocampal correlates of recognition memory. These results suggest that the hippocampus may support memory-guided visual search through enhanced local gamma synchrony.
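
    Synchronization of unit activity to gamma-band oscillations, as described above, is commonly quantified by the phase-locking of spikes to the band-passed local field potential. The sketch below shows one such calculation, assuming a continuous LFP trace and a list of spike times; the sampling rate, filter order, band limits and simulated data are all assumptions made for illustration, not the study's actual analysis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0  # LFP sampling rate in Hz (assumed)

def gamma_phase_locking(lfp, spike_times_s, low=50.0, high=70.0):
    """Mean resultant length of gamma-band LFP phases at spike times.
    Values near 1 indicate strong spike-field synchronization; near 0, none."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    idx = np.clip((np.asarray(spike_times_s) * fs).astype(int), 0, len(lfp) - 1)
    return float(np.abs(np.mean(np.exp(1j * phase[idx]))))

# Simulated 10 s LFP containing a 60 Hz component plus noise, with spikes biased
# toward one phase of the oscillation (purely illustrative).
rng = np.random.default_rng(2)
t = np.arange(0, 10, 1 / fs)
lfp = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.normal(size=t.size)
spike_times = t[(np.sin(2 * np.pi * 60 * t) < -0.9) & (rng.random(t.size) < 0.02)]

print("gamma phase locking:", round(gamma_phase_locking(lfp, spike_times), 2))
```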

  19. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    Science.gov (United States)

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  20. Multimodal neuroimaging evidence linking memory and attention systems during visual search cued by context.

    Science.gov (United States)

    Kasper, Ryan W; Grafton, Scott T; Eckstein, Miguel P; Giesbrecht, Barry

    2015-03-01

    Visual search can be facilitated by the learning of spatial configurations that predict the location of a target among distractors. Neuropsychological and functional magnetic resonance imaging (fMRI) evidence implicates the medial temporal lobe (MTL) memory system in this contextual cueing effect, and electroencephalography (EEG) studies have identified the involvement of visual cortical regions related to attention. This work investigated two questions: (1) how memory and attention systems are related in contextual cueing; and (2) how these systems are involved in both short- and long-term contextual learning. In one session, EEG and fMRI data were acquired simultaneously in a contextual cueing task. In a second session conducted 1 week later, EEG data were recorded in isolation. The fMRI results revealed MTL contextual modulations that were correlated with short- and long-term behavioral context enhancements and attention-related effects measured with EEG. An fMRI-seeded EEG source analysis revealed that the MTL contributed the most variance to the variability in the attention enhancements measured with EEG. These results support the notion that memory and attention systems interact to facilitate search when spatial context is implicitly learned.

  1. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    Science.gov (United States)

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10 year olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.
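
    The contextual cueing effect itself is usually scored as the reaction-time advantage for repeated over newly generated configurations, tracked across learning epochs. A small sketch of that calculation follows; the epoch structure and reaction-time values are assumed for illustration only.

```python
import numpy as np

# Hypothetical mean correct RTs (ms) per learning epoch for repeated and novel displays.
epochs = [1, 2, 3, 4, 5]
rt_repeated = np.array([980, 930, 890, 860, 845])
rt_novel = np.array([985, 960, 950, 945, 940])

# Contextual cueing effect: novel minus repeated RT. Growth of this difference
# across epochs indicates implicit learning of the repeated spatial configurations.
cueing_effect = rt_novel - rt_repeated
for epoch, effect in zip(epochs, cueing_effect):
    print(f"epoch {epoch}: contextual cueing effect = {effect} ms")
```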

  2. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    Science.gov (United States)

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC).

  3. Independent and additive repetition priming of motion direction and color in visual search.

    Science.gov (United States)

    Kristjánsson, Arni

    2009-03-01

    Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target defining dimension was color, a large effect of color repetition was seen as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: this time, the effect of motion direction repetition was larger than that of color repetition. Finally, when neither was task relevant, and the target defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion; the two features show independent and additive priming effects, most likely reflecting that they are processed at separate sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.

  4. Illusory conjunctions and perceptual grouping in a visual search task in schizophrenia.

    Science.gov (United States)

    Carr, V J; Dewis, S A; Lewin, T J

    1998-07-27

    This report describes part of a series of experiments, conducted within the framework of feature integration theory, to determine whether patients with schizophrenia show deficits in preattentive processing. Thirty subjects with a DSM-III-R diagnosis of schizophrenia and 30 age-, gender-, and education-matched normal control subjects completed two computerized experimental tasks, a visual search task assessing the frequency of illusory conjunctions (i.e. false perceptions) under conditions of divided attention (Experiment 3) and a task which examined the effects of perceptual grouping on illusory conjunctions (Experiment 4). We also assessed current symptomatology and its relationship to task performance. Contrary to our hypotheses, schizophrenia subjects did not show higher rates of illusory conjunctions, and the influence of perceptual grouping on the frequency of illusory conjunctions was similar for schizophrenia and control subjects. Nonetheless, specific predictions from feature integration theory about the impact of different target types (Experiment 3) and perceptual groups (Experiment 4) on the likelihood of forming an illusory conjunction were strongly supported, thereby confirming the integrity of the experimental procedures. Overall, these studies revealed no firm evidence that schizophrenia is associated with a preattentive abnormality in visual search using stimuli that differ on the basis of physical characteristics.

  5. Prediction of shot success for basketball free throws: visual search strategy.

    Science.gov (United States)

    Uchida, Yusuke; Mizuguchi, Nobuaki; Honda, Masaaki; Kanosue, Kazuyuki

    2014-01-01

    In ball games, players have to pay close attention to visual information in order to predict the movements of both the opponents and the ball. Previous studies have indicated that players primarily utilise cues concerning the ball and opponents' body motion. The information acquired must be effective for observing players to select the subsequent action. The present study evaluated the effects of changes in the video replay speed on the spatial visual search strategy and ability to predict free throw success. We compared eye movements made while observing a basketball free throw by novices and experienced basketball players. Correct response rates were close to chance (50%) at all video speeds for the novices. The correct response rate of experienced players was significantly above chance (and significantly above that of the novices) at the normal speed, but was not different from chance at both slow and fast speeds. Experienced players gazed more on the lower part of the player's body when viewing a normal speed video than the novices. The players likely detected critical visual information to predict shot success by properly moving their gaze according to the shooter's movements. This pattern did not change when the video speed was decreased, but changed when it was increased. These findings suggest that temporal information is important for predicting action outcomes and that such outcomes are sensitive to video speed.

  6. SAQP pitch walk metrology using single target metrology

    Science.gov (United States)

    Fang, Fang; Herrera, Pedro; Kagalwala, Taher; Camp, Janay; Vaid, Alok; Pandev, Stilian; Zach, Franz

    2017-03-01

    Self-aligned quadruple patterning (SAQP) processes have found widespread acceptance in advanced technology nodes to drive device scaling beyond the resolution limitations of immersion scanners. Of the four spaces generated in this process from one lithography pattern, two tend to be equivalent, as they are derived from the first spacer deposition. The three independent spaces are commonly labelled as α, β and γ. α, β and γ are controlled by multiple process steps including the initial lithographic patterning process, the two mandrel and spacer etches as well as the two spacer depositions. Scatterometry has been the preferred metrology approach; however, it is restricted to repetitive arrays. In these arrays, independent measurements, in particular of alpha and gamma, are not possible due to degeneracy of the standard array targets. In this work we present a single target approach which lifts the degeneracies commonly encountered while using product relevant layout geometries. We will first describe the metrology approach, which includes the previously described SRM (signal response metrology) combined with reference data derived from CD SEM data. The performance of the methodology is shown in figures 1-3. In these figures the optically determined values for alpha, beta and gamma are compared to the CD SEM reference data. The variations are achieved using controlled process experiments varying mandrel CD and spacer deposition thicknesses.

  7. Tyrosine kinase inhibitors: Multi-targeted or single-targeted?

    Science.gov (United States)

    Broekman, Fleur; Giovannetti, Elisa; Peters, Godefridus J

    2011-02-10

    Since in most tumors multiple signaling pathways are involved, many of the inhibitors in clinical development are designed to affect a wide range of targeted kinases. The most important tyrosine kinase families in the development of tyrosine kinase inhibitors are the ABL, SRC, platelet-derived growth factor, vascular endothelial growth factor receptor and epidermal growth factor receptor families. Both multi-kinase inhibitors and single-kinase inhibitors have advantages and disadvantages, which are related to potential resistance mechanisms, pharmacokinetics, selectivity and tumor environment. In different malignancies various tyrosine kinases are mutated or overexpressed and several resistance mechanisms exist. Pharmacokinetics is influenced by interindividual differences and differs between two single-targeted inhibitors or between patients treated with the same tyrosine kinase inhibitor. Different tyrosine kinase inhibitors have various mechanisms to achieve selectivity, while differences in gene expression exist between tumor and stromal cells. Considering these aspects, one type of inhibitor can generally not be preferred above the other, but will depend on the specific genetic constitution of the patient and the tumor, allowing personalized therapy. The most effective way of cancer treatment by using tyrosine kinase inhibitors is to consider each patient/tumor individually and to determine the strategy that specifically targets the consequences of altered (epi)genetics of the tumor. This strategy might result in treatment by a single multi-kinase inhibitor for one patient, but in treatment by a couple of single-kinase inhibitors for other patients.

  8. Distributions of hit-numbers in single targets

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, J F [Postgraduate Medical School, Hammersmith Hospital, London (United Kingdom)]

    1966-07-01

    Very general models can be proposed for relating the surviving proportion of an irradiated population of cells or bacteria to the absorbed dose, but if the number of free parameters is large, the model can never be tested experimentally (Zimmer; Zirkle; Tobias). A relatively simple model is therefore proposed here, based on the physical facts of energy deposition in small volumes which are currently under active investigation (Rossi), and on cell-survival experiments over a wide range of LET (e.g. Barendsen et al.; Barendsen). It is not suggested that the model is correct or final, but only that its shortcomings should be demonstrated by comparison with experimental results before more complicated models are worth pursuing. It is basically a multihit model applied first to a single target volume, but also applicable to the situation where only one out of many potential target volumes has to be inactivated to kill the organism. It can be extended to two or more target volumes if necessary. Emphasis is placed upon the amount of energy locally deposited in certain sensitive volumes called 'target volumes'.
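
    For reference, survival under a multihit model of this kind is usually written in terms of Poisson hit statistics: at dose D the target volume receives on average D/D0 hits, and the organism survives if it accumulates fewer than n of them. The expression below is the standard textbook form of that model, stated here as background rather than as a formula quoted from Fowler's abstract.

```latex
% n-hit, single-target survival with mean hit number D/D_0 at dose D:
S(D) = e^{-D/D_0} \sum_{k=0}^{n-1} \frac{(D/D_0)^{k}}{k!}
% n   = number of hits required to inactivate the target volume
% D_0 = dose delivering, on average, one hit per target volume
```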

  9. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    Science.gov (United States)

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.

  10. Encouraging top-down attention in visual search: A developmental perspective.

    Science.gov (United States)

    Lookadoo, Regan; Yang, Yingying; Merrill, Edward C

    2017-10-01

    Four experiments are reported in which 60 younger children (7-8 years old), 60 older children (10-11 years old), and 60 young adults (18-25 years old) performed a conjunctive visual search task (15 per group in each experiment). The number of distractors of each feature type was unbalanced across displays to evaluate participants' ability to restrict search to the smaller subset of features. The use of top-down attention processes to restrict search was encouraged by providing external aids for identifying and maintaining attention on the smaller set. In Experiment 1, no external assistance was provided. In Experiment 2, precues and instructions were provided to focus attention on that subset. In Experiment 3, trials in which the smaller subset was represented by the same feature were presented in alternating blocks to eliminate the need to switch attention between features from trial to trial. In Experiment 4, consecutive blocks of the same subset features were presented in the first or second half of the experiment, providing additional consistency. All groups benefited from external support of top-down attention, although the pattern of improvement varied across experiments. The younger children benefited most from precues and instruction, using the subset search strategy when instructed. Furthermore, younger children benefited from blocking trials only when blocks of the same features did not alternate. Older participants benefited from the blocking of trials in both Experiments 3 and 4, but not from precues and instructions. Hence, our results revealed both malleability and limits of children's top-down control of attention.

  11. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2018-04-12

    Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

  12. Monitoring Processes in Visual Search Enhanced by Professional Experience: The Case of Orange Quality-Control Workers.

    Science.gov (United States)

    Visalli, Antonino; Vallesi, Antonino

    2018-01-01

    Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which appeared to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages.

  13. Monitoring Processes in Visual Search Enhanced by Professional Experience: The Case of Orange Quality-Control Workers

    Directory of Open Access Journals (Sweden)

    Antonino Visalli

    2018-02-01

    Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which appeared to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages.

  14. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    International Nuclear Information System (INIS)

    Helbren, Emma; Taylor, Stuart A.; Fanshawe, Thomas R.; Mallett, Susan; Phillips, Peter; Boone, Darren; Gale, Alastair; Altman, Douglas G.; Manning, David; Halligan, Steve

    2015-01-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when 'with' and 'without' CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)

  15. The effect of computer-aided detection markers on visual search and reader performance during concurrent reading of CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Helbren, Emma; Taylor, Stuart A. [University College London, Centre for Medical Imaging, London (United Kingdom); Fanshawe, Thomas R.; Mallett, Susan [University of Oxford, Nuffield Department of Primary Care Health Sciences, Oxford (United Kingdom); Phillips, Peter [University of Cumbria, Health and Medical Sciences Group, Lancaster (United Kingdom); Boone, Darren [Colchester Hospital University NHS Foundation Trust and Anglia University, Colchester (United Kingdom); Gale, Alastair [Loughborough University, Applied Vision Research Centre, Loughborough (United Kingdom); Altman, Douglas G. [University of Oxford, Centre for Statistics in Medicine, Oxford (United Kingdom); Manning, David [Lancaster University, Lancaster Medical School, Faculty of Health and Medicine, Lancaster (United Kingdom); Halligan, Steve [University College London, Centre for Medical Imaging, London (United Kingdom); University College Hospital, Gastrointestinal Radiology, University College London, Centre for Medical Imaging, Podium Level 2, London, NW1 2BU (United Kingdom)

    2015-06-01

    We aimed to identify the effect of computer-aided detection (CAD) on visual search and performance in CT Colonography (CTC) of inexperienced and experienced readers. Fifteen endoluminal CTC examinations were recorded, each with one polyp, and two videos were generated, one with and one without a CAD mark. Forty-two readers (17 experienced, 25 inexperienced) interpreted the videos during infrared visual search recording. CAD markers and polyps were treated as regions of interest in data processing. This multi-reader, multi-case study was analysed using multilevel modelling. CAD drew readers' attention to polyps faster, accelerating identification times: median 'time to first pursuit' was 0.48 s (IQR 0.27 to 0.87 s) with CAD, versus 0.58 s (IQR 0.35 to 1.06 s) without. For inexperienced readers, CAD also held visual attention for longer. All visual search metrics used to assess visual gaze behaviour demonstrated statistically significant differences when 'with' and 'without' CAD were compared. A significant increase in the number of correct polyp identifications across all readers was seen with CAD (74 % without CAD, 87 % with CAD; p < 0.001). CAD significantly alters visual search and polyp identification in readers viewing three-dimensional endoluminal CTC. For polyp and CAD marker pursuit times, CAD generally exerted a larger effect on inexperienced readers. (orig.)

  16. Estimation of mental effort in learning visual search by measuring pupil response.

    Directory of Open Access Journals (Sweden)

    Tatsuto Takeuchi

    Perceptual learning refers to the improvement of perceptual sensitivity and performance with training. In this study, we examined whether learning is accompanied by a release from mental effort on the task, leading to automatization of the learned task. For this purpose, we had subjects conduct a visual search for a target, defined by a combination of orientation and spatial frequency, while we monitored their pupil size. It is well known that pupil size reflects the strength of mental effort invested in a task. We found that pupil size increased rapidly as learning proceeded in the early phase of training and decreased at the later phase to a level half of its maximum value. This result does not support the simple automatization hypothesis. Instead, it suggests that mental effort and behavioral performance reflect different aspects of perceptual learning. Further, mental effort continues to be invested to maintain good performance at later stages of training.

  17. Contextual remapping in visual search after predictable target-location changes.

    Science.gov (United States)

    Conci, Markus; Sun, Luning; Müller, Hermann J

    2011-07-01

    Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.

  18. Does apparent size capture attention in visual search? Evidence from the Muller-Lyer illusion.

    Science.gov (United States)

    Proulx, Michael J; Green, Monique

    2011-11-23

    Is perceived size a crucial factor for the bottom-up guidance of attention? Here, a visual search experiment was used to examine whether an irrelevantly longer object can capture attention when participants were to detect a vertical target item. The longer object was created by an apparent size manipulation, the Müller-Lyer illusion; however, all objects contained the same number of pixels. The vertical target was detected more efficiently when it was also perceived as the longer item that was defined by apparent size. Further analysis revealed that the longer Müller-Lyer object received a greater degree of attentional priority than published results for other features such as retinal size, luminance contrast, and the abrupt onset of a new object. The present experiment has demonstrated for the first time that apparent size can capture attention and, thus, provide bottom-up guidance on the basis of perceived salience.

  19. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    Science.gov (United States)

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  20. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face

    Directory of Open Access Journals (Sweden)

    Atsuko eSaito

    2014-07-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  1. Individual Differences in Search and Monitoring for Color Targets in Dynamic Visual Displays.

    Science.gov (United States)

    Muhl-Richardson, Alex; Godwin, Hayward J; Garner, Matthew; Hadwin, Julie A; Liversedge, Simon P; Donnelly, Nick

    2018-02-01

    Many real-world tasks now involve monitoring visual representations of data that change dynamically over time. Monitoring dynamically changing displays for the onset of targets can be done in two ways: detecting targets directly, post-onset, or predicting their onset from the prior state of distractors. In the present study, participants' eye movements were measured as they monitored arrays of 108 colored squares whose colors changed systematically over time. Across three experiments, the data show that participants detected the onset of targets both directly and predictively. Experiments 1 and 2 showed that predictive detection was only possible when supported by sequential color changes that followed a scale ordered in color space. Experiment 3 included measures of individual differences in working memory capacity (WMC) and anxious affect and a manipulation of target prevalence in the search task. It found that predictive monitoring for targets, and decisions about target onsets, were influenced by interactions between individual differences in verbal and spatial WMC and intolerance of uncertainty, a characteristic that reflects worry about uncertain future events. The results have implications for the selection of individuals tasked with monitoring dynamic visual displays for target onsets. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
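
    One way to picture the predictive detection reported in Experiments 1 and 2 is as a threshold on an ordered color scale: when the squares step through the scale in order, any square whose current color sits just below the target band signals an imminent target onset. The sketch below is only an illustration of that idea; the scale indices, step rule and look-ahead window are assumptions, not parameters from the study.

```python
def predictable_onsets(current_indices, target_index, lookahead=3):
    """Return display positions whose color index lies within `lookahead`
    ordered steps below the target color, i.e. candidates for predictive
    detection when color changes follow the ordered scale."""
    return [pos for pos, idx in enumerate(current_indices)
            if 0 < target_index - idx <= lookahead]

# Example: five of the 108 squares, with the target color defined as index 10.
print(predictable_onsets([4, 8, 9, 10, 2], target_index=10))  # -> [1, 2]
```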

  2. Investigation of attentional bias in obsessive compulsive disorder with and without depression in visual search.

    Directory of Open Access Journals (Sweden)

    Sharon Morein-Zamir

    Whether Obsessive Compulsive Disorder (OCD) is associated with an increased attentional bias to emotive stimuli remains controversial. Additionally, it is unclear whether comorbid depression modulates abnormal emotional processing in OCD. This study examined attentional bias to OC-relevant scenes using a visual search task. Controls, non-depressed and depressed OCD patients searched for their personally selected positive images amongst their negative distractors, and vice versa. Whilst the OCD groups were slower than healthy individuals in rating the images, there were no group differences in the magnitude of negative bias to concern-related scenes. A second experiment employing a common set of images replicated the results on an additional sample of OCD patients. Although there was a larger bias to negative OC-related images without pre-exposure overall, no group differences in attentional bias were observed. However, OCD patients subsequently rated the images more slowly and more negatively, again suggesting post-attentional processing abnormalities. The results argue against a robust attentional bias in OCD patients, regardless of their depression status, and speak to generalized difficulties disengaging from negative valence stimuli. Rather, post-attentional processing abnormalities may account for differences in emotional processing in OCD.

  3. Visual search for changes in scenes creates long-term, incidental memory traces.

    Science.gov (United States)

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  4. Visual Search for Wines with a Triangle on the Label in a Virtual Store

    Directory of Open Access Journals (Sweden)

    Hui Zhao

    2017-12-01

    Two experiments were conducted in a virtual reality (VR) environment in order to investigate participants’ in-store visual search for bottles of wines displaying a prominent triangular shape on their label. The experimental task involved virtually moving along a wine aisle in a virtual supermarket while searching for the wine bottle on the shelf that had a different triangle on its label from the other bottles. The results of Experiment 1 revealed that the participants identified the bottle with a downward-pointing triangle on its label more rapidly than when looking for an upward-pointing triangle on the label instead. This finding replicates the downward-pointing triangle superiority (DPTS) effect, though the magnitude of this effect was more pronounced in the first as compared to the second half of the experiment, suggesting a modulating role of practice. The results of Experiment 2 revealed that the DPTS effect was also modulated by the location of the target on the shelf. Interestingly, however, the results of a follow-up survey demonstrate that the orientation of the triangle did not influence the participants’ evaluation of the wine bottles. Taken together, these findings reveal how in-store the attention of consumers might be influenced by the design elements in product packaging. These results therefore suggest that shopping in a virtual supermarket might offer a practical means of assessing the shelf standout of product packaging, which has important implications for food marketing.

  5. Visual Search for Wines with a Triangle on the Label in a Virtual Store.

    Science.gov (United States)

    Zhao, Hui; Huang, Fuxing; Spence, Charles; Wan, Xiaoang

    2017-01-01

    Two experiments were conducted in a virtual reality (VR) environment in order to investigate participants' in-store visual search for bottles of wines displaying a prominent triangular shape on their label. The experimental task involved virtually moving along a wine aisle in a virtual supermarket while searching for the wine bottle on the shelf that had a different triangle on its label from the other bottles. The results of Experiment 1 revealed that the participants identified the bottle with a downward-pointing triangle on its label more rapidly than when looking for an upward-pointing triangle on the label instead. This finding replicates the downward-pointing triangle superiority (DPTS) effect, though the magnitude of this effect was more pronounced in the first as compared to the second half of the experiment, suggesting a modulating role of practice. The results of Experiment 2 revealed that the DPTS effect was also modulated by the location of the target on the shelf. Interestingly, however, the results of a follow-up survey demonstrate that the orientation of the triangle did not influence the participants' evaluation of the wine bottles. Taken together, these findings reveal how in-store the attention of consumers might be influenced by the design elements in product packaging. These results therefore suggest that shopping in a virtual supermarket might offer a practical means of assessing the shelf standout of product packaging, which has important implications for food marketing.

  6. Effects of display set size and its variability on the event-related potentials during a visual search task

    OpenAIRE

    Miyatani, Makoto; Sakata, Sumiko

    1999-01-01

    This study investigated the effects of display set size and its variability on the event-related potentials (ERPs) during a visual search task. In Experiment 1, subjects were required to respond if a visual display, which consisted of two, four, or six letters, contained one of the two members of the memory set. In Experiment 2, subjects detected the change of the shape of a fixation stimulus, which was surrounded by the same letters as in Experiment 1. In the search task (Experiment 1), the incr...

  7. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes

    Directory of Open Access Journals (Sweden)

    Lavallière Martin

    2012-03-01

    Background: Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate if simulator training sessions with video-based feedback can modify visual search behaviors of older drivers while changing lanes in urban driving. Methods: In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating in a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. Results: After attending the training program, the Control group showed no increase in the frequency of the visual inspection of three regions of interest (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedback increased the frequency of blind spot inspection by 100% (from 32.3% to 64.9% of verifications before changing lanes). Conclusions: These results suggest that simulator training combined with driving-specific feedback helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedback. Simulators offer a unique environment for developing such programs adapted to older drivers' needs.

  8. Active training and driving-specific feedback improve older drivers' visual search prior to lane changes.

    Science.gov (United States)

    Lavallière, Martin; Simoneau, Martin; Tremblay, Mathieu; Laurendeau, Denis; Teasdale, Normand

    2012-03-02

    Driving retraining classes may offer an opportunity to attenuate some effects of aging that may alter driving skills. Unfortunately, there is evidence that classroom programs (driving refresher courses) do not improve the driving performance of older drivers. The aim of the current study was to evaluate if simulator training sessions with video-based feedback can modify visual search behaviors of older drivers while changing lanes in urban driving. In order to evaluate the effectiveness of the video-based feedback training, 10 older drivers who received a driving refresher course and feedback about their driving performance were tested with an on-road standardized evaluation before and after participating in a simulator training program (Feedback group). Their results were compared to a Control group (12 older drivers) who received the same refresher course and in-simulator active practice as the Feedback group without receiving driving-specific feedback. After attending the training program, the Control group showed no increase in the frequency of the visual inspection of three regions of interest (rear view and left side mirrors, and blind spot). In contrast, for the Feedback group, combining active training and driving-specific feedback increased the frequency of blind spot inspection by 100% (from 32.3% to 64.9% of verifications before changing lanes). These results suggest that simulator training combined with driving-specific feedback helped older drivers to improve their visual inspection strategies, and that in-simulator training transferred positively to on-road driving. In order to be effective, it is claimed that driving programs should include active practice sessions with driving-specific feedback. Simulators offer a unique environment for developing such programs adapted to older drivers' needs.

  9. The effect of stimulus duration and motor response in hemispatial neglect during a visual search task.

    Directory of Open Access Journals (Sweden)

    Laura M Jelsone-Swain

    Patients with hemispatial neglect exhibit a myriad of profound deficits. A hallmark of this syndrome is the patients' absence of awareness of items located in their contralesional space. Many studies, however, have demonstrated that neglect patients exhibit some level of processing of these neglected items. It has been suggested that unconscious processing of neglected information may manifest as a fast denial. This theory of fast denial proposes that neglected stimuli are detected in the same way as non-neglected stimuli, but without overt awareness. We evaluated the fast denial theory by conducting two separate visual search task experiments, each differing by the duration of stimulus presentation. Specifically, in Experiment 1 each stimulus remained in the participants' visual field until a response was made. In Experiment 2 each stimulus was presented for only a brief duration. We further evaluated the fast denial theory by comparing verbal to motor task responses in each experiment. Overall, our results from both experiments and tasks showed no evidence for the presence of implicit knowledge of neglected stimuli. Instead, patients with neglect responded the same when they neglected stimuli as when they correctly reported stimulus absence. These findings thus cast doubt on the concept of the fast denial theory and its consequent implications for non-conscious processing. Importantly, our study demonstrated that the only behavior affected was during conscious detection of ipsilesional stimuli. Specifically, patients were slower to detect stimuli in Experiment 1 compared to Experiment 2, suggesting a duration effect occurred during conscious processing of information. Additionally, reaction time and accuracy were similar when reporting verbally versus motorically. These results provide new insights into the perceptual deficits associated with neglect and further support other work that falsifies the fast denial account of non

  10. Spatial attention can bias search in visual short-term memory

    Directory of Open Access Journals (Sweden)

    Anna C Nobre

    2008-03-01

    Whereas top-down attentional control is known to bias perceptual functions at many levels of stimulus analysis, its possible influence over memory-related functions remains uncharted. Our experiment combined behavioral measures and event-related potentials (ERPs) to test the ability of spatial orienting to bias functions associated with visual short-term memory (VSTM), and to shed light on the neural mechanisms involved. In particular, we investigated whether orienting attention to a spatial location within an array maintained in VSTM could facilitate the search for a specific remembered item. Participants viewed arrays of one, two or four differently colored items, followed by an informative spatial (100% valid) or uninformative neutral retro-cue (1500–2500 ms after the array), and later by a probe stimulus (500–1000 ms after the retro-cue). The task was to decide whether the probe stimulus had been present in the array. Behavioral results showed that spatial retro-cues improved both accuracy and response times for making decisions about the presence of the probe item in VSTM, and significantly attenuated performance decrements caused by increasing VSTM load. We also identified a novel ERP component (N3RS) specifically associated with searching for an item within VSTM. Paralleling the behavioral results, the amplitude and duration of the N3RS systematically increased with VSTM load in neutral retro-cue trials. When spatial retro-cues were provided, this “retro-search” component was absent. Our findings clearly show that the influence of top-down attentional biases extends to mnemonic functions, and, specifically, that searching for items within VSTM can be under flexible voluntary control.

  11. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet

    OpenAIRE

    Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru

    2017-01-01

    Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat cont...

  12. The problem of latent attentional capture: Easy visual search conceals capture by task-irrelevant abrupt onsets.

    Science.gov (United States)

    Gaspelin, Nicholas; Ruthruff, Eric; Lien, Mei-Ching

    2016-08-01

    Researchers are sharply divided regarding whether irrelevant abrupt onsets capture spatial attention. Numerous studies report that they do and a roughly equal number report that they do not. This puzzle has inspired numerous attempts at reconciliation, none gaining general acceptance. The authors propose that abrupt onsets routinely capture attention, but the size of observed capture effects depends critically on how long attention dwells on distractor items which, in turn, depends critically on search difficulty. In a series of spatial cuing experiments, the authors show that irrelevant abrupt onsets produce robust capture effects when visual search is difficult, but not when search is easy. Critically, this effect occurs even when search difficulty varies randomly across trials, preventing any strategic adjustments of the attentional set that could modulate probability of capture by the onset cue. The authors argue that easy visual search provides an insensitive test for stimulus-driven capture by abrupt onsets: even though onsets truly capture attention, the effects of capture can be latent. This observation helps to explain previous failures to find capture by onsets, nearly all of which used an easy visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. "Multisensory brand search: How the meaning of sounds guides consumers' visual attention": Correction to Knoeferle et al. (2016).

    Science.gov (United States)

    2017-03-01

    Reports an error in "Multisensory brand search: How the meaning of sounds guides consumers' visual attention" by Klemens M. Knoeferle, Pia Knoeferle, Carlos Velasco and Charles Spence ( Journal of Experimental Psychology: Applied , 2016[Jun], Vol 22[2], 196-210). In the article, under Experiment 2, Design and Stimuli, the set number of target products and visual distractors reported in the second paragraph should be 20 and 13, respectively: "On each trial, the 16 products shown in the display were randomly selected from a set of 20 products belonging to different categories. Out of the set of 20 products, seven were potential targets, whereas the other 13 were used as visual distractors only throughout the experiment (since they were not linked to specific usage or consumption sounds)." Consequently, Appendix A in the supplemental materials has been updated. (The following abstract of the original article appeared in record 2016-28876-002.) Building on models of crossmodal attention, the present research proposes that brand search is inherently multisensory, in that the consumers' visual search for a specific brand can be facilitated by semantically related stimuli that are presented in another sensory modality. A series of 5 experiments demonstrates that the presentation of spatially nonpredictive auditory stimuli associated with products (e.g., usage sounds or product-related jingles) can crossmodally facilitate consumers' visual search for, and selection of, products. Eye-tracking data (Experiment 2) revealed that the crossmodal effect of auditory cues on visual search manifested itself not only in RTs, but also in the earliest stages of visual attentional processing, thus suggesting that the semantic information embedded within sounds can modulate the perceptual saliency of the target products' visual representations. Crossmodal facilitation was even observed for newly learnt associations between unfamiliar brands and sonic logos, implicating multisensory short

  14. How Prior Knowledge and Colour Contrast Interfere Visual Search Processes in Novice Learners: An Eye Tracking Study

    Science.gov (United States)

    Sonmez, Duygu; Altun, Arif; Mazman, Sacide Guzin

    2012-01-01

    This study investigates how prior content knowledge and prior exposure to microscope slides on the phases of mitosis affect students' visual search strategies and their ability to differentiate cells that are going through any phases of mitosis. Two different sets of microscope slide views were used for this purpose; with high and low colour…

  15. Textual and Visual Information in eWOM: A Gap Between Preferences in Information Search and Diffusion

    DEFF Research Database (Denmark)

    Lee, Geunhee; Tussyadiah, Iis

    2010-01-01

    This article examines the gap between travel-related information search and diffusion by online users in order to better understand the important role of visual information in electronic word of mouth (eWOM). Several analyses were conducted to investigate differences in travelers' preferences...

  16. Immaturity of the Oculomotor Saccade and Vergence Interaction in Dyslexic Children: Evidence from a Reading and Visual Search Study

    Science.gov (United States)

    Bucci, Maria Pia; Nassibi, Naziha; Gerard, Christophe-Loic; Bui-Quoc, Emmanuel; Seassau, Magali

    2012-01-01

    Studies comparing binocular eye movements during reading and visual search in dyslexic children are, to our knowledge, nonexistent. In the present study we examined ocular motor characteristics in dyslexic children versus two groups of non-dyslexic children matched for chronological or reading age. Binocular eye movements were recorded by an infrared system (mobileEBT®, e(ye)BRAIN) in twelve dyslexic children (mean age 11 years old) and a group of chronological age-matched (N = 9) and reading age-matched (N = 10) non-dyslexic children. Two visual tasks were used: text reading and visual search. Independently of the task, the ocular motor behavior in dyslexic children was similar to that reported in reading age-matched non-dyslexic children: more and longer fixations as well as poor quality of binocular coordination during and after the saccades. In contrast, chronological age-matched non-dyslexic children showed a smaller number of fixations and shorter fixation durations in the reading task than in the visual search task; furthermore, their saccades were well yoked in both tasks. The atypical eye movement patterns observed in dyslexic children suggest a deficiency in visual attentional processing as well as an immaturity of the interaction between the ocular motor saccade and vergence systems. PMID:22438934

  17. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search.

    Science.gov (United States)

    Castel, Alan D; Pratt, Jay; Drummond, Emily

    2005-06-01

    The ability to efficiently search the visual environment is a critical function of the visual system, and recent research has shown that experience playing action video games can influence visual selective attention. The present research examined the similarities and differences between video game players (VGPs) and non-video game players (NVGPs) in terms of the ability to inhibit attention from returning to previously attended locations, and the efficiency of visual search in easy and more demanding search environments. Both groups were equally good at inhibiting the return of attention to previously cued locations, although VGPs displayed overall faster reaction times to detect targets. VGPs also showed overall faster response time for easy and difficult visual search tasks compared to NVGPs, largely attributed to faster stimulus-response mapping. The findings suggest that relative to NVGPs, VGPs rely on similar types of visual processing strategies but possess faster stimulus-response mappings in visual attention tasks.

  18. Reading wiring diagrams made easier for maintenance operators: contribution from research in visual attention and visual search

    International Nuclear Information System (INIS)

    Ponthieu, L.; Wolfe, J.M.

    1994-07-01

    This work was carried out while the author was visiting the Visual Psychophysics lab at the Center for Ophthalmic Research, Harvard Medical School. The general framework is the design of a wiring-diagram visualization system for maintenance operators in electric plants. This study concentrates on how knowledge and experimental techniques from visual attention research can serve this goal. From this standpoint, the visualization system must make the best use of the abilities of the human visual system. As electronic databases containing all the diagrams will soon be available, it is important to consider display techniques in advance. Presently, maintenance operators favor working with paper printouts even where such databases are already available. The study shows why such an approach is valuable for the design of a display that fits the operator's tasks. Beyond that, this work has been a means of learning the experimental techniques of the cognitive sciences in an applied setting. (authors). 9 figs., 5 annexes

  19. EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search

    Directory of Open Access Journals (Sweden)

    Anne-Marie Brouwer

    2017-05-01

    EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event-related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ depending on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later.
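
    The single-trial classification mentioned above can be pictured with a very small sketch: fit a standard classifier on per-fixation eye features (fixation duration, pupil size) labelled as hits or misses, then score unseen trials. The data, feature choices and classifier below are invented for illustration and are not the analysis pipeline used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy per-target-fixation features: [fixation duration (ms), pupil size (a.u.)].
X = np.array([[220.0, 3.1], [180.0, 3.6], [260.0, 2.9], [150.0, 3.8],
              [240.0, 3.0], [170.0, 3.7], [250.0, 2.8], [160.0, 3.9]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = target later reported (hit), 0 = missed

clf = LogisticRegression().fit(X, y)
print(clf.predict(np.array([[200.0, 3.3]])))  # classify one unseen fixation
```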

  20. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates, mental representations of the objects they are attempting to locate, to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: (1) by contaminating searchers' templates with inaccurate features, and (2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  1. More target features in visual working memory leads to poorer search guidance: evidence from contralateral delay activity.

    Science.gov (United States)

    Schmidt, Joseph; MacNamara, Annmarie; Proudfit, Greg Hajcak; Zelinsky, Gregory J

    2014-03-05

    The visual-search literature has assumed that the top-down target representation used to guide search resides in visual working memory (VWM). We directly tested this assumption using contralateral delay activity (CDA) to estimate the VWM load imposed by the target representation. In Experiment 1, observers previewed four photorealistic objects and were cued to remember the two objects appearing to the left or right of central fixation; Experiment 2 was identical except that observers previewed two photorealistic objects and were cued to remember one. CDA was measured during a delay following preview offset but before onset of a four-object search array. One of the targets was always present, and observers were asked to make an eye movement to it and press a button. We found lower magnitude CDA on trials when the initial search saccade was directed to the target (strong guidance) compared to when it was not (weak guidance). This difference also tended to be larger shortly before search-display onset and was largely unaffected by VWM item-capacity limits or number of previews. Moreover, the difference between mean strong- and weak-guidance CDA was proportional to the increase in search time between mean strong- and weak-guidance trials (as measured by time-to-target and reaction-time difference scores). Contrary to most search models, our data suggest that maintaining more target features results in poorer search guidance to the target. We interpret these counterintuitive findings as evidence for strong search guidance using a small set of highly discriminative target features that remain after pruning from a larger set of features, with the load imposed on VWM varying with this feature-consolidation process.

  2. The hard-won benefits of familiarity in visual search: naturally familiar brand logos are found faster.

    Science.gov (United States)

    Qin, Xiaoyan Angela; Koutstaal, Wilma; Engel, Stephen A

    2014-05-01

    Familiar items are found faster than unfamiliar ones in visual search tasks. This effect has important implications for cognitive theory, because it may reveal how mental representations of commonly encountered items are changed by experience to optimize performance. It remains unknown, however, whether everyday items with moderate levels of exposure would show benefits in visual search, and if so, what kind of experience would be required to produce them. Here, we tested whether familiar product logos were searched for faster than unfamiliar ones, and also familiarized subjects with previously unfamiliar logos. Subjects searched for preexperimentally familiar and unfamiliar logos, half of which were familiarized in the laboratory, amongst other, unfamiliar distractor logos. In three experiments, we used an N-back-like familiarization task, and in four others we used a task that asked detailed questions about the perceptual aspects of the logos. The number of familiarization exposures ranged from 30 to 84 per logo across experiments, with two experiments involving across-day familiarization. Preexperimentally familiar target logos were searched for faster than were unfamiliar, nonfamiliarized logos, by 8 % on average. This difference was reliable in all seven experiments. However, familiarization had little or no effect on search speeds; its average effect was to improve search times by 0.7 %, and its effect was significant in only one of the seven experiments. If priming, mere exposure, episodic memory, or relatively modest familiarity were responsible for familiarity's effects on search, then performance should have improved following familiarization. Our results suggest that the search-related advantage of familiar logos does not develop easily or rapidly.

  3. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    Science.gov (United States)

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  5. Contrasting vertical and horizontal representations of affect in emotional visual search.

    Science.gov (United States)

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.

  6. Preserved suppression of salient irrelevant stimuli during visual search in Age-Associated Memory Impairment

    Directory of Open Access Journals (Sweden)

    Laura eLorenzo-López

    2016-01-01

    Full Text Available Previous studies have suggested that older adults with age-associated memory impairment (AAMI) may show a significant decline in attentional resource capacity and inhibitory processes in addition to memory impairment. In the present paper, the potential attentional capture by task-irrelevant stimuli was examined in older adults with AAMI compared to healthy older adults using scalp-recorded event-related brain potentials (ERPs). ERPs were recorded during the execution of a visual search task, in which the participants had to detect the presence of a target stimulus that differed from distractors by orientation. To explore the automatic attentional capture phenomenon, an irrelevant distractor stimulus defined by a different feature (color) was also presented without previous knowledge of the participants. A consistent N2pc, an electrophysiological indicator of attentional deployment, was present for target stimuli but not for task-irrelevant color stimuli, suggesting that these irrelevant distractors did not attract attention in AAMI older adults. Furthermore, the N2pc for targets was significantly delayed in AAMI patients compared to healthy older controls. Together, these findings suggest a specific impairment of the attentional selection process of relevant target stimuli in these individuals and indicate that the mechanism of top-down suppression of entirely task-irrelevant stimuli is preserved, at least when the target and the irrelevant stimuli are perceptually very different.

  7. Exploratory search in an audio-visual archive: evaluating a professional search tool for non-professional users

    NARCIS (Netherlands)

    Bron, M.; van Gorp, J.; Nack, F.; de Rijke, M.

    2011-01-01

    As archives are opening up and publishing their content online, the general public can now directly access archive collections. To support access, archives typically provide the public with their internal search tools that were originally intended for professional archivists. We conduct a

  8. Intrinsic motivation and attentional capture from gamelike features in a visual search task.

    Science.gov (United States)

    Miranda, Andrew T; Palmer, Evan M

    2014-03-01

    In psychology research studies, the goals of the experimenter and the goals of the participants often do not align. Researchers are interested in having participants who take the experimental task seriously, whereas participants are interested in earning their incentive (e.g., money or course credit) as quickly as possible. Creating experimental methods that are pleasant for participants and that reward them for effortful and accurate data generation, while not compromising the scientific integrity of the experiment, would benefit both experimenters and participants alike. Here, we explored a gamelike system of points and sound effects that rewarded participants for fast and accurate responses. We measured participant engagement at both cognitive and perceptual levels and found that the point system (which invoked subtle, anonymous social competition between participants) led to positive intrinsic motivation, while the sound effects (which were pleasant and arousing) led to attentional capture for rewarded colors. In a visual search task, points were awarded after each trial for fast and accurate responses, accompanied by short, pleasant sound effects. We adapted a paradigm from Anderson, Laurent, and Yantis (Proceedings of the National Academy of Sciences 108(25):10367-10371, 2011b), in which participants completed a training phase during which red and green targets were probabilistically associated with reward (a point bonus multiplier). During a test phase, no points or sounds were delivered, color was irrelevant to the task, and previously rewarded targets were sometimes presented as distractors. Significantly longer response times on trials in which previously rewarded colors were present demonstrated attentional capture, and positive responses to a five-question intrinsic-motivation scale demonstrated participant engagement.

  9. Do synesthetes have a general advantage in visual search and episodic memory? A case for group studies.

    Directory of Open Access Journals (Sweden)

    Nicolas Rothen

    Full Text Available BACKGROUND: Some studies, most of them case-reports, suggest that synesthetes have an advantage in visual search and episodic memory tasks. The goal of this study was to examine this hypothesis in a group study. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we tested thirteen grapheme-color synesthetes and we compared their performance on a visual search task and a memory test to an age-, handedness-, education-, and gender-matched control group. The results showed no significant group differences (all relevant ps > .50). For the visual search task, effect sizes indicated a small advantage for synesthetes (Cohen's d between .19 and .32). No such advantage was found for episodic memory (Cohen's d < .05). CONCLUSIONS/SIGNIFICANCE: The results indicate that synesthesia per se does not seem to lead to a strong performance advantage. Rather, the superior performance of synesthetes observed in some case-report studies may be due to individual differences, to a selection bias, or to a strategic use of synesthesia as a mnemonic. In order to establish universal effects of synesthesia on cognition, single-case studies must be complemented by group studies.
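    The group comparison above is reported in terms of Cohen's d. As a point of reference, the following is a minimal illustrative sketch (not the authors' analysis code; the group values are hypothetical) of computing Cohen's d for two independent groups using a pooled standard deviation.

    ```python
    import numpy as np

    def cohens_d(group_a, group_b):
        """Cohen's d for two independent samples, using the pooled SD."""
        a, b = np.asarray(group_a, float), np.asarray(group_b, float)
        na, nb = len(a), len(b)
        # Pooled standard deviation from the Bessel-corrected group variances
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # Hypothetical mean search times (ms) for synesthetes vs. matched controls
    synesthetes = [612, 580, 655, 590, 630, 601, 575, 640, 598, 620, 611, 587, 633]
    controls    = [628, 605, 660, 612, 645, 618, 595, 652, 610, 637, 622, 600, 648]
    print(f"Cohen's d = {cohens_d(synesthetes, controls):.2f}")
    ```

    A negative d here would indicate faster (smaller) search times for the first group; the small effect sizes reported above correspond to group differences well under one pooled standard deviation.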

  10. Working memory capacity and the top-down control of visual search: Exploring the boundaries of "executive attention".

    Science.gov (United States)

    Kane, Michael J; Poole, Bradley J; Tuholski, Stephen W; Engle, Randall W

    2006-07-01

    The executive attention theory of working memory capacity (WMC) proposes that measures of WMC broadly predict higher order cognitive abilities because they tap important and general attention capabilities (R. W. Engle & M. J. Kane, 2004). Previous research demonstrated WMC-related differences in attention tasks that required restraint of habitual responses or constraint of conscious focus. To further specify the executive attention construct, the present experiments sought boundary conditions of the WMC-attention relation. Three experiments correlated individual differences in WMC, as measured by complex span tasks, and executive control of visual search. In feature-absence search, conjunction search, and spatial configuration search, WMC was unrelated to search slopes, although they were large and reliably measured. Even in a search task designed to require the volitional movement of attention (J. M. Wolfe, G. A. Alvarez, & T. S. Horowitz, 2000), WMC was irrelevant to performance. Thus, WMC is not associated with all demanding or controlled attention processes, which poses problems for some general theories of WMC. Copyright 2006 APA, all rights reserved.
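    The search slopes referred to above are the standard visual-search measure: the change in reaction time per additional display item. As a point of reference, the following is a minimal illustrative sketch (with made-up numbers, not data from this study) of how such a slope is typically estimated by regressing RT on set size.

    ```python
    import numpy as np

    # Hypothetical mean correct RTs (ms) at each display set size
    set_sizes = np.array([4, 8, 12, 16])
    mean_rts  = np.array([620, 710, 805, 890])

    # Least-squares fit: RT = slope * set_size + intercept
    slope, intercept = np.polyfit(set_sizes, mean_rts, deg=1)
    print(f"search slope = {slope:.1f} ms/item, intercept = {intercept:.0f} ms")
    ```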

  11. Human dorsolateral prefrontal cortex is involved in visual search for conjunctions but not features: a theta TMS study.

    Science.gov (United States)

    Kalla, Roger; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2009-10-01

    Functional neuroimaging studies have shown that the detection of a target defined by more than one feature (for example, a conjunction of colour and orientation) amongst distractors is associated with the activation of a network of brain areas. Dorsolateral prefrontal cortex (DLPFC), along with areas such as the frontal eye fields (FEF) and posterior parietal cortex (PPC), is a component of this network. While transcranial magnetic stimulation (TMS) had shown that both FEF and PPC are necessary for, and not just correlated with, successful conjunction search, this is not the case for DLPFC. To test the hypothesis that this area is also necessary for efficient conjunction search, TMS was applied over DLPFC and the effects on conjunction and feature (in this case colour) search performance compared with those when TMS was delivered over area MT/V5 and a vertex control stimulation condition. DLPFC TMS impaired performance on the conjunction search task but was without effect on feature search, similar to findings when TMS is delivered over PPC or FEF. Vertex TMS had no effects whereas MT/V5 TMS significantly improved performance with a time course that may indicate that this was due to modulation of V4 activity. These findings illustrate that, like FEF and PPC, DLPFC is necessary for fully effective conjunction visual search performance.

  12. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    Science.gov (United States)

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect-namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is combined of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  13. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    Science.gov (United States)

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.

  14. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    Directory of Open Access Journals (Sweden)

    Li Zhaoping

    Full Text Available From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.

  15. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    Science.gov (United States)

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.
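    The "race" prediction referred to above can be made concrete: under an independent race, the RT to a redundant-feature target on each trial is the minimum of the RTs of the two single-feature processes, so the predicted redundant-target distribution is the distribution of that minimum. The sketch below is only illustrative; the RT distributions and numbers are assumptions, not values from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical single-feature RT distributions (ms) for a colour (C) and an orientation (O) target
    rt_c = rng.normal(520, 60, n)
    rt_o = rng.normal(540, 70, n)

    # Independent-race prediction for the redundant CO target: each trial is won by the faster process
    rt_race = np.minimum(rt_c, rt_o)

    # Hypothetical observed CO RTs that are even faster than the race prediction
    rt_co_observed = rng.normal(470, 55, n)

    print(f"mean RT  C: {rt_c.mean():.0f} ms   O: {rt_o.mean():.0f} ms")
    print(f"race-model prediction for CO: {rt_race.mean():.0f} ms")
    print(f"observed CO (hypothetical):   {rt_co_observed.mean():.0f} ms")
    print(f"redundancy gain beyond the race: {rt_race.mean() - rt_co_observed.mean():.0f} ms")
    ```

    Any observed redundant-target RT shortening beyond the race prediction is the kind of gain that the conjunctively tuned V1 cells are invoked to explain.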

  16. Presentation of laboratory test results in patient portals: influence of interface design on risk interpretation and visual search behaviour.

    Science.gov (United States)

    Fraccaro, Paolo; Vigo, Markel; Balatsoukas, Panagiotis; van der Veer, Sabine N; Hassan, Lamiece; Williams, Richard; Wood, Grahame; Sinha, Smeeta; Buchan, Iain; Peek, Niels

    2018-02-12

    Patient portals are considered valuable instruments for self-management of long term conditions; however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients' abilities to correctly interpret laboratory test results presented through patient portals. We also assessed, by applying eye-tracking methods, the relationship between risk interpretation and visual search behaviour. We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low, medium, and high risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy of their risk interpretation. They could choose between: 1) Calling their doctor immediately (high interpreted risk); 2) Trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) Waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed accuracy of patients' risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Misinterpretation of risk was common, with 65% of participants underestimating the need for action across all presentations at least once. Participants found it particularly difficult to interpret medium risk clinical scenarios. Participants who consistently understood when action was needed showed a higher visual search efficiency, suggesting a better strategy to cope with information overload that helped them to focus on the laboratory tests most relevant to their condition. This study confirms patients' difficulties in interpreting laboratory test results, with many patients underestimating the need for action, even when abnormal values were highlighted or grouped together. Our findings raise patient safety

  17. Site-dependent effects of tDCS uncover dissociations in the communication network underlying the processing of visual search.

    Science.gov (United States)

    Ball, Keira; Lane, Alison R; Smith, Daniel T; Ellison, Amanda

    2013-11-01

    The right posterior parietal cortex (rPPC) and the right frontal eye field (rFEF) form part of a network of brain areas involved in orienting spatial attention. Previous studies using transcranial magnetic stimulation (TMS) have demonstrated that both areas are critically involved in the processing of conjunction visual search tasks, since stimulation of these sites disrupts performance. This study investigated the effects of long term neuronal modulation to rPPC and rFEF using transcranial direct current stimulation (tDCS) with the aim of uncovering sharing of these resources in the processing of conjunction visual search tasks. Participants completed four blocks of conjunction search trials over the course of 45 min. Following the first block they received 15 min of either cathodal or anodal stimulation to rPPC or rFEF, or sham stimulation. A significant interaction between block and stimulation condition was found, indicating that tDCS caused different effects according to the site (rPPC or rFEF) and type of stimulation (cathodal, anodal, or sham). Practice resulted in a significant reduction in reaction time across the four blocks in all conditions except when cathodal tDCS was applied to rPPC. The effects of cathodal tDCS over rPPC are subtler than those seen with TMS, and no effect of tDCS was evident at rFEF. This suggests that rFEF has a more transient role than rPPC in the processing of conjunction visual search and is robust to longer-term methods of neuro-disruption. Our results may be explained within the framework of functional connectivity between these, and other, areas. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. More than Just Finding Color: Strategy in Global Visual Search Is Shaped by Learned Target Probabilities

    Science.gov (United States)

    Williams, Carrick C.; Pollatsek, Alexander; Cave, Kyle R.; Stroud, Michael J.

    2009-01-01

    In 2 experiments, eye movements were examined during searches in which elements were grouped into four 9-item clusters. The target (a red or blue "T") was known in advance, and each cluster contained different numbers of target-color elements. Rather than color composition of a cluster invariantly guiding the order of search through…

  19. Comparing the effect of temporal delay on the availability of egocentric and allocentric information in visual search.

    Science.gov (United States)

    Ball, Keira; Birch, Yan; Lane, Alison; Ellison, Amanda; Schenk, Thomas

    2017-07-28

    Frames of reference play a central role in perceiving an object's location and reaching to pick that object up. It is thought that the ventral stream, believed to subserve vision for perception, utilises allocentric coding, while the dorsal stream, argued to be responsible for vision for action, primarily uses an egocentric reference frame. We have previously shown that egocentric representations can survive a delay; however, it is possible that in comparison to allocentric information, egocentric information decays more rapidly. Here we directly compare the effect of delay on the availability of egocentric and allocentric representations. We used spatial priming in visual search and repeated the location of the target relative to either a landmark in the search array (allocentric condition) or the observer's body (egocentric condition). Three inter-trial intervals created minimum delays between two consecutive trials of 2, 4, or 8 seconds. In both conditions, search times to primed locations were faster than search times to un-primed locations. In the egocentric condition, the effects were driven by a reduction in search times when egocentric information was repeated, an effect that was observed at all three delays. In the allocentric condition, while search times did not change when the allocentric information was repeated, search times to un-primed target locations became slower. We conclude that egocentric representations are not as transient as previously thought but instead this information is still available, and can influence behaviour, after lengthy periods of delay. We also discuss the possible origins of the differences between allocentric and egocentric priming effects. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet.

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru

    2017-01-01

    Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection.

  1. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet

    Directory of Open Access Journals (Sweden)

    Reiko Sawada

    2017-06-01

    Full Text Available Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection.

  2. Fat Content Modulates Rapid Detection of Food: A Visual Search Study Using Fast Food and Japanese Diet

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru

    2017-01-01

    Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection. PMID:28690568

  3. FMRI for Functional Localization and Task Difficulty Assessment During Visual Search for Military Vehicles

    National Research Council Canada - National Science Library

    Meitzler, Thomas; Bryk, Darryl; Sohn, Euijung; Hirsch, Joyce

    2005-01-01

    Past and current U.S. Army computational vision models designed to determine the difficulty of visual detection of camouflage for military vehicles are extremely limited in the sense that they do not encompass much...

  4. Computer-Assisted Visual Search/Decision Aids as a Training Tool for Mammography

    National Research Council Canada - National Science Library

    Nodine, Calvin

    2000-01-01

    .... In the first two years we carried out two experiments. The first equated experience by comparing perceptual skills of expert radiologists with lay people searching non-medical pictorial scenes for hidden targets...

  5. The perception of hazard. I. Hazard analysis and the contribution of visual search to hazard perception

    Energy Technology Data Exchange (ETDEWEB)

    Blignaut, C J.H.

    1979-09-01

    This is a study of the ability of miners to perceive warnings of imminent danger that a fall of rock will occur. Features of 745 rock falls (other than outbursts) were studied, and an experiment was carried out on groups of experienced or novice gold miners in a simulated stope. Novice workers, when compared with experienced men, lack the ability to search adequately for dangerous loose rock, but their search skills can be improved significantly by training.

  6. Training and transfer of training in rapid visual search for camouflaged targets.

    Directory of Open Access Journals (Sweden)

    Mark B Neider

    Full Text Available Previous examinations of search under camouflage conditions have reported that performance improves with training and that training can engender near perfect transfer to similar, but novel camouflage-type displays [1]. What remains unclear, however, are the cognitive mechanisms underlying these training improvements and transfer benefits. On the one hand, improvements and transfer benefits might be associated with higher-level overt strategy shifts, such as through the restriction of eye movements to target-likely (background) display regions. On the other hand, improvements and benefits might be related to the tuning of lower-level perceptual processes, such as figure-ground segregation. To decouple these competing possibilities, we had one group of participants train on camouflage search displays and a control group train on non-camouflage displays. Critically, search displays were rapidly presented, precluding eye movements. Before and following training, all participants completed transfer sessions in which they searched novel displays. We found that search performance on camouflage displays improved with training. Furthermore, participants who trained on camouflage displays suffered no performance costs when searching novel displays following training. Our findings suggest that training to break camouflage is related to the tuning of perceptual mechanisms and not strategic shifts in overt attention.

  7. Inhibitory guidance in visual search: the case of movement-form conjunctions.

    Science.gov (United States)

    Dent, Kevin; Allen, Harriet A; Braithwaite, Jason J; Humphreys, Glyn W

    2012-02-01

    We used a probe-dot procedure to examine the roles of excitatory attentional guidance and distractor suppression in search for movement-form conjunctions. Participants in Experiment 1 completed a conjunction (moving X amongst moving Os and static Xs) and two single-feature (moving X amongst moving Os, and static X amongst static Os) conditions. "Active" participants searched for the target, whereas "passive" participants viewed the displays without responding. Subsequently, both groups located (left or right) a probe dot appearing in either an occupied or an unoccupied location. In the conjunction condition, the active group located probes presented on static distractors more slowly than probes presented on moving distractors, reversing the direction of the difference found within the passive group. This disadvantage for probes on static items was much stronger in conjunction than in single-feature search. The same pattern of results was replicated in Experiment 2, which used a go/no-go procedure. Experiment 3 extended the go/no-go procedure to the case of search for a static target and revealed increased probe localisation times as a consequence of active search, primarily for probes on moving distractor items. The results demonstrated attentional guidance by inhibition of distractors in conjunction search.

  8. C-State: an interactive web app for simultaneous multi-gene visualization and comparative epigenetic pattern search.

    Science.gov (United States)

    Sowpati, Divya Tej; Srivastava, Surabhi; Dhawan, Jyotsna; Mishra, Rakesh K

    2017-09-13

    Comparative epigenomic analysis across multiple genes presents a bottleneck for bench biologists working with NGS data. Despite the development of standardized peak analysis algorithms, the identification of novel epigenetic patterns and their visualization across gene subsets remains a challenge. We developed a fast and interactive web app, C-State (Chromatin-State), to query and plot chromatin landscapes across multiple loci and cell types. C-State has an interactive, JavaScript-based graphical user interface and runs locally in modern web browsers that are pre-installed on all computers, thus eliminating the need for cumbersome data transfer, pre-processing and prior programming knowledge. C-State is unique in its ability to extract and analyze multi-gene epigenetic information. It allows for powerful GUI-based pattern searching and visualization. We include a case study to demonstrate its potential for identifying user-defined epigenetic trends in context of gene expression profiles.

  9. Working-memory capacity predicts the executive control of visual search among distractors: the influences of sustained and selective attention.

    Science.gov (United States)

    Poole, Bradley J; Kane, Michael J

    2009-07-01

    Variation in working-memory capacity (WMC) predicts individual differences in only some attention-control capabilities. Whereas higher WMC subjects outperform lower WMC subjects in tasks requiring the restraint of prepotent but inappropriate responses, and the constraint of attentional focus to target stimuli against distractors, they do not differ in prototypical visual-search tasks, even those that yield steep search slopes and engender top-down control. The present three experiments tested whether WMC, as measured by complex memory span tasks, would predict search latencies when the 1-8 target locations to be searched appeared alone, versus appearing among distractor locations to be ignored, with the latter requiring selective attentional focus. Subjects viewed target-location cues and then fixated on those locations over either long (1,500-1,550 ms) or short (300 ms) delays. Higher WMC subjects identified targets faster than did lower WMC subjects only in the presence of distractors and only over long fixation delays. WMC thus appears to affect subjects' ability to maintain a constrained attentional focus over time.

  10. The effects of memory load and stimulus relevance on the EEG during a visual selective memory search task : An ERP and ERD/ERS study

    NARCIS (Netherlands)

    Gomarus, HK; Althaus, M; Wijers, AA; Minderaa, RB

    Objective: Psychophysiological correlates of selective attention and working memory were investigated in a group of 18 healthy children using a visually presented selective memory search task. Methods: Subjects had to memorize one (load 1) or three (load 3) letters (memory set) and search for these

  11. Discriminability and dimensionality effects in visual search for featural conjunctions: a functional pop-out.

    Science.gov (United States)

    Dehaene, S

    1989-07-01

    Treisman and Gelade's (1980) feature-integration theory of attention states that a scene must be serially scanned before the objects in it can be accurately perceived. Is serial scanning compatible with the speed observed in the perception of real-world scenes? Most real scenes consist of many more dimensions (color, size, shape, depth, etc.) than those generally found in search paradigms. Furthermore, real objects differ from each other along many of these dimensions. The present experiment assessed the influence of the total number of dimensions and target/distractor discriminability (the number of dimensions that suffice to separate a target from distractors) on search times for a conjunction of features. Search was always found to be serial. However, for the most discriminable targets, search rate was so fast that search times were in the same range as pop-out detection times. Apparently, greater discriminability enables subjects to direct attention at a faster rate and at only a fraction of the items in a scene.

  12. Analysis of internal and external validity criteria for a computerized visual search task: A pilot study.

    Science.gov (United States)

    Richard's, María M; Introzzi, Isabel; Zamora, Eliana; Vernucci, Santiago

    2017-01-01

    Inhibition is one of the main executive functions because of its fundamental role in cognitive and social development. Given the importance of reliable, computerized measures for assessing inhibitory performance, this research analyzes the internal and external validity criteria of a computerized conjunction search task designed to evaluate the role of perceptual inhibition. A sample of 41 children (21 females and 20 males) aged between 6 and 11 years (M = 8.49, SD = 1.47), of middle socio-economic level, intentionally selected from a privately managed school in Mar del Plata (Argentina), was assessed. The Conjunction Search Task from the TAC Battery and the Coding and Symbol Search tasks from the Wechsler Intelligence Scale for Children were used. Overall, the results confirm that the perceptual inhibition task from the TAC presents solid indices of internal and external validity, making it a valid instrument for measuring this process.

  13. Visual search for motion-form conjunctions: is form discriminated within the motion system?

    Science.gov (United States)

    von Mühlenen, A; Müller, H J

    2001-06-01

    Motion-form conjunction search can be more efficient when the target is moving (a moving 45 degrees tilted line among moving vertical and stationary 45 degrees tilted lines) rather than stationary. This asymmetry may be due to aspects of form being discriminated within a motion system representing only moving items, whereas discrimination of stationary items relies on a static form system (J. Driver & P. McLeod, 1992). Alternatively, it may be due to search exploiting differential motion velocity and direction signals generated by the moving-target and distractor lines. To decide between these alternatives, 4 experiments systematically varied the motion-signal information conveyed by the moving target and distractors while keeping their form difference salient. Moving-target search was found to be facilitated only when differential motion-signal information was available. Thus, there is no need to assume that form is discriminated within the motion system.

  14. Comparing the Precision of Information Retrieval of MeSH-Controlled Vocabulary Search Method and a Visual Method in the Medline Medical Database.

    Science.gov (United States)

    Hariri, Nadjla; Ravandi, Somayyeh Nadi

    2014-01-01

    Medline is one of the most important databases in the biomedical field. One of the most important hosts for Medline is Elton B. Stephens Co. (EBSCO), which offers several search methods that can be used according to the needs of the user; visual search and MeSH-controlled search are among the most common. The goal of this research was to compare the precision of the sources retrieved from the EBSCO Medline database using the MeSH-controlled and visual search methods. This research was a semi-empirical study. In training workshops held in 2012, 70 students of higher education from different educational departments of Kashan University of Medical Sciences were taught the MeSH-controlled and visual search methods. Then, the precision of 300 searches made by these students was calculated based on the Best Precision, Useful Precision, and Objective Precision formulas and analyzed in SPSS software using the independent samples T test; the three precision measures obtained with the three formulas were compared between the two search methods. The mean precision of the visual method was greater than that of the MeSH-controlled search for all three types of precision, i.e. Best Precision, Useful Precision, and Objective Precision, and the mean precisions of the two methods differed significantly across searches. Fifty-three percent of the participants in the research also mentioned that the use of the combination of the two methods produced better results. For users, it is more appropriate to use a natural-language-based method, such as the visual method, in the EBSCO Medline host than to use the controlled method, which requires users to use special keywords. The potential reason for their preference was that the visual method allowed them more freedom of action.
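    The comparison rests on precision, the proportion of retrieved records judged relevant. The record above does not reproduce the exact Best, Useful, and Objective Precision formulas, so the sketch below only illustrates the generic precision calculation on hypothetical relevance judgements; the study-specific variants presumably differ in which judgements count toward the numerator.

    ```python
    def precision(relevance_judgements):
        """Proportion of retrieved records judged relevant (generic precision)."""
        if not relevance_judgements:
            return 0.0
        return sum(relevance_judgements) / len(relevance_judgements)

    # Hypothetical judgements (1 = relevant, 0 = not) for the first ten records
    # retrieved by each search method for the same query
    visual_search_hits = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
    mesh_search_hits   = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]

    print(f"visual method precision: {precision(visual_search_hits):.2f}")
    print(f"MeSH method precision:   {precision(mesh_search_hits):.2f}")
    ```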

  15. Collaborative Video Search Combining Video Retrieval with Human-Based Visual Inspection

    NARCIS (Netherlands)

    Hudelist, M.A.; Cobârzan, C.; Beecks, C.; van de Werken, Rob; Kletz, S.; Hürst, W.O.; Schoeffmann, K.

    2016-01-01

    We propose a novel video browsing approach that aims at optimally integrating traditional, machine-based retrieval methods with an interface design optimized for human browsing performance. Advanced video retrieval and filtering (e.g., via color and motion signatures, and visual concepts) on a

  16. The tug of war between phonological, semantic and shape information in language-mediated visual search

    NARCIS (Netherlands)

    Hüttig, F.; McQueen, J.M.

    2007-01-01

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with beker, 'beaker' for example, the display contained phonological (a beaver, bever),

  17. The Tug of War between Phonological, Semantic and Shape Information in Language-Mediated Visual Search

    Science.gov (United States)

    Huettig, Falk; McQueen, James M.

    2007-01-01

    Experiments 1 and 2 examined the time-course of retrieval of phonological, visual-shape and semantic knowledge as Dutch participants listened to sentences and looked at displays of four pictures. Given a sentence with "beker," "beaker," for example, the display contained phonological (a beaver, "bever"), shape (a…

  18. Making virtual reality a reality: visualization rooms revolutionize the search for oil and gas

    Energy Technology Data Exchange (ETDEWEB)

    Smith, M.

    2001-11-01

    Visualization chambers, state-of-the-art versions of the 3-D cinema films of the 1950s, made possible with the arrival of supercomputers, are popping up in the offices of most major-league explorers in Calgary, Houston and elsewhere. Combining rapid-fire networking, powerful computers, integrated software and digital projection systems, visualization rooms display seismic and other data in images that appear to lift off the screen and float in front of it. The display allows participants to work with stereoscopic subsurface simulations in well-lit rooms where they can reference notes, printouts and drawings; enables the exploration team to gather close to the screen for discussion and inspection of minute details; improves the ability to understand huge data sets; speeds the process of arriving at effective drilling decisions; and encourages and facilitates collaborative work among people of different disciplines (geologists, engineers, geophysicists), bringing them together in one place in front of a giant screen, where everyone can see the same data all at once. Various examples of the technology's successes are described. The technology does not come cheap; it may cost anywhere from $500,000 to $3 million for a visualization room, but considering that drilling a single well may cost up to $40 million, visualization technology is not considered to be a huge expense in terms of exploration.

  19. Sonification and Visualization of Predecisional Information Search: Identifying Toolboxes in Children

    Science.gov (United States)

    Betsch, Tilmann; Wünsche, Kirsten; Großkopf, Armin; Schröder, Klara; Stenmans, Rachel

    2018-01-01

    Prior evidence has suggested that preschoolers and elementary schoolers search information largely with no systematic plan when making decisions in probabilistic environments. However, this finding might be due to the insensitivity of standard classification methods that assume a lack of variance in decision strategies for tasks of the same kind.…

  20. The relationship between visual search and categorization of own- and other-age faces.

    Science.gov (United States)

    Craig, Belinda M; Lipp, Ottmar V

    2018-03-13

    Young adult participants are faster to detect young adult faces in crowds of infant and child faces than vice versa. These findings have been interpreted as evidence for more efficient attentional capture by own-age than other-age faces, but could alternatively reflect faster rejection of other-age than own-age distractors, consistent with the previously reported other-age categorization advantage: faster categorization of other-age than own-age faces. Participants searched for own-age faces in other-age backgrounds or vice versa. Extending the finding to different other-age groups, young adult participants were faster to detect young adult faces in both early adolescent (Experiment 1) and older adult backgrounds (Experiment 2). To investigate whether the own-age detection advantage could be explained by faster categorization and rejection of other-age background faces, participants in experiments 3 and 4 also completed an age categorization task. Relatively faster categorization of other-age faces was related to relatively faster search through other-age backgrounds on target absent trials but not target present trials. These results confirm that other-age faces are more quickly categorized and searched through and that categorization and search processes are related; however, this correlational approach could not confirm or reject the contribution of background face processing to the own-age detection advantage. © 2018 The British Psychological Society.

  1. Brain structures involved in visual search in the presence and absence of color singletons

    NARCIS (Netherlands)

    Talsma, D.; Coe, N.B.; Munoz, D.P.; Theeuwes, J.

    2010-01-01

    It is still debated to what degree top-down and bottom-up driven attentional control processes are subserved by shared or by separate mechanisms. Interactions between these attentional control forms were investigated using a rapid event-related fMRI design, using an attentional search task.

  2. Neurophysiological correlates of relatively enhanced local visual search in autistic adolescents

    OpenAIRE

    Manjaly, Zina M.; Bruning, Nicole; Neufang, Susanne; Stephan, Klaas E.; Brieber, Sarah; Marshall, John C.; Kamp-Becker, Inge; Remschmidt, Helmut; Herpertz-Dahlmann, Beate; Konrad, Kerstin; Fink, Gereon R.

    2007-01-01

    Previous studies found normal or even superior performance of autistic patients on visuospatial tasks requiring local search, like the Embedded Figures Task (EFT). A well-known interpretation of this is 'weak central coherence', i.e. autistic patients may show a reduced general ability to process information in its context and may therefore have a tendency to favour local over global aspects of information processing. An alternative view is that the local processing advantage in the EFT may r...

  3. Perceptual grouping and attention in visual search for features and for objects.

    Science.gov (United States)

    Treisman, A

    1982-04-01

    This article explores the effects of perceptual grouping on search for targets defined by separate features or by conjunction of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.

  4. An evaluation for spatial resolution, using a single target on a medical image

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyung Sung [Dept. of Radiotechnology, Cheju Halla University, Cheju (Korea, Republic of)

    2016-12-15

    Hitherto, spatial resolution has commonly been evaluated with test patterns or phantoms built around specific distances (from close to far) between two objects (double targets). The shortcoming of this evaluation method is that the measurable resolution is restricted to the target distances of the phantoms made for the test. To solve this problem, this study proposes and verifies a new method for efficiently testing spatial resolution with a single target. For this research, I used the point spread function (PSF) and the just-noticeable difference (JND) to derive the proposed measure of spatial resolution, and then carried out experiments with commonly used phantoms to verify the evaluation hypothesis inferred from this method. For the analysis, I used a LabVIEW program to extract a line of pixels from the digital image. The result was consistent with the single-target spatial-resolution hypothesis. The findings of the experiment show that a single target alone can be sufficient for the relative evaluation of spatial resolution in a digital image; in other words, the limitation of the traditional double-target evaluation method can be overcome by the new single-target method.
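    The method described extracts a line of pixels through a single target and evaluates resolution from the spread of the point spread function relative to a just-noticeable difference. The author used LabVIEW; the sketch below is only an illustrative Python analogue (the synthetic image and the full-width-at-half-maximum criterion are assumptions, not the paper's exact procedure) showing how a line profile can be read from a digital image and summarized as a single-target sharpness index.

    ```python
    import numpy as np

    def line_profile_fwhm(image, row):
        """Estimate the full width at half maximum (in pixels) of the brightest
        peak along one image row -- a crude single-target sharpness index."""
        profile = image[row].astype(float)
        profile -= profile.min()                  # remove the background offset
        half_max = profile.max() / 2.0
        above = np.where(profile >= half_max)[0]  # pixels at or above half maximum
        return above[-1] - above[0] + 1 if above.size else 0

    # Hypothetical image: a single blurred bright target on a dark background
    x = np.arange(256)
    row_values = 40 + 200 * np.exp(-((x - 128) ** 2) / (2 * 4.0 ** 2))
    image = np.tile(row_values, (256, 1))

    print(f"FWHM of the target profile: {line_profile_fwhm(image, row=128)} pixels")
    ```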

  5. Neural Activity Associated with Visual Search for Line Drawings on AAC Displays: An Exploration of the Use of fMRI.

    Science.gov (United States)

    Wilkinson, Krista M; Dennis, Nancy A; Webb, Christina E; Therrien, Mari; Stradtman, Megan; Farmer, Jacquelyn; Leach, Raevynn; Warrenfeltz, Megan; Zeuner, Courtney

    2015-01-01

    Visual aided augmentative and alternative communication (AAC) consists of books or technologies that contain visual symbols to supplement spoken language. A common observation concerning some forms of aided AAC is that message preparation can be frustratingly slow. We explored the uses of fMRI to examine the neural correlates of visual search for line drawings on AAC displays in 18 college students under two experimental conditions. Under one condition, the location of the icons remained stable and participants were able to learn the spatial layout of the display. Under the other condition, constant shuffling of the locations of the icons prevented participants from learning the layout, impeding rapid search. Brain activation was contrasted under these conditions. Rapid search in the stable display was associated with greater activation of cortical and subcortical regions associated with memory, motor learning, and dorsal visual pathways compared to the search in the unpredictable display. Rapid search for line drawings on stable AAC displays involves not just the conceptual knowledge of the symbol meaning but also the integration of motor, memory, and visual-spatial knowledge about the display layout. Further research must study individuals who use AAC, as well as the functional effect of interventions that promote knowledge about array layout.

  6. When and why might a Computer Aided Detection (CAD) system interfere with visual search? An eye-tracking study

    Science.gov (United States)

    Drew, Trafton; Cunningham, Corbin; Wolfe, Jeremy

    2012-01-01

    Rationale and Objectives: Computer Aided Detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: 47 naïve observers in two studies were asked to search for a target embedded in 1/f^2.4 noise while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, while other observers completed the study without CAD. In Experiment 1, the CAD system's primary function was to tell observers where the target might be. In Experiment 2, CAD provided information about target identity. Results: In Experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22)=4.74); however, targets that were not marked by the CAD system were missed more frequently than equivalent targets in No CAD blocks of the experiment (t(22)=7.02). In Experiment 2, there was no significant benefit of CAD, but also no significant cost on sensitivity to unmarked targets (t(22)=0.6, p = n.s.). Finally, in both experiments, CAD produced reliable changes in eye movements: CAD observers examined a lower total percentage of the search area than the No CAD observers (Ex 1: t(48)=3.05). Conclusions: CAD signals do not combine with observers' unaided performance in a straight-forward manner. CAD can engender a sense of certainty that can lead to incomplete search and elevated chances of missing unmarked stimuli. PMID:22958720

  7. Event-related potentials dissociate perceptual from response-related age effects in visual search

    DEFF Research Database (Denmark)

    Wiegand, Iris; Müller, Hermann J.; Finke, Kathrin

    2013-01-01

    measures with lateralized event-related potentials of younger and older adults performing a compound-search task, in which the target-defining dimension of a pop-out target (color/shape) and the response-critical target feature (vertical/horizontal stripes) varied independently across trials. Slower responses in older participants were associated with age differences in all analyzed event-related potentials from perception to response, indicating that behavioral slowing originates from multiple stages within the information-processing stream. Furthermore, analyses of carry-over effects from one trial...

  8. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    Full Text Available With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  9. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
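    The ingest pipeline sketched in the abstract (harvest, enrich with sentiment and boundary codes, stream through Kafka, index into Solr) can be pictured with a small consumer loop. This is not the CGA's code: the client libraries (kafka-python, pysolr), topic name, Solr core, and document fields are all illustrative assumptions.

    ```python
    import json

    import pysolr                      # assumed Solr client; the paper does not name one
    from kafka import KafkaConsumer    # assumed Kafka client; the paper does not name one

    SOLR_URL = "http://localhost:8983/solr/geotweets"   # hypothetical Solr core
    TOPIC = "geo-tweets"                                 # hypothetical Kafka topic

    def enrich(tweet):
        """Attach the kind of derived fields the BOP adds on ingest (sentiment,
        census/admin boundary codes). Both enrichments are stubbed here."""
        tweet["sentiment"] = 0.0          # placeholder for a scikit-learn classifier score
        tweet["admin_code"] = "unknown"   # placeholder for a point-in-polygon boundary lookup
        return tweet

    def run():
        consumer = KafkaConsumer(TOPIC, bootstrap_servers="localhost:9092",
                                 value_deserializer=lambda m: json.loads(m.decode("utf-8")))
        solr = pysolr.Solr(SOLR_URL, always_commit=False)
        batch = []
        for message in consumer:
            batch.append(enrich(message.value))
            if len(batch) >= 1000:        # index in batches to keep up with the stream
                solr.add(batch)
                batch.clear()

    if __name__ == "__main__":
        run()
    ```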

  10. Visual Foraging With Fingers and Eye Gaze

    Directory of Open Access Journals (Sweden)

    Ómar I. Jóhannesson

    2016-03-01

    Full Text Available A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints.

  11. Distinct neural networks for target feature versus dimension changes in visual search, as revealed by EEG and fMRI.

    Science.gov (United States)

    Becker, Stefanie I; Grubert, Anna; Dux, Paul E

    2014-11-15

    In visual search, responses are slowed, from one trial to the next, both when the target dimension changes (e.g., from a color target to a size target) and when the target feature changes (e.g., from a red target to a green target) relative to being repeated across trials. The present study examined whether such feature and dimension switch costs can be attributed to the same underlying mechanism(s). Contrary to this contention, an EEG study showed that feature changes influenced visual selection of the target (i.e., delayed N2pc onset), whereas dimension changes influenced the later process of response selection (i.e., delayed s-LRP onset). An fMRI study provided convergent evidence for the two-system view: Compared with repetitions, feature changes led to increased activation in the occipital cortex, and superior and inferior parietal lobules, which have been implicated in spatial attention. By contrast, dimension changes led to activation of a fronto-posterior network that is primarily linked with response selection (i.e., pre-motor cortex, supplementary motor area and frontal areas). Taken together, the results suggest that feature and dimension switch costs are based on different processes. Specifically, whereas target feature changes delay attention shifts to the target, target dimension changes interfere with later response selection operations. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  12. Effects of Symbol Brightness Cueing on Attention During a Visual Search of a Cockpit Display of Traffic Information

    Science.gov (United States)

    Johnson, Walter W.; Liao, Min-Ju; Granada, Stacie

    2003-01-01

    This study investigated visual search performance for target aircraft symbols on a Cockpit Display of Traffic Information (CDTI). Of primary interest was the influence of target brightness (intensity) and highlighting validity (search directions) on the ability to detect a target aircraft among distractor aircraft. Target aircraft were distinguished by an airspace course that conflicted with Ownship (that is, the participant's aircraft). The display could present all (homogeneous) bright aircraft, all (homogeneous) dim aircraft, or mixed bright and dim aircraft, with the target aircraft being either bright or dim. In the mixed intensity condition, participants may or may not have been instructed whether the target was bright or dim. Results indicated that highlighting validity led to faster detection times. However, instead of bright targets being detected faster, dim targets were found to be detected more slowly in the mixed intensity display than in the homogeneous display. This relative slowness may be due to a delay in confirming the dim aircraft to be a target when it was among brighter distractor aircraft. This hypothesis will be tested in future research. Funding for this work was provided by the Advanced Air Transportation Technologies Project of NASA's Airspace Operation Systems Program.

  13. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, both in the accuracy of the detection algorithms and in their performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade, composed of motion detection, depth computation, and edge detection, can have a significant impact in reducing the data that need to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.
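    The cascade described above is implemented on an FPGA, but its logic, cheap per-window feature tests that gate an expensive classifier, can be sketched in software. The sketch below is a simplified analogue using frame differencing for motion and a gradient-magnitude test for edges; the window size, thresholds, and the placeholder classifier are illustrative assumptions, and the depth stage of the original cascade is omitted.

```python
# Software analogue of a visual-feature-directed search cascade: cheap
# per-window tests (motion, edge density) decide which windows reach the
# expensive classifier. Not the paper's FPGA datapath; all parameters
# are illustrative.
import numpy as np

def windows(shape, size=32, step=32):
    h, w = shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            yield y, x, size

def cascade_detect(prev_frame, frame, classify, motion_thr=8.0, edge_thr=10.0):
    """Return windows that pass the motion and edge stages and are
    accepted by classify(patch) -> bool (e.g., a face classifier)."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy)
    hits = []
    for y, x, s in windows(frame.shape):
        if diff[y:y+s, x:x+s].mean() < motion_thr:
            continue                       # stage 1: no motion -> skip window
        if edges[y:y+s, x:x+s].mean() < edge_thr:
            continue                       # stage 2: too few edges -> skip window
        if classify(frame[y:y+s, x:x+s]):  # stage 3: full classifier
            hits.append((y, x, s))
    return hits

# Example with random frames and a trivial stand-in classifier.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (240, 320), dtype=np.uint8)
f1 = rng.integers(0, 256, (240, 320), dtype=np.uint8)
print(len(cascade_detect(f0, f1, classify=lambda patch: patch.mean() > 128)))
```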

  14. The dependencies of fronto-parietal BOLD responses evoked by covert visual search suggest eye-centred coding.

    Science.gov (United States)

    Atabaki, A; Dicke, P W; Karnath, H-O; Thier, P

    2013-04-01

    Visual scenes explored covertly are initially represented in a retinal frame of reference (FOR). On the other hand, 'later' stages of the cortical network allocating spatial attention most probably use non-retinal or non-eye-centred representations as they may ease the integration of different sensory modalities for the formation of supramodal representations of space. We tested if the cortical areas involved in shifting covert attention are based on eye-centred or non-eye-centred coding by using functional magnetic resonance imaging. Subjects were scanned while detecting a target item (a regularly oriented 'L') amidst a set of distractors (rotated 'L's). The array was centred either 5° right or left of the fixation point, independent of eye-gaze orientation, the latter varied in three steps: straight relative to the head, 10° left or 10° right. A quantitative comparison of the blood-oxygen-level-dependent (BOLD) responses for the three eye-gaze orientations revealed stronger BOLD responses in the right intraparietal sulcus (IPS) and the right frontal eye field (FEF) for search in the contralateral (i.e. left) eye-centred space, independent of whether the array was located in the right or left head-centred hemispace. The left IPS showed the reverse pattern, i.e. an activation by search in the right eye-centred hemispace. In other words, the IPS and the right FEF, members of the cortical network underlying covert search, operate in an eye-centred FOR. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  15. The control of single-colour and multiple-colour visual search by attentional templates in working memory and in long-term memory

    OpenAIRE

    Grubert, Anna; Carlisle, N.; Eimer, Martin

    2016-01-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-colour attentional guidance is possible when target colours remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colours that were specified by cue ...

  16. A Multi-Area Stochastic Model for a Covert Visual Search Task.

    Directory of Open Access Journals (Sweden)

    Michael A Schwemmer

    Full Text Available Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral intraparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
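    The record above builds its multi-area model from coupled leaky competing accumulators. The sketch below simulates a single generic leaky competing accumulator race (one target unit versus several distractor units) to show the basic dynamics; it is not the paper's fitted multi-area model, and all parameter values are illustrative.

```python
# Minimal leaky competing accumulator (LCA) race: leaky integration,
# lateral inhibition, additive noise, first unit to threshold wins.
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.3,
              threshold=1.0, dt=0.01, max_t=5.0, rng=None):
    rng = rng or np.random.default_rng()
    x = np.zeros(len(inputs))
    t = 0.0
    while t < max_t:
        lateral = inhibition * (x.sum() - x)           # inhibition from competitors
        dx = (np.asarray(inputs) - leak * x - lateral) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)                    # activations stay non-negative
        t += dt
        if x.max() >= threshold:
            return int(x.argmax()), t                  # (chosen unit, RT in seconds)
    return int(x.argmax()), max_t

# Unit 0 is the target (stronger input); the rest are distractors.
rng = np.random.default_rng(1)
choices, rts = zip(*(lca_trial([1.2, 0.9, 0.9, 0.9], rng=rng) for _ in range(500)))
print("accuracy:", np.mean(np.array(choices) == 0), "mean RT:", np.mean(rts))
```

    Varying the number of distractor units reproduces the kind of set-size effect the record describes, which is one reason this model class is attractive for visual search data.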

  17. Reward Draws the Eye, Uncertainty Holds the Eye: Associative Learning Modulates Distractor Interference in Visual Search

    Directory of Open Access Journals (Sweden)

    Stephan Koenig

    2017-07-01

    Full Text Available Stimuli in our sensory environment differ with respect to their physical salience, but may moreover acquire motivational salience by association with reward. If we repeatedly observe that reward is available in the context of a particular cue but absent in the context of another cue, the former typically attracts more attention than the latter. However, we may also encounter cues uncorrelated with reward. A cue with 50% reward contingency may induce an average reward expectancy but at the same time induces high reward uncertainty. In the current experiment we examined how both values, reward expectancy and uncertainty, affected overt attention. Two different colors were established as predictive cues for low reward and high reward, respectively. A third color was followed by high reward on 50% of the trials and thus induced uncertainty. Colors were then introduced as distractors during search for a shape target, and we examined the relative potential of the color distractors to capture and hold the first fixation. We observed that capture frequency corresponded to reward expectancy, while capture duration corresponded to uncertainty. The results may suggest that, within a trial, reward expectancy is represented in an earlier time window than uncertainty.

  18. Collinear masking effect in visual search is independent of perceptual salience.

    Science.gov (United States)

    Jingling, Li; Lu, Yi-Hui; Cheng, Miao; Tseng, Chia-Huei

    2017-07-01

    Searching for a target in a salient region should be easier than looking for one in a nonsalient region. However, we previously discovered a contradictory phenomenon in which a local target in a salient structure was more difficult to find than one in the background. The salient structure was constructed of orientation singletons aligned to each other to form a collinear structure. In the present study, we undertake to determine whether such a masking effect was a result of salience competition between a global structure and the local target. In the first 3 experiments, we increased the salience value of the local target with the hope of adding to its competitive advantage and eventually eliminating the masking effect; nevertheless, the masking effect persisted. In an additional 2 experiments, we reduced salience of the global collinear structure by altering the orientation of the background bars and the masking effect still emerged. Our salience manipulations were validated by a controlled condition in which the global structure was grouped noncollinearly. In this case, local target salience increase (e.g., onset) or global distractor salience reduction (e.g., randomized flanking orientations) effectively removed the facilitation effect of the noncollinear structure. Our data suggest that salience competition is unlikely to explain the collinear masking effect, and other mechanisms such as contour integration, border formation, or the crowding effect may be prospective candidates for further investigation.

  19. Response Time, Visual Search Strategy, and Anticipatory Skills in Volleyball Players

    Directory of Open Access Journals (Sweden)

    Alessandro Piras

    2014-01-01

    Full Text Available This paper aimed at comparing expert and novice volleyball players in a visuomotor task using realistic stimuli. Videos of a volleyball setter performing offensive action were presented to participants, while their eye movements were recorded by a head-mounted video based eye tracker. Participants were asked to foresee the direction (forward or backward) of the setter’s toss by pressing one of two keys. Key-press response time, response accuracy, and gaze behaviour were measured from the first frame showing the setter’s hand-ball contact to the button pressed by the participants. Experts were faster and more accurate in predicting the direction of the setting than novices, showing accurate predictions when they used a search strategy involving fewer fixations of longer duration, as well as spending less time in fixating all display areas from which they extract critical information for the judgment. These results are consistent with the view that superior performance in experts is due to their ability to efficiently encode domain-specific information that is relevant to the task.

  20. From Single Target to Multitarget/Network Therapeutics in Alzheimer’s Therapy

    Directory of Open Access Journals (Sweden)

    Hailin Zheng

    2014-01-01

    Full Text Available Brain network dysfunction in Alzheimer’s disease (AD) involves many proteins (enzymes), processes and pathways, which overlap and influence one another in AD pathogenesis. This complexity challenges the dominant paradigm in drug discovery: a single-target drug for a single mechanism. Although this paradigm has achieved considerable success in some particular diseases, it has failed to provide effective approaches to AD therapy. Network medicines may offer alternative hope for effective treatment of AD and other complex diseases. In contrast to the single-target drug approach, network medicines employ a holistic approach to restore network dysfunction by simultaneously targeting key components in disease networks. In this paper, we explore several drugs either in the clinic or under development for AD therapy in terms of their design strategies, diverse mechanisms of action and disease-modifying potential. These drugs act as multi-target ligands and may serve as leads for further development as network medicines.

  1. Involuntary top-down control by search-irrelevant features: Visual working memory biases attention in an object-based manner.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2018-03-01

    Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategical and voluntary usage of color. Instead, search-irrelevant features biased attention obligatorily arguing for involuntary top-down control by object-based VWM templates. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Visual search for feature conjunctions: an fMRI study comparing alcohol-related neurodevelopmental disorder (ARND) to ADHD.

    Science.gov (United States)

    O'Conaill, Carrie R; Malisza, Krisztina L; Buss, Joan L; Bolster, R Bruce; Clancy, Christine; de Gervai, Patricia Dreessen; Chudley, Albert E; Longstaffe, Sally

    2015-01-01

    Alcohol-related neurodevelopmental disorder (ARND) falls under the umbrella of fetal alcohol spectrum disorder (FASD). Diagnosis of ARND is difficult because individuals do not demonstrate the characteristic facial features associated with fetal alcohol syndrome (FAS). While attentional problems in ARND are similar to those found in attention-deficit/hyperactivity disorder (ADHD), the underlying impairment in attention pathways may be different. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) were conducted at 3 T. Sixty-three children aged 10 to 14 years diagnosed with ARND, ADHD, and typically developing (TD) controls performed a single-feature and a feature-conjunction visual search task. Dorsal and ventral attention pathways were activated during both attention tasks in all groups. Significantly greater activation was observed in ARND subjects during a single-feature search as compared to TD and ADHD groups, suggesting ARND subjects require greater neural recruitment to perform this simple task. ARND subjects appear unable to effectively use the very efficient automatic perceptual 'pop-out' mechanism employed by TD and ADHD groups during presentation of the disjunction array. By comparison, activation was lower in ARND compared to TD and ADHD subjects during the more difficult conjunction search task as compared to the single-feature search. Analysis of DTI data using tract-based spatial statistics (TBSS) showed areas of significantly lower fractional anisotropy (FA) and higher mean diffusivity (MD) in the right inferior longitudinal fasciculus (ILF) in ARND compared to TD subjects. Damage to the white matter of the ILF may compromise the ventral attention pathway and may require subjects to use the dorsal attention pathway, which is associated with effortful top-down processing, for tasks that should be automatic. Decreased functional activity in the right temporoparietal junction (TPJ) of ARND subjects may be due to a

  3. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload

  4. Functional interaction between right parietal and bilateral frontal cortices during visual search tasks revealed using functional magnetic imaging and transcranial direct current stimulation.

    Directory of Open Access Journals (Sweden)

    Amanda Ellison

    Full Text Available The existence of a network of brain regions which are activated when one undertakes a difficult visual search task is well established. Two primary nodes on this network are right posterior parietal cortex (rPPC) and right frontal eye fields. Both have been shown to be involved in the orientation of attention, but the contingency that the activity of one of these areas has on the other is less clear. We sought to investigate this question by using transcranial direct current stimulation (tDCS) to selectively decrease activity in rPPC and then asking participants to perform a visual search task whilst undergoing functional magnetic resonance imaging. Comparison with a condition in which sham tDCS was applied revealed that cathodal tDCS over rPPC causes a selective bilateral decrease in frontal activity when performing a visual search task. This result demonstrates for the first time that premotor regions within the frontal lobe and rPPC are not only necessary to carry out a visual search task, but that they work together to bring about normal function.

  5. What Top-Down Task Sets Do for Us: An ERP Study on the Benefits of Advance Preparation in Visual Search

    Science.gov (United States)

    Eimer, Martin; Kiss, Monika; Nicholas, Susan

    2011-01-01

    When target-defining features are specified in advance, attentional target selection in visual search is controlled by preparatory top-down task sets. We used ERP measures to study voluntary target selection in the absence of such feature-specific task sets, and to compare it to selection that is guided by advance knowledge about target features.…

  6. The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes.

    Science.gov (United States)

    Azizi, Elham; Abel, Larry A; Stainer, Matthew J

    2017-02-01

    Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

  7. Dual Target Search is Neither Purely Simultaneous nor Purely Successive.

    Science.gov (United States)

    Cave, Kyle R; Menneer, Tamaryn; Nomani, Mohammad S; Stroud, Michael J; Donnelly, Nick

    2017-08-31

    Previous research shows that visual search for two different targets is less efficient than search for a single target. Stroud, Menneer, Cave and Donnelly (2012) concluded that two target colours are represented separately based on modeling the fixation patterns. Although those analyses provide evidence for two separate target representations, they do not show whether participants search simultaneously for both targets, or first search for one target and then the other. Some studies suggest that multiple target representations are simultaneously active, while others indicate that search can be voluntarily simultaneous, or switching, or a mixture of both. Stroud et al.'s participants were not explicitly instructed to use any particular strategy. These data were revisited to determine which strategy was employed. Each fixated item was categorised according to whether its colour was more similar to one target or the other. Once an item similar to one target is fixated, the next fixated item is more likely to be similar to that target than the other, showing that at a given moment during search, one target is generally favoured. However, the search for one target is not completed before search for the other begins. Instead, there are often short runs of one or two fixations to distractors similar to one target, with each run followed by a switch to the other target. Thus, the results suggest that one target is more highly weighted than the other at any given time, but not to the extent that search is purely successive.

  8. You look familiar, but I don’t care: Lure rejection in hybrid visual and memory search is not based on familiarity

    Science.gov (United States)

    Wolfe, Jeremy M.; Boettcher, Sage E. P.; Josephs, Emilie L.; Cunningham, Corbin A.; Drew, Trafton

    2015-01-01

    In “hybrid” search tasks, observers hold multiple possible targets in memory while searching for those targets amongst distractor items in visual displays. Wolfe (2012) found that, if the target set is held constant over a block of trials, RTs in such tasks were a linear function of the number of items in the visual display and a linear function of the log of the number of items held in memory. However, in such tasks, the targets can become far more familiar than the distractors. Does this “familiarity” – operationalized here as the frequency and recency with which an item has appeared – influence performance in hybrid tasks? In Experiment 1, we compared searches where distractors appeared with the same frequency as the targets to searches where all distractors were novel. Distractor familiarity did not have any reliable effect on search. In Experiment 2, most distractors were novel but some critical distractors were as common as the targets while others were 4× more common. Familiar distractors did not produce false alarm errors, though they did slightly increase response times (RTs). In Experiment 3, observers successfully searched for the new, unfamiliar item among distractors that, in many cases, had been seen only once before. We conclude that when the memory set is held constant for many trials, item familiarity alone does not cause observers to mistakenly confuse targets with distractors. PMID:26191615

  9. Face Recognition and Visual Search Strategies in Autism Spectrum Disorders: Amending and Extending a Recent Review by Weigelt et al.

    Directory of Open Access Journals (Sweden)

    Julia Tang

    Full Text Available The purpose of this review was to build upon a recent review by Weigelt et al., which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases, CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed, were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded; and 12 were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD were discussed.

  10. Theoretical Issues and Methodological Implications in Researching Visual Search Behaviours: A Preliminary Study Comparing the Cognitive and Ecologic Paradigms

    Directory of Open Access Journals (Sweden)

    José Afonso

    2013-09-01

    Full Text Available A number of research papers have been devoted to understanding the mechanisms underpinning successful decision-making in sports, and analysis of eye movements has received special attention in this regard. A thorough reading of the existing literature shows that research on ocular fixations typically requires the gaze to remain within the same location for at least 100 milliseconds before a fixation is counted. For average eye-tracking systems, this means using at least three frames for each fixation. However, ecological psychology has claimed that as little as 16.67 milliseconds might suffice to capture relevant information, implying that merely one frame is enough to consider that a fixation has been made. The goal of this experiment was to directly compare two systems (one frame-one fixation versus three frames-one fixation) for coding information concerning eye movements in a representative volleyball task in an in situ condition. Specifically, it was intended to analyse emerging differences and their meaning. Results exhibited statistically significant differences with regard to search rate (number of fixations), number of fixation locations, and mean fixation duration. Analysing fixation locations, it was apparent that the ecological paradigm for considering visual fixations afforded supplementary information. Furthermore, the additional emerging cues appeared to be meaningful, and the level of noise introduced was very low. It is suggested that future research in eye movements consider using the one frame-one fixation approach, instead of the traditional three frames-one fixation set.

  11. Effects of mora deletion, nonword repetition, rapid naming, and visual search performance on beginning reading in Japanese.

    Science.gov (United States)

    Kobayashi, Maya Shiho; Haynes, Charles W; Macaruso, Paul; Hook, Pamela E; Kato, Junko

    2005-06-01

    This study examined the extent to which mora deletion (phonological analysis), nonword repetition (phonological memory), rapid automatized naming (RAN), and visual search abilities predict reading in Japanese kindergartners and first graders. Analogous abilities have been identified as important predictors of reading skills in alphabetic languages like English. In contrast to English, which is based on grapheme-phoneme relationships, the primary components of Japanese orthography are two syllabaries, hiragana and katakana (collectively termed "kana"), and a system of morphosyllabic symbols (kanji). Three RAN tasks (numbers, objects, syllabary symbols [hiragana]) were used with kindergartners, with an additional kanji RAN task included for first graders. Reading measures included accuracy and speed of passage reading for kindergartners and first graders, and reading comprehension for first graders. In kindergartners, hiragana RAN and number RAN were the only significant predictors of reading accuracy and speed. In first graders, kanji RAN and hiragana RAN predicted reading speed, whereas accuracy was predicted by mora deletion. Reading comprehension was predicted by kanji RAN, mora deletion, and nonword repetition. Although number RAN did not contribute unique variance to any reading measure, it correlated highly with kanji RAN. Implications of these findings for research and practice are discussed.

  12. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    Science.gov (United States)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they
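    Both MagIC records emphasize template-based contributions and internal checks that maintain data integrity before upload. The sketch below shows the kind of check such a workflow might run on a tab-delimited results file: required columns present and numeric fields parseable. The column names used here are assumptions for the example, not the official MagIC template definition.

```python
# Hedged sketch of template integrity checks before upload: verify that
# required columns exist and that numeric fields parse. Column names are
# illustrative assumptions.
import csv

REQUIRED = {"er_location_name", "average_age", "average_inc", "average_dec"}

def check_contribution(path):
    """Return a list of problems found in a tab-delimited results file."""
    problems = []
    with open(path, newline="") as fh:
        reader = csv.DictReader(fh, delimiter="\t")
        missing = REQUIRED - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        for lineno, row in enumerate(reader, start=2):
            for col in ("average_age", "average_inc", "average_dec"):
                try:
                    float(row[col])
                except (TypeError, ValueError):
                    problems.append(f"line {lineno}: non-numeric {col}: {row[col]!r}")
    return problems

# Write a tiny example file (one good row, one bad row) and check it.
with open("example_results.txt", "w") as fh:
    fh.write("er_location_name\taverage_age\taverage_inc\taverage_dec\n")
    fh.write("Site A\t12.5\t55.2\t350.1\n")
    fh.write("Site B\tunknown\t48.0\t10.3\n")
print(check_contribution("example_results.txt"))
```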

  13. Children's Visual Scanning of Textual Documents: Effects of Document Organization, Search Goals, and Metatextual Knowledge

    Science.gov (United States)

    Potocki, Anna; Ros, Christine; Vibert, Nicolas; Rouet, Jean-François

    2017-01-01

    This study examines children's strategies when scanning a document to answer a specific question. More specifically, we wanted to know whether they make use of organizers (i.e., headings) when searching and whether strategic search is related to their knowledge of reading strategies. Twenty-six French fifth graders were asked to search single-page…

  14. Different target-discrimination times can be followed by the same saccade-initiation timing in different stimulus conditions during visual searches

    Science.gov (United States)

    Tanaka, Tomohiro; Nishida, Satoshi

    2015-01-01

    The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344

  15. An Investigation of the Use of Real-time Image Mosaicing for Facilitating Global Spatial Awareness in Visual Search

    Science.gov (United States)

    Soung Yee, Anthony

    Three experiments have been completed to investigate whether and how a software technique called real-time image mosaicing applied to a restricted field of view (FOV) might influence target detection and path integration performance in simulated aerial search scenarios, representing local and global spatial awareness tasks respectively. The mosaiced FOV (mFOV) was compared to single FOV (sFOV) and one with double the single size (dFOV). In addition to advancing our understanding of visual information in mosaicing, the present study examines the advantages and limitations of a number of metrics used to evaluate performance in path integration tasks, with particular attention paid to measuring performance in identifying complex routes. The highlights of the results are summarized as follows, according to Experiments 1 through 3 respectively. 1. A novel response method for evaluating route identification performance was developed. Contrary to the surmised benefits of the mFOV relative to the sFOV and dFOV, no significant differences in performance were found for the relatively simple route shapes tested. Compared to the mFOV and dFOV conditions, target detection performance in the local task was found to be superior in the sFOV condition. 2. In order to appropriately quantify the observed differences in complex route selections made by the participants, a novel analysis method was developed using the Thurstonian Paired Comparisons Method. 3. To investigate the effect of display size and elevation angle (EA) in a complex route environment, a 2×3 experiment was conducted for the two spatial tasks, at a height selected from Experiment 2. Although no significant differences were found in the target detection task, contrasts in the Paired Comparisons Method results revealed that route identification performance was as hypothesised: mFOV > dFOV > sFOV for EA = 90°. Results were similar for EA = 45°, but with mFOV being no different than dFOV. As hypothesised, EA was found to have an effect
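    Experiment 2 above analyzes route selections with the Thurstonian Paired Comparisons Method. The sketch below implements the textbook Thurstone Case V scaling of a paired-comparison count matrix (win proportions converted to z-scores and averaged); the study's exact variant may differ, and the counts in the example are hypothetical.

```python
# Classic Thurstone Case V scaling from a paired-comparison count matrix.
# Offered as a generic sketch; not necessarily the study's exact analysis.
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """wins[i, j] = number of times option i was preferred over option j."""
    wins = np.asarray(wins, dtype=float)
    totals = wins + wins.T
    with np.errstate(invalid="ignore", divide="ignore"):
        p = np.where(totals > 0, wins / totals, 0.5)   # win proportions
    np.fill_diagonal(p, 0.5)
    p = np.clip(p, 0.01, 0.99)        # avoid infinite z-scores
    z = norm.ppf(p)                   # proportions -> z-scores
    scale = z.mean(axis=1)            # mean z per option
    return scale - scale.min()        # anchor lowest option at zero

# Three display conditions, e.g. mFOV vs dFOV vs sFOV (hypothetical counts).
counts = [[0, 14, 18],
          [6, 0, 15],
          [2, 5, 0]]
print(thurstone_case_v(counts))
```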

  16. Superfund TIO videos. Set B. Basics of administrative law, and prp search process: PRP search, information exchange and access. Part 3. Audio-Visual

    International Nuclear Information System (INIS)

    1990-01-01

    The videotape is divided into two sections. Section 1 identifies the various types of administrative hearings, including quasi-legislative, quasi-judicial, and hybrid types. Section 2 provides an overview of the PRP search process; explains how and when to issue Section 104(e) letters and administrative subpoenas; outlines the enforcement authorities available in cases of non-compliance; and describes the types of information that can be released to PRPs

  17. Ahead of the game : taking their cue from the gaming industry, visualization firms speed search for oil

    International Nuclear Information System (INIS)

    Smith, M.

    2007-01-01

    The video gaming industry has been the driver for sophisticated new memory and computation capabilities when it comes to developing the latest in visualization technology used by the oil and gas sector. A broad commercial market drives the entire graphics revolution forward for the benefit of all, including petroleum exploration companies. This article presented new visualization systems that have been deployed worldwide by companies such as Sun Valley, Landmark, Panoram, Halliburton and TouchTable Inc. Desktop visualization displays are getting larger, with better detail, resolution and less compressed data, making them easier on the user with less scrolling, less zooming and no switching from screen to screen. It was noted that with visualization technology, it is important to preserve resolution when viewing seismic data, particularly in the z-axis. However, the relatively small oil industry market is not big enough to provide the driver necessary to move projector technology forward very quickly. A changeover from analog to digital stereoscopic projection technology is one of the changes that has occurred. Digital light processing provides a brighter, clearer picture with better resolution, colour accuracy and stability. It was noted that the greatest advancement is the size of processing power which enables visualization in very large format, including tabletop interfaces and visualization rooms that provide wall-size high resolution or theatre-scale visualization. It was concluded that geologists, geophysicists, geocellular modelers and petrophysicists can prove the value of visualization rooms when planning wells. 3 figs

  18. Ahead of the game : taking their cue from the gaming industry, visualization firms speed search for oil

    Energy Technology Data Exchange (ETDEWEB)

    Smith, M.

    2007-07-15

    The video gaming industry has been the driver for sophisticated new memory and computation capabilities when it comes to developing the latest in visualization technology used by the oil and gas sector. A broad commercial market drives the entire graphics revolution forward for the benefit of all, including petroleum exploration companies. This article presented new visualization systems that have been deployed worldwide by companies such as Sun Valley, Landmark, Panoram, Halliburton and TouchTable Inc. Desktop visualization displays are getting larger, with better detail, resolution and less compressed data, making them easier on the user with less scrolling, less zooming and no switching from screen to screen. It was noted that with visualization technology, it is important to preserve resolution when viewing seismic data, particularly in the z-axis. However, the relatively small oil industry market is not big enough to provide the driver necessary to move projector technology forward very quickly. A changeover from analog to digital stereoscopic projection technology is one of the changes that has occurred. Digital light processing provides a brighter, clearer picture with better resolution, colour accuracy and stability. It was noted that the greatest advancement is the size of processing power which enables visualization in very large format, including tabletop interfaces and visualization rooms that provide wall-size high resolution or theatre-scale visualization. It was concluded that geologists, geophysicists, geocellular modelers and petrophysicists can prove the value of visualization rooms when planning wells. 3 figs.

  19. Visual search in the real world: Color vision deficiency affects peripheral guidance, but leaves foveal verification largely unaffected

    Directory of Open Access Journals (Sweden)

    Günter eKugler

    2015-12-01

    Full Text Available Background: People with color vision deficiencies report numerous limitations in daily life. However, they use basic color terms systematically and in a similar manner as people with normal color vision. We hypothesize that a possible explanation for this discrepancy between color perception and behavioral consequences might be found in the gaze behavior of people with color vision deficiency. Methods: A group of participants with color vision deficiencies and a control group performed several search tasks in a naturalistic setting on a lawn. Results: Search performance was similar in both groups in a color-unrelated search task as well as in a search for yellow targets. While searching for red targets, color vision deficient participants exhibited a strongly degraded performance. This was closely matched by the number of fixations on red objects shown by the two groups. Importantly, once they fixated a target, participants with color vision deficiencies exhibited only few identification errors. Conclusions: Participants with color vision deficiencies are not able to enhance their search for red targets on a (green) lawn by an efficient guiding mechanism. The data indicate that the impaired guiding is the main influence on search performance, while foveal identification (verification) remains largely unaffected.

  20. Searching for biomarkers of CDKL5 disorder: early-onset visual impairment in CDKL5 mutant mice.

    Science.gov (United States)

    Mazziotti, Raffaele; Lupori, Leonardo; Sagona, Giulia; Gennaro, Mariangela; Della Sala, Grazia; Putignano, Elena; Pizzorusso, Tommaso

    2017-06-15

    CDKL5 disorder is a neurodevelopmental disorder still without a cure. Murine models of CDKL5 disorder have been recently generated raising the possibility of preclinical testing of treatments. However, unbiased, quantitative biomarkers of high translational value to monitor brain function are still missing. Moreover, the analysis of treatment is hindered by the challenge of repeatedly and non-invasively testing neuronal function. We analyzed the development of visual responses in a mouse model of CDKL5 disorder to introduce visually evoked responses as a quantitative method to assess cortical circuit function. Cortical visual responses were assessed in CDKL5 null male mice, heterozygous females, and their respective control wild-type littermates by repeated transcranial optical imaging from P27 until P32. No difference between wild-type and mutant mice was present at P25-P26 whereas defective responses appeared from P27-P28 both in heterozygous and homozygous CDKL5 mutant mice. These results were confirmed by visually evoked potentials (VEPs) recorded from the visual cortex of a different cohort. The previously imaged mice were also analyzed at P60-80 using VEPs, revealing a persistent reduction of response amplitude, reduced visual acuity and defective contrast function. The level of adult impairment was significantly correlated with the reduction in visual responses observed during development. Support vector machine showed that multi-dimensional visual assessment can be used to automatically classify mutant and wt mice with high reliability. Thus, monitoring visual responses represents a promising biomarker for preclinical and clinical studies on CDKL5 disorder. © The Author 2017. Published by Oxford University Press.
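    The record above mentions that a support vector machine could classify mutant and wild-type mice from multi-dimensional visual measures. The sketch below shows a generic cross-validated linear SVM on simulated features of that kind; the feature set, group sizes, and data are illustrative assumptions, not the study's pipeline.

```python
# Hedged sketch: cross-validated linear SVM separating two genotypes from
# simulated multi-dimensional visual response features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_per_group = 20
# Columns: e.g. response amplitude, visual acuity, contrast threshold (simulated).
wt  = rng.normal(loc=[1.0, 0.5, 0.05], scale=[0.2, 0.05, 0.01], size=(n_per_group, 3))
mut = rng.normal(loc=[0.6, 0.4, 0.08], scale=[0.2, 0.05, 0.01], size=(n_per_group, 3))

X = np.vstack([wt, mut])
y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = wild type, 1 = mutant

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```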

  1. On the interplay between working memory consolidation and attentional selection in controlling conscious access : Parallel processing at a cost-a comment on 'The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation'

    NARCIS (Netherlands)

    Wyble, Brad; Bowman, Howard; Nieuwenstein, Mark

    On the interplay between working memory consolidation and attentional selection in controlling conscious access: parallel processing at a cost-a comment on 'The interplay of attention and consciousness in visual search, attentional blink and working memory consolidation'

  2. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    Science.gov (United States)

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  3. Honeybees (Apis mellifera) exhibit flexible visual search strategies for vertical targets presented at various heights [v2; ref status: indexed, http://f1000r.es/51p]

    Directory of Open Access Journals (Sweden)

    Linde Morawetz

    2015-02-01

    Full Text Available When honeybees are presented with a colour discrimination task, they tend to choose swiftly and accurately when objects are presented in the ventral part of their frontal visual field. In contrast, poor performance is observed when objects appear in the dorsal part. Here we investigate if this asymmetry is caused by fixed search patterns or if bees can increase their detection ability of objects in search scenarios when targets appear frequently or exclusively in the dorsal area of the visual field. We trained individual honeybees to choose an orange rewarded target among blue distractors. Target and distractors were presented in the ventral visual field, the dorsal field or both. Bees presented with targets in the ventral visual field consistently had the highest search efficiency, with rapid decisions, high accuracy and direct flight paths. In contrast, search performance for dorsally located targets was inaccurate and slow at the beginning of the experimental phase, but bees increased their search performance significantly after a few foraging bouts: they found the target faster, made fewer errors and flew in a straight line towards the target. However, bees needed thrice as long to improve the search for a dorsally located target when the target’s position changed randomly between the ventral and the dorsal visual field. We propose that honeybees form expectations of the location of the target’s appearance and adapt their search strategy accordingly. A variety of possible mechanisms underlying this behavioural adaptation, for example spatial attention, are discussed.

  4. The effect of items in working memory on the deployment of attention and the eyes during visual search

    NARCIS (Netherlands)

    Houtkamp, R.; Roelfsema, P. R.

    2006-01-01

    Paying attention to an object facilitates its storage in working memory. The authors investigate whether the opposite is also true: whether items in working memory influence the deployment of attention. Participants performed a search for a prespecified target while they held another item in working memory.

  5. Visual search for tropical web spiders: the influence of plot length, sampling effort, and phase of the day on species richness.

    Science.gov (United States)

    Pinto-Leite, C M; Rocha, P L B

    2012-12-01

    Empirical studies using visual search methods to investigate spider communities were conducted with different sampling protocols, including a variety of plot sizes, sampling efforts, and diurnal periods for sampling. We sampled 11 plots ranging in size from 5 by 10 m to 5 by 60 m. In each plot, we computed the total number of species detected every 10 min during 1 hr during the daytime and during the nighttime (0630 hours to 1100 hours, both a.m. and p.m.). We measured the influence of time effort on the measurement of species richness by comparing the curves produced by sample-based rarefaction and species richness estimation (first-order jackknife). We used a general linear model with repeated measures to assess whether the phase of the day during which sampling occurred and the differences in the plot lengths influenced the number of species observed and the number of species estimated. To measure the differences in species composition between the phases of the day, we used a multiresponse permutation procedure and a graphical representation based on nonmetric multidimensional scaling. After 50 min of sampling, we noted a decreased rate of species accumulation and a tendency of the estimated richness curves to reach an asymptote. We did not detect an effect of plot size on the number of species sampled. However, differences in observed species richness and species composition were found between phases of the day. Based on these results, we propose guidelines for visual search for tropical web spiders.

  6. Psychophysics in a Web browser? Comparing response times collected with JavaScript and Psychophysics Toolbox in a visual search task.

    Science.gov (United States)

    de Leeuw, Joshua R; Motz, Benjamin A

    2016-03-01

    Behavioral researchers are increasingly using Web-based software such as JavaScript to conduct response time experiments. Although there has been some research on the accuracy and reliability of response time measurements collected using JavaScript, it remains unclear how well this method performs relative to standard laboratory software in psychologically relevant experimental manipulations. Here we present results from a visual search experiment in which we measured response time distributions with both Psychophysics Toolbox (PTB) and JavaScript. We developed a methodology that allowed us to simultaneously run the visual search experiment with both systems, interleaving trials between two independent computers, thus minimizing the effects of factors other than the experimental software. The response times measured by JavaScript were approximately 25 ms longer than those measured by PTB. However, we found no reliable difference in the variability of the distributions related to the software, and both software packages were equally sensitive to changes in the response times as a result of the experimental manipulations. We concluded that JavaScript is a suitable tool for measuring response times in behavioral research.
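
    The comparison described above reduces to testing for a constant offset between two response time distributions while checking that their spread is unchanged. A minimal sketch of such an analysis on hypothetical RT samples (variable names and data are illustrative, not the authors' code):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # hypothetical RT samples (ms) from the same task run under two systems
        rt_ptb = rng.normal(550, 80, size=500)      # Psychophysics Toolbox trials
        rt_js = rng.normal(575, 80, size=500)       # JavaScript trials (~25 ms slower)

        offset = rt_js.mean() - rt_ptb.mean()       # estimated constant lag
        w, p = stats.levene(rt_ptb, rt_js)          # test for equal variability
        print(f"mean offset: {offset:.1f} ms, Levene W = {w:.2f}, p = {p:.3f}")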

  7. Neural circuits of eye movements during performance of the visual exploration task, which is similar to the responsive search score task, in schizophrenia patients and normal subjects

    International Nuclear Information System (INIS)

    Nemoto, Yasundo; Matsuda, Tetsuya; Matsuura, Masato

    2004-01-01

    Abnormal exploratory eye movements have been studied as a biological marker for schizophrenia. Using functional MRI (fMRI), we investigated brain activations of 12 healthy and 8 schizophrenic subjects during performance of a visual exploration task that is similar to the responsive search score task, to clarify the neural basis of the abnormal exploratory eye movements. Performance data, such as the number of eye movements, the reaction time, and the percentage of correct answers, showed no significant differences between the two groups. Only the normal subjects showed activations at the bilateral thalamus and the left anterior medial frontal cortex during the visual exploration tasks. In contrast, only the schizophrenic subjects showed activations at the right anterior cingulate gyrus during the same tasks. The activation of different regions in the two groups, the left anterior medial frontal cortex in normal subjects and the right anterior cingulate gyrus in schizophrenic subjects, was explained by the features of the visual tasks. Hypoactivation at the bilateral thalamus supports a dysfunctional filtering theory of schizophrenia. (author)

  8. Bingo! Externally-Supported Performance Intervention for Deficient Visual Search in Normal Aging, Parkinson’s Disease and Alzheimer’s Disease

    Science.gov (United States)

    Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice

    2011-01-01

    External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of the task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally supported interventions of increased stimulus size and decreased complexity improved performance in all groups. The AD group also benefited from increased contrast, presumably because it compensated for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941

  9. Express attentional re-engagement but delayed entry into consciousness following invalid spatial cues in visual search.

    Directory of Open Access Journals (Sweden)

    Benoit Brisson

    Full Text Available BACKGROUND: In predictive spatial cueing studies, reaction times (RT) are shorter for targets appearing at cued locations (valid trials) than at other locations (invalid trials). An increase in the amplitude of early P1 and/or N1 event-related potential (ERP) components is also present for items appearing at cued locations, reflecting early attentional sensory gain control mechanisms. However, it is still unknown at which stage in the processing stream these early amplitude effects are translated into latency effects. METHODOLOGY/PRINCIPAL FINDINGS: Here, we measured the latency of two ERP components, the N2pc and the sustained posterior contralateral negativity (SPCN), to evaluate whether visual selection (as indexed by the N2pc) and visual short-term memory processes (as indexed by the SPCN) are delayed in invalid trials compared to valid trials. The P1 was larger contralateral to the cued side, indicating that attention was deployed to the cued location prior to target onset. Despite these early amplitude effects, the N2pc onset latency was unaffected by cue validity, indicating an express, quasi-instantaneous re-engagement of attention in invalid trials. In contrast, latency effects were observed for the SPCN, and these were correlated with the RT effect. CONCLUSIONS/SIGNIFICANCE: Results show that latency differences that could explain the RT cueing effects must occur after the visual selection processes giving rise to the N2pc, but at or before transfer into visual short-term memory, as reflected by the SPCN, at least in discrimination tasks in which the target is presented concurrently with at least one distractor. Given that the SPCN has previously been associated with conscious report, these results further show that entry into consciousness is delayed following invalid cues.

  10. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    Science.gov (United States)

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  11. The Control of Single-color and Multiple-color Visual Search by Attentional Templates in Working Memory and in Long-term Memory.

    Science.gov (United States)

    Grubert, Anna; Carlisle, Nancy B; Eimer, Martin

    2016-12-01

    The question whether target selection in visual search can be effectively controlled by simultaneous attentional templates for multiple features is still under dispute. We investigated whether multiple-color attentional guidance is possible when target colors remain constant and can thus be represented in long-term memory but not when they change frequently and have to be held in working memory. Participants searched for one, two, or three possible target colors that were specified by cue displays at the start of each trial. In constant-color blocks, the same colors remained task-relevant throughout. In variable-color blocks, target colors changed between trials. The contralateral delay activity (CDA) to cue displays increased in amplitude as a function of color memory load in variable-color blocks, which indicates that cued target colors were held in working memory. In constant-color blocks, the CDA was much smaller, suggesting that color representations were primarily stored in long-term memory. N2pc components to targets were measured as a marker of attentional target selection. Target N2pcs were attenuated and delayed during multiple-color search, demonstrating less efficient attentional deployment to color-defined target objects relative to single-color search. Importantly, these costs were the same in constant-color and variable-color blocks. These results demonstrate that attentional guidance by multiple-feature as compared with single-feature templates is less efficient both when target features remain constant and can be represented in long-term memory and when they change across trials and therefore have to be maintained in working memory.

  12. Visual Impairment

    Science.gov (United States)

  13. From Capture to Inhibition: How does Irrelevant Information Influence Visual Search? Evidence from a Spatial Cuing Paradigm.

    Science.gov (United States)

    Mertes, Christine; Wascher, Edmund; Schneider, Daniel

    2016-01-01

    Even though information is spatially and temporally irrelevant, it can influence the processing of subsequent information. The present study used a spatial cuing paradigm to investigate the origins of this persisting influence by means of event-related potentials (ERPs) of the EEG. An irrelevant color cue that was either contingent (color search) or non-contingent (shape search) on attentional sets was presented prior to a target array with different stimulus-onset asynchronies (SOA; 200, 400, 800 ms). Behavioral results indicated that color cues captured attention only when they shared target-defining properties. These same-location effects persisted over time but were pronounced when cue and target array were presented in close succession. N2 posterior contralateral (N2pc) showed that the color cue generally drew attention, but was strongest in the contingent condition. A subsequently emerging contralateral posterior positivity referred to the irrelevant cue (i.e., distractor positivity, Pd) was unaffected by the attentional set and therefore interpreted as an inhibitory process required to enable a re-direction of the attentional focus. Contralateral delay activity (CDA) was only observable in the contingent condition, indicating the transfer of spatial information into working memory and thus providing an explanation for the same-location effect for longer SOAs. Inhibition of this irrelevant information was reflected by a second contralateral positivity triggered through target presentation. The results suggest that distracting information is actively maintained when it resembles a sought-after object. However, two independent attentional processes are at work to compensate for attentional distraction: the timely inhibition of attentional capture and the active inhibition of mental representation of irrelevant information.

  14. From capture to inhibition: How does irrelevant information influence visual search? Evidence from a spatial cuing paradigm.

    Directory of Open Access Journals (Sweden)

    Christine eMertes

    2016-05-01

    Full Text Available Even though information is spatially and temporally irrelevant, it can influence the processing of subsequent information. The present study used a spatial cuing paradigm to investigate the origins of this persisting influence by means of event-related potentials (ERPs) of the EEG. An irrelevant color cue that was either contingent (color search) or non-contingent (shape search) on attentional sets was presented prior to a target array with different stimulus-onset asynchronies (SOA; 200, 400, 800 ms). Behavioral results indicated that color cues captured attention only when they shared target-defining properties. These same-location effects persisted over time but were pronounced when cue and target array were presented in close succession. N2pc showed that the color cue generally drew attention, but was strongest in the contingent condition. A subsequently emerging contralateral posterior positivity referred to the irrelevant cue (i.e., distractor positivity; Pd) was unaffected by the attentional set and therefore interpreted as an inhibitory process required to enable a re-direction of the attentional focus. CDA was only observable in the contingent condition, indicating the transfer of spatial information into working memory and thus providing an explanation for the same-location effect for longer SOAs. Inhibition of this irrelevant information was reflected by a second contralateral positivity triggered through target presentation. The results suggest that distracting information is actively maintained when it resembles a sought-after object. However, two independent attentional processes are at work to compensate for attentional distraction: the timely inhibition of attentional capture and the active inhibition of mental representation of irrelevant information.

  15. The Behavioral Effects of tDCS on Visual Search Performance Are Not Influenced by the Location of the Reference Electrode

    Directory of Open Access Journals (Sweden)

    Amanda Ellison

    2017-09-01

    Full Text Available We investigated the role of reference electrode placement (ipsilateral vs. contralateral frontal pole) on conjunction visual search task performance when the transcranial direct current stimulation (tDCS) cathode is placed over right posterior parietal cortex (rPPC) and over right frontal eye fields (rFEF), both of which have been shown to be causally involved in the processing of this task using TMS. This resulted in four experimental manipulations in which sham tDCS was applied in week one followed by active tDCS the following week. Another group received sham stimulation in both sessions to investigate practice effects over 1 week in this task. Results show that there is no difference between effects seen when the anode is placed ipsi- or contralaterally. Cathodal stimulation of rPPC increased search times straight after stimulation similarly for ipsi- and contralateral references. This finding does not extend to rFEF stimulation. However, for both sites and both montages, practice effects as seen in the sham/sham condition were negated. This can be taken as evidence that for this task, reference placement on either frontal pole is not important, but also that care needs to be taken when contextualizing tDCS “effects” that may not be immediately apparent, particularly in between-participant designs.

  16. The effects of visual discriminability and rotation angle on 30-month-olds’ search performance in spatial rotation tasks

    Directory of Open Access Journals (Sweden)

    Mirjam Ebersbach

    2016-10-01

    Full Text Available Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  17. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds' Search Performance in Spatial Rotation Tasks.

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds' success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children ( N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  18. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds’ Search Performance in Spatial Rotation Tasks

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346

  19. A mathematical model of single target site location by Brownian movement in subcellular compartments.

    Science.gov (United States)

    Kuthan, Hartmut

    2003-03-07

    The location of distinct sites is mandatory for many cellular processes. In the subcompartments of the cell nucleus, only very small numbers of diffusing macromolecules and specific target sites of some types may be present. In this case, we are faced with the Brownian movement of individual macromolecules and their "random search" for single/few specific target sites, rather than bulk-averaged diffusion and multiple sites. In this article, I consider the location of a distant central target site, e.g. a globular protein, by individual macromolecules executing unbiased (i.e. drift-free) random walks in a spherical compartment. For this walk-and-capture model, the closed-form analytic solution of the first passage time probability density function (p.d.f.) has been obtained as well as the first and second moment. In the limit of a large ratio of the radii of the spherical diffusion space and central target, well-known relations for the variance and the first two moments for the exponential p.d.f. were found to hold with high accuracy. These calculations reinforce earlier numerical results and Monte Carlo simulations. A major implication derivable from the model is that non-directed random movement is an effective means for locating single sites in submicron-sized compartments, even when the diffusion coefficients are comparatively small and the diffusing species are present in one copy only. These theoretical conclusions are underscored numerically for effective diffusion constants ranging from 0.5 to 10.0 μm² s⁻¹, which have been reported for a couple of nuclear proteins in their physiological environment. Spherical compartments of submicron size are, for example, the Cajal bodies (size: 0.1-1.0 μm), which are present in 1-5 copies in the cell nucleus. Within a small Cajal body of radius 0.1 μm a single diffusing protein molecule (with D = 0.5 μm² s⁻¹) would encounter a medium-sized protein of radius 2.5 nm within 1 s with a probability near
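
    For orientation, the scale of the numbers quoted above can be checked against the textbook walk-and-capture estimate for a diffuser in a reflecting sphere of radius R with a small perfectly absorbing target of radius a at its centre (a standard approximation, not necessarily the exact closed form derived in the article):

        \[
        \langle T \rangle \approx \frac{R^{3}}{3\,D\,a} \qquad (R \gg a).
        \]

    With R = 0.1 μm, a = 2.5 nm and D = 0.5 μm² s⁻¹ this gives ⟨T⟩ ≈ (0.1)³ / (3 × 0.5 × 0.0025) s ≈ 0.27 s, and for an approximately exponential first-passage density the capture probability within 1 s is about 1 − e^(−1/0.27) ≈ 0.97, in line with the article's conclusion that undirected diffusion suffices in submicron compartments.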

  20. Mapping online consumer search

    NARCIS (Netherlands)

    Bronnenberg, B.J.; Kim, J.; Albuquerque, P.

    2011-01-01

    The authors propose a new method to visualize browsing behavior in so-called product search maps. Manufacturers can use these maps to understand how consumers search for competing products before choice, including how information acquisition and product search are organized along brands, product

  1. Evaluating color deficiency simulation and daltonization methods through visual search and sample-to-match: SaMSEM and ViSDEM

    Science.gov (United States)

    Simon-Liedtke, Joschua T.; Farup, Ivar; Laeng, Bruno

    2015-01-01

    Color deficient people might be confronted with minor difficulties when navigating through daily life, for example when reading websites or media, navigating with maps, retrieving information from public transport schedules and others. Color deficiency simulation and daltonization methods have been proposed to better understand problems of color deficient individuals and to improve color displays for their use. However, it remains unclear whether these "color prosthetic" methods really work and how well they improve the performance of color deficient individuals. We introduce here two methods to evaluate color deficiency simulation and daltonization methods based on behavioral experiments that are widely used in the field of psychology. Firstly, we propose a Sample-to-Match Simulation Evaluation Method (SaMSEM); secondly, we propose a Visual Search Daltonization Evaluation Method (ViSDEM). Both methods can be used to validate and allow the generalization of the simulation and daltonization methods related to color deficiency. We showed that both the response times (RT) and the accuracy of SaMSEM can be used as an indicator of the success of color deficiency simulation methods and that performance in the ViSDEM can be used as an indicator for the efficacy of color deficiency daltonization methods. In future work, we will include comparison and analysis of different color deficiency simulation and daltonization methods with the help of SaMSEM and ViSDEM.

  2. But what about the Empress of Racnoss? The allocation of attention to spiders and Doctor Who in a visual search task is predicted by fear and expertise.

    Science.gov (United States)

    Purkis, Helena M; Lester, Kathryn J; Field, Andy P

    2011-12-01

    If there is a spider in the room, then the spider phobic in your group is most likely to point it out to you. This phenomenon is believed to arise because our attentional systems are hardwired to attend to threat in our environment, and, to a spider phobic, spiders are threatening. However, an alternative explanation is simply that attention is quickly drawn to the stimulus of most personal relevance in the environment. Our research examined whether positive stimuli with no biological or evolutionary relevance could be allocated preferential attention. We compared attention to pictures of spiders with pictures from the TV program Doctor Who, for people who varied in both their love of Doctor Who and their fear of spiders. We found a double dissociation: interference from spider and Doctor-Who-related images in a visual search task was predicted by spider fear and Doctor Who expertise, respectively. As such, allocation of attention reflected the personal relevance of the images rather than their threat content. The attentional system believed to have a causal role in anxiety disorders is therefore likely to be a general system that responds not to threat but to stimulus relevance; hence, nonevolutionary images, such as those from Doctor Who, captured attention as quickly as fear-relevant spider images. Where this leaves the Empress of Racnoss, we are unsure. (c) 2011 APA, all rights reserved.

  3. Cover times of random searches

    Science.gov (United States)

    Chupeau, Marie; Bénichou, Olivier; Voituriez, Raphaël

    2015-10-01

    How long must one undertake a random search to visit all sites of a given domain? This time, known as the cover time, is a key observable to quantify the efficiency of exhaustive searches, which require a complete exploration of an area and not only the discovery of a single target. Examples range from immune-system cells chasing pathogens to animals harvesting resources, from robotic exploration for cleaning or demining to the task of improving search algorithms. Despite its broad relevance, the cover time has remained elusive and so far explicit results have been scarce and mostly limited to regular random walks. Here we determine the full distribution of the cover time for a broad range of random search processes, including Lévy strategies, intermittent strategies, persistent random walks and random walks on complex networks, and reveal its universal features. We show that for all these examples the mean cover time can be minimized, and that the corresponding optimal strategies also minimize the mean search time for a single target, unambiguously pointing towards their robustness.
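
    As a concrete illustration of the cover-time observable (a toy simulation, unrelated to the authors' analytical results), the sketch below estimates the mean cover time of a simple symmetric random walk on a cycle of N sites, for which the exact mean is N(N-1)/2:

        import random

        def cover_time_on_cycle(n_sites, rng):
            """Steps needed for a symmetric random walk on a cycle to visit every site."""
            pos, visited, steps = 0, {0}, 0
            while len(visited) < n_sites:
                pos = (pos + rng.choice((-1, 1))) % n_sites
                visited.add(pos)
                steps += 1
            return steps

        rng = random.Random(1)
        samples = [cover_time_on_cycle(50, rng) for _ in range(2000)]
        print("mean cover time:", sum(samples) / len(samples))  # exact mean: 50*49/2 = 1225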

  4. Cube search, revisited

    Science.gov (United States)

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-01-01

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

  5. Cortical visual impairment

    OpenAIRE

    Koželj, Urša

    2013-01-01

    In this thesis we discuss cortical visual impairment, the leading diagnosis of childhood visual impairment in the developed world: about 20 percent of children with blindness or low vision are diagnosed with it. The objectives of the thesis are to define cortical visual impairment, to describe the signs suggestive of the condition, and to search for the causes behind the growing number of diagnoses. There are many signs of cortical visual impairment. ...

  6. Parieto-occipital areas involved in efficient filtering in search: a time course analysis of visual marking using behavioural and functional imaging procedures

    DEFF Research Database (Denmark)

    Humphreys, Glyn W; Kyllingsbæk, Søren; Watson, Derrick G.

    2004-01-01

    (PET). We show that regions of parieto-occipital cortex are selectively activated in a preview search condition relative to a detection baseline. These regions also increase in activation as the preview interval increases (and search then becomes easier), consistent with them modulating the parallel...

  7. Reading wiring diagrams made easier for maintenance operators: contribution from research in visual attention and visual search; Aide à la lecture des schémas électriques pour le dépannage: apport de la recherche sur l'attention visuelle

    Energy Technology Data Exchange (ETDEWEB)

    Ponthieu, L; Wolfe, J M

    1994-07-01

    This work was carried out while the author was visiting the Visual Psychophysics laboratory at the Center for Ophthalmic Research, Harvard Medical School. The general framework is the design of a wiring-diagram visualization system for maintenance operators in electric plants. This study concentrates on how knowledge and experimental techniques from research on visual attention can serve this goal. From this standpoint, the visualization system must make the best use of the abilities of the human visual system. As electronic databases containing all the diagrams will soon be available, it is important to think ahead about display techniques. Presently, maintenance operators favor working with paper printouts even where such databases are already available. The study shows why such an approach is valuable for the design of a display that fits the operator's tasks. Beyond that, this work has been a means of learning the experimental techniques of the cognitive sciences in an applied setting. (authors). 9 figs., 5 annexes.

  8. Deficits in visual search for conjunctions of motion and form after parietal damage but with spared hMT+/V5.

    Science.gov (United States)

    Dent, Kevin; Lestou, Vaia; Humphreys, Glyn W

    2010-02-01

    It has been argued that area hMT+/V5 in humans acts as a motion filter, enabling targets defined by a conjunction of motion and form to be efficiently selected. We present data indicating that (a) damage to parietal cortex leads to a selective problem in processing motion-form conjunctions, and (b) the presence of a structurally and functionally intact hMT+/V5 is not sufficient for efficient search for motion-form conjunctions. We suggest that, in addition to motion-processing areas (e.g., hMT+/V5), the posterior parietal cortex is necessary for efficient search with motion-form conjunctions, so that damage to either brain region may bring about deficits in search. We discuss the results in terms of the involvement of the posterior parietal cortex in the top-down guidance of search or in the binding of motion and form information.

  9. Exploring Visual Bookmarks and Layered Visualizations

    NARCIS (Netherlands)

    J.T. Teuben (Jan)

    2010-01-01

    textabstractCultural heritage experts are confronted with a difficult information gathering task while conducting comparison searches. Saving searches and re-examining previous work could help them to do their work. In this paper we propose a solution in which we combine visual bookmarks for saving

  10. Search for Two Categories of Target Produces Fewer Fixations to Target-Color Items

    Science.gov (United States)

    Menneer, Tamaryn; Stroud, Michael J.; Cave, Kyle R.; Li, Xingshan; Godwin, Hayward J.; Liversedge, Simon P.; Donnelly, Nick

    2012-01-01

    Searching simultaneously for metal threats (guns and knives) and improvised explosive devices (IEDs) in X-ray images is less effective than 2 independent single-target searches, 1 for metal threats and 1 for IEDs. The goals of this study were to (a) replicate this dual-target cost for categorical targets and to determine whether the cost remains…

  11. Search Help

    Science.gov (United States)

    A guidance and search-help resource listing examples of common queries that can be used in a Google Search Appliance search request, including examples of special characters and query-term separators that the Google Search Appliance recognizes.

  12. Parieto-occipital areas involved in efficient filtering in search: a time course analysis of visual marking using behavioural and functional imaging procedures

    DEFF Research Database (Denmark)

    Humphreys, Glyn W; Kyllingsbæk, Søren; Watson, Derrick G.

    2004-01-01

    Search for a colour-form conjunction target can be facilitated by presenting one set of distractors prior to the second set of distractors and the target: the preview benefit (Watson & Humphreys, 1997). The early presentation of one set of distractors enables them to be efficiently filtered from...

  13. The Visual System

    Medline Plus

  14. Covert spatial attention in search for the location of a color-afterimage patch speeds up its decay from awareness: introducing a method useful for the study of neural correlates of visual awareness.

    Science.gov (United States)

    Bachmann, Talis; Murd, Carolina

    2010-06-01

    Previous research has reported that attention to color afterimages speeds up their decay. However, the inducing stimuli in these studies have been overlapping, thereby implying that they involved overlapping receptive fields of the responsible neurons. As a result it is difficult to interpret the effect of focusing attention on a phenomenally projected target-afterimage. Here, we present a method free from these shortcomings. In searching for a target-afterimage patch among spatially separate alternatives the target fades from awareness before its competitors. This offers a good means to study neural correlates of visual awareness unconfounded with attention and enabling a temporally extended pure phenomenal experience free from simultaneous inflow of sensory transients. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  15. Visual performance on detection tasks with double-targets of the same and different difficulty.

    Science.gov (United States)

    Chan, Alan H S; Courtney, Alan J; Ma, C W

    2002-10-20

    This paper reports measurements of horizontal visual sensitivity limits for 16 subjects in single-target and double-target detection tasks. Two phases of tests were conducted in the double-target task: targets of the same difficulty were tested in phase one, and targets of different difficulty in phase two. The range of sensitivity for the double-target test was smaller than that for the single-target test in both the same- and different-difficulty cases. The presence of another target markedly affected performance, and the interference of the difficult target with detection of the easy one was greater than that of the easy target with detection of the difficult one. The percentage of correct detections declined with target eccentricity in both the single-target and double-target tests. Nevertheless, the non-significant correlation between performance on the two tasks showed that the ability to detect double targets cannot be predicted quantitatively from single-target data. This indicates probable problems in generalizing single-target visual lobe data to multiple targets; in particular, lobe area values obtained from measurements using a single-target task cannot be applied in a mathematical model for situations with multiple occurrences of targets.

  16. Random searching

    International Nuclear Information System (INIS)

    Shlesinger, Michael F

    2009-01-01

    There is a wide variety of searching problems, from molecules seeking receptor sites to predators seeking prey. The optimal search strategy can depend on constraints on time, energy, supplies or other variables. We discuss a number of cases and especially remark on the usefulness of Lévy walk search patterns when the targets of the search are scarce.
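
    A Lévy walk of the kind referred to above is typically modelled by drawing step lengths from a heavy-tailed power-law distribution. The snippet below (illustrative only) generates such steps by inverse-transform sampling and traces a two-dimensional trajectory:

        import math
        import random

        def levy_step(l_min, mu, rng):
            """Draw a step length from p(l) ~ l**(-mu) for l >= l_min, with 1 < mu <= 3."""
            u = rng.random()
            return l_min * u ** (-1.0 / (mu - 1.0))

        rng = random.Random(42)
        x = y = 0.0
        for _ in range(1000):                      # 2-D Lévy walk trajectory
            step = levy_step(1.0, 2.0, rng)        # mu = 2: occasional very long relocations
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        print(f"end point after 1000 steps: ({x:.1f}, {y:.1f})")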

  17. Search Patterns

    CERN Document Server

    Morville, Peter

    2010-01-01

    What people are saying about Search Patterns "Search Patterns is a delight to read -- very thoughtful and thought provoking. It's the most comprehensive survey of designing effective search experiences I've seen." --Irene Au, Director of User Experience, Google "I love this book! Thanks to Peter and Jeffery, I now know that search (yes, boring old yucky who cares search) is one of the coolest ways around of looking at the world." --Dan Roam, author, The Back of the Napkin (Portfolio Hardcover) "Search Patterns is a playful guide to the practical concerns of search interface design. It cont

  18. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    Science.gov (United States)

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  19. Automated search for supernovae

    International Nuclear Information System (INIS)

    Kare, J.T.

    1984-01-01

    This thesis describes the design, development, and testing of a search system for supernovae, based on the use of current computer and detector technology. This search uses a computer-controlled telescope and charge coupled device (CCD) detector to collect images of hundreds of galaxies per night of observation, and a dedicated minicomputer to process these images in real time. The system is now collecting test images of up to several hundred fields per night, with a sensitivity corresponding to a limiting magnitude (visual) of 17. At full speed and sensitivity, the search will examine some 6000 galaxies every three nights, with a limiting magnitude of 18 or fainter, yielding roughly two supernovae per week (assuming one supernova per galaxy per 50 years) at 5 to 50 percent of maximum light. An additional 500 nearby galaxies will be searched every night, to locate about 10 supernovae per year at one or two percent of maximum light, within hours of the initial explosion
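
    The quoted discovery rates follow directly from the assumed supernova frequency; as a back-of-the-envelope check:

        \[
        \frac{6000\ \text{galaxies}}{50\ \text{yr per SN per galaxy}} \approx 120\ \text{SNe yr}^{-1} \approx 2.3\ \text{SNe week}^{-1},
        \qquad
        \frac{500\ \text{galaxies}}{50\ \text{yr per SN per galaxy}} = 10\ \text{SNe yr}^{-1}.
        \]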

  20. Automated search for supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Kare, J.T.

    1984-11-15

    This thesis describes the design, development, and testing of a search system for supernovae, based on the use of current computer and detector technology. This search uses a computer-controlled telescope and charge coupled device (CCD) detector to collect images of hundreds of galaxies per night of observation, and a dedicated minicomputer to process these images in real time. The system is now collecting test images of up to several hundred fields per night, with a sensitivity corresponding to a limiting magnitude (visual) of 17. At full speed and sensitivity, the search will examine some 6000 galaxies every three nights, with a limiting magnitude of 18 or fainter, yielding roughly two supernovae per week (assuming one supernova per galaxy per 50 years) at 5 to 50 percent of maximum light. An additional 500 nearby galaxies will be searched every night, to locate about 10 supernovae per year at one or two percent of maximum light, within hours of the initial explosion.

  1. Personalized Search

    CERN Document Server

    AUTHOR|(SzGeCERN)749939

    2015-01-01

    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases, focusing on re-ranking results from existing search engines such as Solr or ElasticSearch. It also includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and performed at CERN, using the scientific publications available on the CERN Document Server (CDS). Re-ranking was evaluated with offline and online experiments on users and documents in CDS. The experiments conclude that the personalized search results outperform both latest-first and word-similarity ranking in terms of click position in the search results for global search in CDS.

  2. Searching CLEF-IP by Strategy

    NARCIS (Netherlands)

    W. Alink (Wouter); R. Cornacchia (Roberto); A.P. de Vries (Arjen)

    2010-01-01

    Tasks performed by intellectual property specialists are often ad hoc, and continuously require new approaches to search a collection of documents. We therefore investigate the benefits of a visual 'search strategy builder' to allow IP search experts to express their approach to

  3. Search Advertising

    OpenAIRE

    Cornière (de), Alexandre

    2016-01-01

    Search engines enable advertisers to target consumers based on the query they have entered. In a framework with horizontal product differentiation, imperfect product information and in which consumers incur search costs, I study a game in which advertisers have to choose a price and a set of relevant keywords. The targeting mechanism brings about three kinds of efficiency gains, namely lower search costs, better matching, and more intense product market price-competition. A monopolistic searc...

  4. Protein search for multiple targets on DNA

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Martin [Johannes Gutenberg University, Mainz 55122 (Germany); Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Kochugaeva, Maria [Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Kolomeisky, Anatoly B., E-mail: tolya@rice.edu [Department of Chemistry, Rice University, Houston, Texas 77005 (United States); Center for Theoretical Biological Physics, Rice University, Houston, Texas 77005 (United States)

    2015-09-14

    Protein-DNA interactions are crucial for all biological processes. One of the most important fundamental aspects of these interactions is the process of protein searching and recognizing specific binding sites on DNA. A large number of experimental and theoretical investigations have been devoted to uncovering the molecular description of these phenomena, but many aspects of the mechanisms of protein search for the targets on DNA remain not well understood. One of the most intriguing problems is the role of multiple targets in protein search dynamics. Using a recently developed theoretical framework we analyze this question in detail. Our method is based on a discrete-state stochastic approach that takes into account most relevant physical-chemical processes and leads to fully analytical description of all dynamic properties. Specifically, systems with two and three targets have been explicitly investigated. It is found that multiple targets in most cases accelerate the search in comparison with a single target situation. However, the acceleration is not always proportional to the number of targets. Surprisingly, there are even situations when it takes longer to find one of the multiple targets in comparison with the single target. It depends on the spatial position of the targets, distances between them, average scanning lengths of protein molecules on DNA, and the total DNA lengths. Physical-chemical explanations of observed results are presented. Our predictions are compared with experimental observations as well as with results from a continuum theory for the protein search. Extensive Monte Carlo computer simulations fully support our theoretical calculations.
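
    A much-simplified illustration of the effect described above (a toy sliding-only lattice model, not the authors' full discrete-state framework) compares mean search times for one versus two targets and shows that the acceleration depends on where the targets sit, not just on how many there are:

        import random

        def mean_search_time(length, targets, trials, rng):
            """Mean steps for a 1-D random walk (uniform random start, reflecting ends)
            to reach any of the target sites."""
            targets = set(targets)
            total = 0
            for _ in range(trials):
                pos, steps = rng.randrange(length), 0
                while pos not in targets:
                    pos = min(length - 1, max(0, pos + rng.choice((-1, 1))))
                    steps += 1
                total += steps
            return total / trials

        rng = random.Random(7)
        L = 100
        print("one target :", mean_search_time(L, [50], 500, rng))
        print("two targets:", mean_search_time(L, [25, 75], 500, rng))  # faster, but not simply 2x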

  5. Faceted Search

    CERN Document Server

    Tunkelang, Daniel

    2009-01-01

    We live in an information age that requires us, more than ever, to represent, access, and use information. Over the last several decades, we have developed a modern science and technology for information retrieval, relentlessly pursuing the vision of a "memex" that Vannevar Bush proposed in his seminal article, "As We May Think." Faceted search plays a key role in this program. Faceted search addresses weaknesses of conventional search approaches and has emerged as a foundation for interactive information retrieval. User studies demonstrate that faceted search provides more

  6. Object attributes combine additively in visual search

    OpenAIRE

    Pramod, R. T.; Arun, S. P.

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in in...

  7. Object attributes combine additively in visual search.

    Science.gov (United States)

    Pramod, R T; Arun, S P

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
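
    The additive rule described above can be written schematically (the symbols here are illustrative, not the authors' notation) as a weighted sum of attribute-wise dissimilarities:

        \[
        d(A,B) = \sum_{i \in \text{local parts}} w_i\,\Delta_i^{\text{contour}}(A,B)
               + w_t\,\Delta^{\text{texture}}(A,B)
               + w_s\,\Delta^{\text{symmetry}}(A,B)
               + w_g\,\Delta^{\text{global}}(A,B),
        \]

    where each Δ term is a dissimilarity along a single attribute and the weights are estimated from the observed dissimilarities.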

  8. The Visual System

    Medline Plus

  9. Advanced Search

    African Journals Online (AJOL)

    Search tips: Search terms are case-insensitive; Common words are ignored; By default only articles containing all terms in the query are returned (i.e., AND is implied); Combine multiple words with OR to find articles containing either term; e.g., education OR research; Use parentheses to create more complex queries; e.g., ...

  10. A Unique Role of Endogenous Visual-Spatial Attention in Rapid Processing of Multiple Targets

    Science.gov (United States)

    Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Palafox, German; Suzuki, Satoru

    2011-01-01

    Visual spatial attention can be exogenously captured by a salient stimulus or can be endogenously allocated by voluntary effort. Whether these two attention modes serve distinctive functions is debated, but for processing of single targets the literature suggests superiority of exogenous attention (it is faster acting and serves more functions).…

  11. An Empirical Study on Using Visual Embellishments in Visualization.

    Science.gov (United States)

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  12. Longterm visual associations affect attentional guidance

    NARCIS (Netherlands)

    Olivers, C.N.L.

    2011-01-01

    When observers perform a visual search task, they are assumed to adopt an attentional set for what they are looking for. The present experiment investigates the influence of long-term visual memory associations on this attentional set. On each trial, observers were asked to search a display for a

  13. Autonomous search

    CERN Document Server

    Hamadi, Youssef; Saubion, Frédéric

    2012-01-01

    Autonomous combinatorial search (AS) represents a new field in combinatorial problem solving. Its major standpoint and originality is that it considers that problem solvers must be capable of self-improvement operations. This is the first book dedicated to AS.

  14. Effects of Sb-doping on the grain growth of Cu(In, Ga)Se2 thin films fabricated by means of single-target sputtering

    International Nuclear Information System (INIS)

    Zhang, Shu; Wu, Lu; Yue, Ruoyu; Yan, Zongkai; Zhan, Haoran; Xiang, Yong

    2013-01-01

    To investigate the effects of Sb doping on the kinetics of grain growth in Cu(In,Ga)Se2 (CIGS) thin films during annealing, CIGS thin films were sputtered onto Mo-coated substrates from a single CIGS alloy target, followed by chemical bath deposition of Sb2S3 thin layers on top of the CIGS layers and subsequent annealing at different temperatures for 30 min in Se vapors. X-ray diffraction results showed that CIGS thin films were obtained directly using the single-target sputtering method. After annealing, the In/Ga ratio in Sb-doped CIGS thin films remained stable compared to the undoped film, possibly because Sb can promote the incorporation of Ga into CIGS. Grain growth in CIGS thin films was enhanced after Sb doping, exhibiting significantly larger grains after annealing at 400 °C or 450 °C compared to films without Sb. In particular, the effect was strikingly significant for grain growth across the film thickness, resulting in a columnar grain structure in Sb-doped films. This grain-growth improvement may be driven by the diffusion of Sb from the front surface to the CIGS-Mo back interface, which promoted the mass transport process in CIGS thin films. - Highlights: ► Cu(In,Ga)Se2 (CIGS) thin films made by sputtering from a single CIGS target. ► Chemical bath deposition used to introduce antimony into CIGS absorber layers. ► In/Ga ratio decreases in Sb-doped annealed films compared to undoped films. ► Sb-doped CIGS films are superior to undoped films in terms of grain-growth kinetics

  15. Math for visualization, visualizing math

    NARCIS (Netherlands)

    Wijk, van J.J.; Hart, G.; Sarhangi, R.

    2013-01-01

    I present an overview of our work in visualization, and reflect on the role of mathematics therein. First, mathematics can be used as a tool to produce visualizations, which is illustrated with examples from information visualization, flow visualization, and cartography. Second, mathematics itself

  16. Visual art and visual perception

    NARCIS (Netherlands)

    Koenderink, Jan J.

    2015-01-01

    Visual art and visual perception ‘Visual art’ has become a minor cul-de-sac orthogonal to THE ART of the museum directors and billionaire collectors. THE ART is conceptual, instead of visual. Among its cherished items are the tins of artist’s shit (Piero Manzoni, 1961, Merda d’Artista) “worth their

  17. Search strategies

    Science.gov (United States)

    Oliver, B. M.

    Attention is given to the approaches that would provide the greatest chance of success in attempts to discover advanced extraterrestrial cultures in the Galaxy, taking into account the principle of least energy expenditure. The energetics of interstellar contact are explored, with attention to the use of manned spacecraft, automatic probes, and beacons. The least expensive approach to a search for other civilizations involves a listening program that attempts to detect signals emitted by such civilizations. The optimum part of the spectrum for such a search is found to be in the range from 1 to 2 GHz. Antenna and transmission formulas are discussed, along with the employment of matched gates and filters, the probable characteristics of the signals to be detected, the filter-signal mismatch loss, surveys of the radio sky, and the conduct of targeted searches.

  18. Análise dos padrões dos movimentos oculares em tarefas de busca visual: efeito da familiaridade e das características físicas do estímulo Analysis of the eye movement patterns in visual search tasks: effect of familiarity and stimulus features

    Directory of Open Access Journals (Sweden)

    Elizeu Coutinho de Macedo

    2007-02-01

    analyze eye movements in asymmetric visual search using the task of normal and mirrored position letters. To evaluate the effect of familiarity and stimulus features. METHODS: Eighty-three university students with normal or corrected-to-normal vision were asked to search for a letter in inverted position to the letters in a group of either normal or mirrored letters. Four types of letters were used (Z, N, E and G and the eye movements were tracked by a specialized computer-based system (eyetracking. The analyzed measurements were: reaction time, fixation number and duration, saccade distance and duration. RESULTS: All measures varied with the type of letter. Reaction time, fixation number, and saccade distance were higher when the task was to find the normal letter in a group of mirrored letters. In this condition, fixation duration was smaller. Interaction was found between familiarity and the type of letter for the reaction time, fixation number and duration. The reaction time and fixation number increased together with the stimulus complexity, with a greater increase for the normal letter target. Fixation duration, however, decreased with the complexity of the stimuli and the search condition. CONCLUSIONS: Finding a mirrored letter among normal letters proved to be easier than the contrary. The letter type also affected the performance. When the context is formed of unfamiliar complex stimuli, the fixation duration is shorter, indicating a narrower span for visual processing. Therefore, a greater number of fixations with shorter duration are needed for the unfamiliar context while less fixations with greater duration are needed for the familiar context.

  19. Orienting attention to objects in visual short-term memory

    NARCIS (Netherlands)

    Dell'Acqua, Roberto; Sessa, Paola; Toffanin, Paolo; Luria, Roy; Joliccoeur, Pierre

    We measured electroencephalographic activity during visual search of a target object among objects available to perception or among objects held in visual short-term memory (VSTM). For perceptual search, a single shape was shown first (pre-cue) followed by a search-array and the task was to decide

  20. Early vision and visual attention

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije P.

    2003-01-01

    The question of whether visual perception is spontaneous and immediate, or unfolds through several phases mediated by higher cognitive processes, has been raised ever since the early work of the Gestalt psychologists. In the early 1980s, Treisman proposed the feature integration theory of attention (FIT), based on findings from neuroscience. Soon after the theory was published, a new line of research emerged investigating several visual perception phenomena. The most widely researched were the key constructs of FIT, such as the types of visual search and the role of attention. The following review describes the main studies of early vision and visual attention.

  1. Flow visualization

    CERN Document Server

    Merzkirch, Wolfgang

    1974-01-01

    Flow Visualization describes the most widely used methods for visualizing flows. Flow visualization evaluates certain properties of a flow field directly accessible to visual perception. Organized into five chapters, this book first presents the methods that create a visible flow pattern that could be investigated by visual inspection, such as simple dye and density-sensitive visualization methods. It then deals with the application of electron beams and streaming birefringence. Optical methods for compressible flows, hydraulic analogy, and high-speed photography are discussed in other cha

  2. Cortical evidence for negative search templates

    NARCIS (Netherlands)

    Reeder, Reshanne R.; Olivers, Christian N.L.; Pollmann, Stefan

    2017-01-01

    A “target template”, specifying target features, is thought to benefit visual search performance. Setting up a “negative template”, specifying distractor features, should improve distractor inhibition and also benefit target detection. In the current fMRI study, subjects were required to search for

  3. Intrinsic position uncertainty impairs overt search performance.

    Science.gov (United States)

    Semizer, Yelda; Michel, Melchi M

    2017-08-01

    Uncertainty regarding the position of the search target is a fundamental component of visual search. However, due to perceptual limitations of the human visual system, this uncertainty can arise from intrinsic, as well as extrinsic, sources. The current study sought to characterize the role of intrinsic position uncertainty (IPU) in overt visual search and to determine whether it significantly limits human search performance. After completing a preliminary detection experiment to characterize sensitivity as a function of visual field position, observers completed a search task that required localizing a Gabor target within a field of synthetic luminance noise. The search experiment included two clutter conditions designed to modulate the effect of IPU across search displays of varying set size. In the Cluttered condition, the display was tiled uniformly with feature clutter to maximize the effects of IPU. In the Uncluttered condition, the clutter at irrelevant locations was removed to attenuate the effects of IPU. Finally, we derived an IPU-constrained ideal searcher model, limited by the IPU measured in human observers. Ideal searchers were simulated based on the detection sensitivity and fixation sequences measured for individual human observers. The IPU-constrained ideal searcher predicted performance trends similar to those exhibited by the human observers. In the Uncluttered condition, performance decreased steeply as a function of increasing set size. However, in the Cluttered condition, the effect of IPU dominated and performance was approximately constant as a function of set size. Our findings suggest that IPU substantially limits overt search performance, especially in crowded displays.
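    To make the ideal-searcher idea concrete, the sketch below shows a generic, minimal ideal-observer computation of the kind such models involve: noisy evidence at each display location is weighted by an eccentricity-dependent sensitivity around the current fixation and accumulated into a posterior over target location. All names and the sensitivity falloff are hypothetical illustrations; this is not the authors' implementation, which was additionally constrained by the intrinsic position uncertainty measured in their observers.

      import numpy as np

      def dprime_map(locations, fixation, d0=3.0, halfwidth=5.0):
          # Hypothetical sensitivity falloff: detectability d' decays with eccentricity from fixation.
          ecc = np.linalg.norm(locations - fixation, axis=1)
          return d0 * np.exp(-ecc / halfwidth)

      def posterior_after_fixations(locations, target_idx, fixations, rng):
          # Accumulate log-likelihood evidence for "the target is at location i" across fixations.
          n = len(locations)
          log_post = np.full(n, -np.log(n))                # uniform prior over locations
          for fix in fixations:
              d = dprime_map(locations, fix)               # per-location sensitivity on this fixation
              obs = rng.normal(0.0, 1.0, n)                # unit-variance noise at every location
              obs[target_idx] += d[target_idx]             # signal added where the target really is
              log_post += d * obs - 0.5 * d**2             # Gaussian log-likelihood ratio per location
              log_post -= np.logaddexp.reduce(log_post)    # renormalize to a proper posterior
          return np.exp(log_post)

      rng = np.random.default_rng(0)
      locs = rng.uniform(-10, 10, size=(20, 2))            # 20 candidate display locations (deg)
      post = posterior_after_fixations(locs, target_idx=7,
                                       fixations=[np.zeros(2), locs[3]], rng=rng)
      print("Most probable target location:", int(np.argmax(post)))

    The localization response is simply the maximum of this posterior; a fuller model would also select each next fixation to maximize expected information gain and would widen the effective position uncertainty at peripheral locations.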

  4. Visual field

    Science.gov (United States)

    ... your visual field. How the Test is Performed: Confrontation visual field exam. This is a quick and ...

  5. Internet Search Engines

    OpenAIRE

    Fatmaa El Zahraa Mohamed Abdou

    2004-01-01

    A general study of internet search engines. The study addresses seven main points: the difference between search engines and search directories, the components of search engines, the percentage of sites covered by search engines, the cataloging of sites, the time needed for sites to appear in search engines, search capabilities, and types of search engines.

  6. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

    The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say, visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons, in at least the human striate cortex. Circuits with specific topologies will reproducibly give rise to visual awareness corresponding to basic aspects of vision such as color, motion, and depth. These quanta of awareness (qualia) are produced by the feedforward sweep through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers such as V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, which likely exist across the kingdom Animalia. Establishing qualia as the fundamental units of visual awareness will thus not only provide a deeper understanding of awareness, but also allow a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  7. Data visualization

    CERN Document Server

    Azzam, Tarek

    2013-01-01

    Do you communicate data and information to stakeholders? In Part 1, we introduce recent developments in the quantitative and qualitative data visualization field and provide a historical perspective on data visualization, its potential role in evaluation practice, and future directions. Part 2 delivers concrete suggestions for optimally using data visualization in evaluation, as well as suggestions for best practices in data visualization design. It focuses on specific quantitative and qualitative data visualization approaches that include data dashboards, graphic recording, and geographic information systems (GIS). Readers will get a step-by-step process for designing an effective data dashboard system for programs and organizations, and various suggestions to improve their utility.

  8. Harvesting social images for bi-concept search

    NARCIS (Netherlands)

    Li, X.; Snoek, C.G.M.; Worring, M.; Smeulders, A.W.M.

    2012-01-01

    Searching for the co-occurrence of two visual concepts in unlabeled images is an important step towards answering complex user queries. Traditional visual search methods use combinations of the confidence scores of individual concept detectors to tackle such queries. In this paper we introduce the

  9. Visual Literacy and Visual Thinking.

    Science.gov (United States)

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  10. Visual Literacy and Visual Culture.

    Science.gov (United States)

    Messaris, Paul

    Familiarity with specific images or sets of images plays a role in a culture's visual heritage. Two questions can be asked about this type of visual literacy: Is this a type of knowledge that is worth building into the formal educational curriculum of our schools? What are the educational implications of visual literacy? There is a three-part…

  11. Positions priming in briefly presented search arrays

    DEFF Research Database (Denmark)

    Asgeirsson, Arni Gunnar; Kristjánsson, Árni; Kyllingsbæk, Søren

    2011-01-01

    Repetition priming in visual search has been a topic of extensive research since Maljkovic & Nakayama [1994, Memory & Cognition, 22, 657-672] presented the first detailed studies of such effects. Their results showed large reductions in reaction times when target color was repeated on consecutive pop-out search trials. Such repetition effects have since been generalized to a multitude of target attributes. Priming has primarily been investigated using self-terminating visual search paradigms, comparing differences in response times; response accuracy has predominantly served as a control ... the targets are oddly colored alphanumeric characters. The effects arise at very low exposure durations and benefit accuracy at all exposure durations towards the subjects’ ceiling. We conclude that temporally constricted experimental conditions can add to our understanding of priming in visual search ...

  12. Long-Term Visual Prognosis of Peripheral Multifocal Chorioretinitis

    NARCIS (Netherlands)

    Ossewaarde-van Norel, J; ten Dam-van Loon, NH; de Boer, JH; Rothova, A.

    2015-01-01

    Purpose: To report on the clinical manifestations, complications, and long-term visual prognosis of patients with peripheral multifocal chorioretinitis and to search for predictors of a lower visual outcome. Design: Retrospective consecutive observational case series. Methods: Setting: institutional.

  13. Optimal random search for a single hidden target.

    Science.gov (United States)

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
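    The square-root rule stated above follows from a short variational argument, sketched here under the simplifying assumption that each trial samples a location independently and that only an exact hit counts as a find. If the target sits at location i with probability p_i and each trial lands on i with probability q_i, the number of trials needed given a target at i is geometric with mean 1/q_i, so the expected number of trials is

      \mathrm{E}[T] \;=\; \sum_i \frac{p_i}{q_i}.

    Minimizing this subject to \sum_i q_i = 1 with a Lagrange multiplier \mu gives -p_i/q_i^{2} + \mu = 0 for every i, hence q_i \propto \sqrt{p_i}, independent of the dimensionality of the space, which is the result reported in the abstract.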

  14. How visual working memory contents influence priming of visual attention.

    Science.gov (United States)

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2, Psychonomic Bulletin and Review). This may reflect an interesting interaction between implicit short-term memory (thought to underlie intertrial priming) and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2, Psychological Science). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  15. The Computational Anatomy of Visual Neglect.

    Science.gov (United States)

    Parr, Thomas; Friston, Karl J

    2018-02-01

    Visual neglect is a debilitating neuropsychological phenomenon that has many clinical implications and-in cognitive neuroscience-offers an important lesion deficit model. In this article, we describe a computational model of visual neglect based upon active inference. Our objective is to establish a computational and neurophysiological process theory that can be used to disambiguate among the various causes of this important syndrome; namely, a computational neuropsychology of visual neglect. We introduce a Bayes optimal model based upon Markov decision processes that reproduces the visual searches induced by the line cancellation task (used to characterize visual neglect at the bedside). We then consider 3 distinct ways in which the model could be lesioned to reproduce neuropsychological (visual search) deficits. Crucially, these 3 levels of pathology map nicely onto the neuroanatomy of saccadic eye movements and the systems implicated in visual neglect. © The Author 2017. Published by Oxford University Press.

  16. The Visual System

    Medline Plus


  17. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization that gives a city's decision-makers the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu, and it can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police ...

  18. Visualization Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Evaluates and improves the operational effectiveness of existing and emerging electronic warfare systems. By analyzing and visualizing simulation results...

  19. Distributed Visualization

    Data.gov (United States)

    National Aeronautics and Space Administration — Distributed Visualization allows anyone, anywhere, to see any simulation, at any time. Development focuses on algorithms, software, data formats, data systems and...

  20. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    Science.gov (United States)

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36 ± 6.74) and 15 normal controls (mean age = 40.87 ± 9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance than normal controls in both feature search and conjunction search, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social functioning and interpersonal relationships.