WorldWideScience

Sample records for visual object categories

  1. Category-specificity in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2009-01-01

Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain-damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the findings are contradictory and there is no agreement as to why category-effects arise. This article presents a Pre-semantic Account of Category Effects (PACE) in visual object recognition. PACE assumes two processing stages: shape configuration (the binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies…

  2. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual object recognition … in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation…

  3. Object-graphs for context-aware visual category discovery.

    Science.gov (United States)

    Lee, Yong Jae; Grauman, Kristen

    2012-02-01

    How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
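As a rough illustration of the 2D layout idea, a toy object-graph descriptor might histogram which familiar categories appear above versus below an unfamiliar region. The category list, box format, and simple above/below split here are illustrative assumptions, not the authors' implementation (which also weights by classifier posteriors and spatial proximity):

```python
from collections import Counter

CATEGORIES = ["building", "car", "person"]  # assumed set of familiar categories

def object_graph_descriptor(unknown_box, known_regions):
    """Toy 2D object-graph: histogram of familiar categories occurring
    above vs. below an unknown region.

    unknown_box   -- (x, y, w, h) of the unfamiliar region
    known_regions -- list of ((x, y, w, h), category) for familiar detections
    """
    _, uy, _, uh = unknown_box
    u_center = uy + uh / 2.0
    above, below = Counter(), Counter()
    for (x, y, w, h), cat in known_regions:
        # Classify each familiar region by its vertical center.
        (above if y + h / 2.0 < u_center else below)[cat] += 1

    def norm(counts):
        total = sum(counts.values()) or 1
        return [counts[cat] / total for cat in CATEGORIES]

    # Concatenate the two normalized histograms into one feature vector.
    return norm(above) + norm(below)
```

Descriptors like this let a discovery method compare unfamiliar regions by the context of familiar objects around them, rather than by appearance alone.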

  4. Normal and abnormal category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not, as some brain injured patients are more impaired in recognizing natural objects than artefacts while others show the opposite impairment. In an attempt to explain category-sp...

  5. Decoding visual object categories from temporal correlations of ECoG signals.

    Science.gov (United States)

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While the decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories.
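A minimal sketch of the correlation-feature idea (pure Python, illustrative only; the study used ECoG channels and more elaborate classifiers): compute pairwise Pearson correlations between electrode time series within a trial and use the upper triangle as the feature vector.

```python
import math

def corr(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def correlation_features(trial):
    """trial: list of per-electrode time series for one trial.
    Returns the upper triangle of the electrode-by-electrode
    correlation matrix as a flat feature vector."""
    n = len(trial)
    return [corr(trial[i], trial[j]) for i in range(n) for j in range(i + 1, n)]
```

Any linear classifier can then consume these features in place of, or alongside, per-electrode power and phase features.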

  6. The role of object categories in hybrid visual and memory search

    Science.gov (United States)

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054
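One simple formalization consistent with these findings (coefficients and the exact functional form are assumptions for illustration, not the authors' fitted model) makes predicted RT linear in the number of display items and logarithmic in memory set size:

```python
import math

def predicted_rt(visual_n, memory_n, base=0.4, v_slope=0.03, m_factor=0.02):
    """Hypothetical hybrid-search RT in seconds.

    visual_n -- number of items in the visual display (linear cost)
    memory_n -- number of targets held in memory, >= 1 (logarithmic cost)
    The per-item visual slope grows with log2 of the memory set size.
    """
    return base + visual_n * (v_slope + m_factor * math.log2(memory_n))
```

Under this sketch, doubling the memory set always adds the same increment to RT, while adding display items adds a constant per-item cost, matching the linear-by-log pattern described above.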

  7. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    Science.gov (United States)

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  9. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

    Science.gov (United States)

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

The categories of visual objects in images can be successfully recognized from single-trial electroencephalography (EEG) recorded while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from the ERP components. First, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Second, we discriminated four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracies by exploiting the complementarity of discriminative information across components. These findings confirm that four categories of object images can be discriminated with single-trial EEG, and they can guide the selection of effective EEG features for classifying visual objects.
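The component-window feature extraction described above can be sketched as follows (the window boundaries and sampling rate are assumed values for illustration; real component windows are dataset- and subject-specific):

```python
# Assumed ERP component windows in ms post-stimulus (illustrative only).
WINDOWS = {"P1": (80, 130), "N1": (140, 200), "P2a": (220, 275), "P2b": (290, 350)}

def erp_features(epoch, srate=1000.0, components=("P1", "N1", "P2a", "P2b")):
    """epoch: list of per-channel voltage series for one trial (stimulus at t=0).
    Returns the mean amplitude per channel per requested ERP window,
    concatenated across components -- the combine-components strategy
    lets a linear classifier exploit complementary information."""
    feats = []
    for name in components:
        lo, hi = WINDOWS[name]
        i0, i1 = int(lo * srate / 1000), int(hi * srate / 1000)
        for chan in epoch:
            seg = chan[i0:i1]
            feats.append(sum(seg) / len(seg))
    return feats
```

Passing a subset of component names (e.g., just `("N1",)`) reproduces the single-component baseline for comparison.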

  10. Mere exposure alters category learning of novel objects

    Directory of Open Access Journals (Sweden)

    Jonathan R Folstein

    2010-08-01

We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  12. Color descriptors for object category recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2008-01-01

Category recognition is important to access visual information on the level of objects. A common approach is to compute image descriptors first and then to apply machine learning to achieve category recognition from annotated examples. As a consequence, the choice of image descriptors is of great importance…

  14. Establishing Visual Category Boundaries between Objects: A PET Study

    Science.gov (United States)

    Saumier, Daniel; Chertkow, Howard; Arguin, Martin; Whatmough, Cristine

    2005-01-01

    Individuals with Alzheimer's disease (AD) often have problems in recognizing common objects. This visual agnosia may stem from difficulties in establishing appropriate visual boundaries between visually similar objects. In support of this hypothesis, Saumier, Arguin, Chertkow, and Renfrew (2001) showed that AD subjects have difficulties in…

  15. Visual memory needs categories

    OpenAIRE

    Olsson, Henrik; Poom, Leo

    2005-01-01

Capacity limitations in the way humans store and process information in working memory have been extensively studied, and several memory systems have been distinguished. In line with previous capacity estimates for verbal memory and memory for spatial information, recent studies suggest that it is possible to retain up to four objects in visual working memory. The objects used have typically been categorically different colors and shapes. Because knowledge about categories is stored in long-term memory…

  16. Basic level category structure emerges gradually across human ventral visual cortex.

    Science.gov (United States)

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
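The cohesion and distinctiveness measures can be illustrated with a small sketch (pure Python; the study computed these over correlations of multivoxel fMRI patterns, and the exact similarity measure used here is an assumption):

```python
import itertools, math

def corr(a, b):
    """Pearson correlation between two equal-length patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def cohesion_and_distinctiveness(patterns):
    """patterns: {category: [activity pattern, ...]}.
    Returns (category cohesion, category distinctiveness):
    mean within-category correlation, and one minus the mean
    between-category correlation."""
    within = [corr(p, q)
              for plist in patterns.values()
              for p, q in itertools.combinations(plist, 2)]
    between = [corr(p, q)
               for (ca, pa), (cb, pb) in itertools.combinations(patterns.items(), 2)
               for p in pa for q in pb]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(within), 1 - mean(between)
```

A basic-level advantage would appear as both values rising, relative to the same computation over subordinate- or superordinate-level groupings.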

  17. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    Science.gov (United States)

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  18. Task-relevant perceptual features can define categories in visual memory too.

    Science.gov (United States)

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  19. Two Types of Visual Objects

    Directory of Open Access Journals (Sweden)

    Skrzypulec Błażej

    2015-06-01

While it is widely accepted that human vision represents objects, it is less clear which of the various philosophical notions of ‘object’ adequately characterizes visual objects. In this paper, I show that within contemporary cognitive psychology visual objects are characterized in two distinct, incompatible ways. On the one hand, models of visual organization describe visual objects in terms of combinations of features, in accordance with the philosophical bundle theories of objects. However, models of visual persistence apply a notion of visual objects that is more similar to that endorsed in philosophical substratum theories. Here I discuss arguments that might show either that only one of the above notions of visual objects is adequate in the context of human vision, or that the category of visual objects is not uniform and contains entities properly characterized by different philosophical conceptions.

  20. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose an online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
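The category-selection step, choosing the latent category that best discriminates the target class, can be sketched as follows (the mean-activation gap used here is a simplified stand-in for the discrimination measure in the paper):

```python
def most_discriminative(category_scores, labels):
    """category_scores: per-image lists of latent-category activations
    (one inner list per image); labels: 1 if the image contains the
    target class, else 0.  Returns the index of the latent category
    whose mean activation best separates positive from negative images."""
    n_cat = len(category_scores[0])

    def gap(k):
        pos = [s[k] for s, y in zip(category_scores, labels) if y]
        neg = [s[k] for s, y in zip(category_scores, labels) if not y]
        return sum(pos) / len(pos) - sum(neg) / len(neg)

    return max(range(n_cat), key=gap)
```

Latent categories that fire mostly on background (e.g., sky) score a small gap and are suppressed; the category tracking the object itself scores the largest gap and is selected.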

  1. Object representations in visual memory: evidence from visual illusions.

    Science.gov (United States)

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  2. Category-based guidance of spatial attention during visual search for feature conjunctions.

    Science.gov (United States)

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search.

  3. Now you see it, now you don’t: The context dependent nature of category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Toft, Kristian Olesen

    2011-01-01

In two experiments, we test predictions regarding processing advantages/disadvantages for natural objects and artefacts in visual object recognition. Varying three important parameters (degree of perceptual differentiation, stimulus format, and stimulus exposure duration), we show how different … category-effects are products of common operations which are differentially affected by the structural similarity among objects (with natural objects being more structurally similar than artefacts). The potentially most important aspect of the present study is the demonstration that category-effects are very context dependent…

  4. Category-based attentional guidance can operate in parallel for multiple target objects.

    Science.gov (United States)

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2018-04-30

The question whether the control of attention during visual search is always feature-based or can also be based on the category of objects remains unresolved. Here, we employed the N2pc component as an on-line marker for target selection processes to compare the efficiency of feature-based and category-based attentional guidance. Two successive displays containing pairs of real-world objects (line drawings of kitchen or clothing items) were separated by a 10 ms SOA. In Experiment 1, target objects were defined by their category. In Experiment 2, one specific visual object served as target (exemplar-based search). On different trials, targets appeared either in one or in both displays, and participants had to report the number of targets (one or two). Target N2pc components were larger and emerged earlier during exemplar-based search than during category-based search, demonstrating the superior efficiency of feature-based attentional guidance. On trials where target objects appeared in both displays, both targets elicited N2pc components that overlapped in time, suggesting that attention was allocated in parallel to these target objects. Critically, this was the case not only in the exemplar-based task, but also when targets were defined by their category. These results demonstrate that attention can be guided by object categories, and that this type of category-based attentional control can operate concurrently for multiple target objects.

  5. Category-Specific Visual Recognition and Aging from the PACE Theory Perspective: Evidence for a Presemantic Deficit in Aging Object Recognition

    DEFF Research Database (Denmark)

    Bordaberry, Pierre; Gerlach, Christian; Lenoble, Quentin

    2016-01-01

Background/Study Context: The objective of this study was to investigate the object recognition deficit in aging. Age-related declines were examined from the presemantic account of category effects (PACE) theory perspective (Gerlach, 2009, Cognition, 111, 281–301). This view assumes that the structural similarity/dissimilarity inherent in living and nonliving objects, respectively, can account for a wide range of category-specific effects. Methods: In two experiments on object recognition, young (36 participants, 18–27 years) and older (36 participants, 53–69 years) adult participants … in the selection stage of the PACE theory (visual long-term memory matching) could be responsible for these impairments. Indeed, the older group showed a deficit when this stage was most relevant. This article emphasizes the critical need to take into account the structural component of the stimuli and the type…

  6. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    Science.gov (United States)

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties1. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties2. Many innovative and useful methods currently exist for creating novel objects and object categories3-6 (also see refs. 7,8). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter5,9,10, and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects11-13. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis14. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection9,12,13. 
Objects and object categories created
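The core idea behind virtual phylogenesis can be sketched in a few lines: categories arise by repeated inheritance with random mutation of shape parameters, so within-category variability emerges from the simulated process rather than from an experimenter-imposed rule. The sketch below is illustrative only (the function name, parameter vector, and mutation model are assumptions, not the authors' implementation).

```python
import numpy as np

def virtual_phylogenesis(ancestor, generations=3, children=2, mutation=0.2, rng=None):
    """Grow a tree of shape-parameter vectors by repeated inheritance with
    random mutation; each generation of descendants inherits its parent's
    parameters plus Gaussian perturbations, so category variability is a
    product of the process, not an externally imposed constraint."""
    rng = np.random.default_rng(rng)
    population = [ancestor]
    for _ in range(generations):
        population = [parent + rng.normal(0.0, mutation, size=parent.shape)
                      for parent in population
                      for _ in range(children)]
    return population

ancestor = np.zeros(16)   # 16 abstract shape parameters for the common ancestor
leaves = virtual_phylogenesis(ancestor, generations=3, children=2, mutation=0.2, rng=0)
print(len(leaves))        # 2**3 = 8 descendant objects, one small category
```

Descendants of one internal node form a category; deeper branching yields nested subcategories, and the mutation magnitude tunes how much information distinguishes categories.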

  7. Perceptual differentiation and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Gade, A

    1999-01-01

    The purpose of the present PET study was (i) to investigate the neural correlates of object recognition, i.e. the matching of visual forms to memory, and (ii) to test the hypothesis that this process is more difficult for natural objects than for artefacts. This was done by using object decision...... tasks where subjects decided whether pictures represented real objects or non-objects. The object decision tasks differed in their difficulty (the degree of perceptual differentiation needed to perform them) and in the category of the real objects used (natural objects versus artefacts). A clear effect...... be the neural correlate of matching visual forms to memory, and the amount of activation in these regions may correspond to the degree of perceptual differentiation required for recognition to occur. With respect to behaviour, it took significantly longer to make object decisions on natural objects than...

  8. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100) followed by a broader positive component (P180), whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  9. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects.

    Science.gov (United States)

    Konkle, Talia; Brady, Timothy F; Alvarez, George A; Oliva, Aude

    2010-08-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers' capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. 2010 APA, all rights reserved

  10. Category-specific visual responses: an intracranial study comparing gamma, beta, alpha and ERP response selectivity

    Directory of Open Access Journals (Sweden)

    Juan R Vidal

    2010-11-01

    The specificity of neural responses to visual objects is a major topic in visual neuroscience. In humans, functional magnetic resonance imaging (fMRI) studies have identified several regions of the occipital and temporal lobe that appear specific to faces, letter-strings, scenes, or tools. Direct electrophysiological recordings in the visual cortical areas of epileptic patients have largely confirmed this modular organization, using either single-neuron peri-stimulus time-histograms or intracerebral event-related potentials (iERP). In parallel, a new research stream has emerged using high-frequency gamma-band activity (50-150 Hz; GBR) and low-frequency alpha/beta activity (8-24 Hz; ABR) to map functional networks in humans. An obvious question is now whether the functional organizations of the visual cortex revealed by fMRI, ERP, GBR, and ABR coincide. We used direct intracerebral recordings in 18 epileptic patients to directly compare GBR, ABR, and ERP elicited by the presentation of seven major visual object categories (faces, scenes, houses, consonants, pseudowords, tools, and animals), in relation to previous fMRI studies. Remarkably, both GBR and iERP showed strong category-specificity that was in many cases sufficient to infer stimulus object category from the neural response at the single-trial level. However, we also found a strong discrepancy between the selectivity of GBR, ABR, and ERP, with less than 10% of spatial overlap between sites eliciting the same category-specificity. Overall, we found that selective neural responses to visual objects were broadly distributed in the brain, with a prominent spatial cluster located in the posterior temporal cortex. Moreover, the different neural markers (GBR, ABR, and iERP) that elicit selectivity towards specific visual object categories present little spatial overlap, suggesting that the information content of each marker can uniquely characterize high-level visual information in the brain.
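Single-trial inference of object category from a neural response, as reported above, amounts to a classification problem over recording sites. A minimal sketch of one such read-out (a nearest-centroid classifier on simulated band-power features; the data, site profiles, and noise level are all invented for illustration, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_sites = 200, 12

# Simulated single-trial responses (e.g., gamma-band power) at 12 recording
# sites: 'face' trials drive a different subset of sites than 'tool' trials.
labels = rng.integers(0, 2, n_trials)          # 0 = face, 1 = tool
profiles = np.array([[1.0] * 6 + [0.0] * 6,    # face response profile
                     [0.0] * 6 + [1.0] * 6])   # tool response profile
X = profiles[labels] + rng.normal(0.0, 0.8, (n_trials, n_sites))

# Split trials and classify each held-out trial by its nearest class centroid.
train, test = np.arange(100), np.arange(100, 200)
centroids = np.array([X[train][labels[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
accuracy = (dists.argmin(axis=1) == labels[test]).mean()
print(accuracy)   # far above the 0.5 chance level
```

Decoding accuracy well above chance on held-out trials is the operational meaning of "sufficient to infer stimulus object category at the single-trial level".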

  11. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alterations in low-level image properties when contrasting distinct object categories. When such a contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and showed that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
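The logic of frequency-tagging is that a region tracking the periodic semantic modulation shows elevated spectral power at the tag frequency, while a region responding steadily does not. A toy read-out of that signature (the signals, tag frequency, and noise model are illustrative assumptions, not SWIFT itself):

```python
import numpy as np

fs, seconds, f_tag = 100.0, 10.0, 1.5    # sampling rate (Hz), duration (s), tag frequency (Hz)
t = np.arange(int(fs * seconds)) / fs
rng = np.random.default_rng(0)

# A category-selective 'area' follows the periodic semantic modulation;
# an early visual 'area' responds steadily (noise around a constant level).
selective = 0.8 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0.0, 1.0, t.size)
early = rng.normal(0.0, 1.0, t.size)

def power_at(signal, freq, fs):
    """Spectral power in the FFT bin nearest to the given frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

print(power_at(selective, f_tag, fs) > power_at(early, f_tag, fs))   # True
```

The tag frequency is chosen so an integer number of cycles fits the recording (15 cycles in 10 s here), which keeps the tagged power in a single FFT bin.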

  12. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    Science.gov (United States)

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
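A linear classifier applied to population activity, as in the pigeon study, tests whether category information is linearly readable from firing rates. A minimal sketch with simulated populations (the signal structure, cell counts, and least-squares decoder are illustrative assumptions; the study's actual analysis pipeline may differ):

```python
import numpy as np

def linear_readout_accuracy(X, y, rng):
    """Fit a least-squares linear classifier on half of the trials and
    report classification accuracy on the withheld half."""
    idx = rng.permutation(len(y))
    tr, te = idx[: len(y) // 2], idx[len(y) // 2:]
    Xb = np.hstack([X, np.ones((len(y), 1))])          # append a bias column
    w, *_ = np.linalg.lstsq(Xb[tr], 2.0 * y[tr] - 1.0, rcond=None)
    return ((Xb[te] @ w > 0) == y[te]).mean()

rng = np.random.default_rng(2)
n_trials, n_cells = 300, 40
y = rng.integers(0, 2, n_trials)                       # 0 = inanimate, 1 = animate

# 'MVL-like' population: each cell carries a weak animate/inanimate signal.
signal = rng.uniform(0.0, 1.0, n_cells)
mvl = rng.normal(0.0, 1.0, (n_trials, n_cells)) + 0.5 * (2 * y[:, None] - 1) * signal
# 'Entopallium-like' population: same noise, no category signal at all.
ento = rng.normal(0.0, 1.0, (n_trials, n_cells))

acc_mvl = linear_readout_accuracy(mvl, y, rng)
acc_ento = linear_readout_accuracy(ento, y, rng)
print(acc_mvl, acc_ento)   # MVL read-out far above chance; entopallium near 0.5
```

The contrast between the two accuracies mirrors the paper's finding: category information decodable in MVL but absent in the entopallium.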

  13. The effect of category learning on attentional modulation of visual cortex.

    Science.gov (United States)

    Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas

    2017-09-01

    Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features, but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals after which EEG was recorded while participants distinguished between a target and non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to number of target features in the second session, possibly suggesting a transition from rule-based to similarity based categorization. Copyright © 2017. Published by Elsevier Ltd.

  14. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
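Voxel-wise modeling as described above fits an encoding model per voxel and scores it by variance explained on withheld data; correlated feature spaces then explain largely shared variance. A compact sketch of that comparison (ridge regression stands in for the paper's linear regression; the feature spaces, noise levels, and variable names are invented for illustration):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def variance_explained(X_tr, y_tr, X_te, y_te, alpha=1.0):
    """Fit an encoding model on training data; report R^2 on withheld data."""
    w = ridge_fit(X_tr, y_tr, alpha)
    resid = y_te - X_te @ w
    return 1.0 - resid.var() / y_te.var()

rng = np.random.default_rng(3)
n_tr, n_te, d = 400, 100, 20

# Two hypothetical feature spaces for the same images; because they are
# strongly correlated, both predict much of the same response variance.
fourier = rng.normal(0.0, 1.0, (n_tr + n_te, d))
distance = 0.8 * fourier + 0.2 * rng.normal(0.0, 1.0, (n_tr + n_te, d))

w_true = rng.normal(0.0, 1.0, d)
bold = fourier @ w_true + rng.normal(0.0, 2.0, n_tr + n_te)   # one voxel's responses

r2_fourier = variance_explained(fourier[:n_tr], bold[:n_tr], fourier[n_tr:], bold[n_tr:])
r2_distance = variance_explained(distance[:n_tr], bold[:n_tr], distance[n_tr:], bold[n_tr:])
print(r2_fourier, r2_distance)   # both substantially positive: shared predicted variance
```

When competing models' held-out R^2 values are similar and their predictions overlap, the data cannot favor one hypothesis, which is the paper's central conclusion.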

  15. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg Layher

    2014-12-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory

  16. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. 
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations
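The recruitment rule described in this abstract can be caricatured in a few lines: compare each input with the expected pattern of its best-matching category node, refine that node when the mismatch is small, and recruit a new node when it is large. This is an illustrative adaptive-resonance-style sketch, not the authors' compartmental network model.

```python
import numpy as np

def learn_categories(inputs, mismatch_threshold=1.0, lr=0.2):
    """Incremental category learning: a large difference between an input and
    the expected input of its best-matching node recruits a new node; a small
    difference instead refines (moves) the existing node's prototype."""
    prototypes = []
    for x in inputs:
        if prototypes:
            dists = [np.linalg.norm(x - p) for p in prototypes]
            best = int(np.argmin(dists))
            if dists[best] < mismatch_threshold:
                prototypes[best] += lr * (x - prototypes[best])   # refine category
                continue
        prototypes.append(x.copy())                               # recruit new node
    return prototypes

rng = np.random.default_rng(4)
centers = np.array([[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]])
inputs = np.concatenate([c + rng.normal(0.0, 0.15, (20, 2)) for c in centers])
rng.shuffle(inputs)
print(len(learn_categories(inputs)))   # one recruited node per underlying cluster
```

Lowering `mismatch_threshold` splits clusters into finer subcategories, which is the sketch's analogue of category refinement.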

  17. Structural and effective connectivity reveals potential network-based influences on category-sensitive visual areas

    Directory of Open Access Journals (Sweden)

    Nicholas Furl

    2015-05-01

    Visual category perception is thought to depend on brain areas that respond specifically when certain categories are viewed. These category-sensitive areas are often assumed to be modules (with some degree of processing autonomy) and to act predominantly on feedforward visual input. This modular view can be complemented by a view that treats brain areas as elements within more complex networks and as influenced by network properties. This network-oriented viewpoint is emerging from studies using either diffusion tensor imaging to map structural connections or effective connectivity analyses to measure how their functional responses influence each other. This literature motivates several hypotheses that predict category-sensitive activity based on network properties. Large, long-range fiber bundles such as the inferior fronto-occipital, arcuate, and inferior longitudinal fasciculi are associated with behavioural recognition and could play crucial roles in conveying backward influences on visual cortex from anterior temporal and frontal areas. Such backward influences could support top-down functions such as visual search and emotion-based visual modulation. Within visual cortex itself, areas sensitive to different categories appear well-connected (e.g., face areas connect to object- and motion-sensitive areas), and their responses can be predicted by backward modulation. Evidence supporting these propositions remains incomplete and underscores the need for better integration of DTI and functional imaging.

  18. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    Science.gov (United States)

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
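The distance hypothesis tested here predicts a negative correlation between an exemplar's distance from the decision boundary in activation space and the reaction time to categorize it. A toy demonstration of that relationship (the representations, the linear boundary, and the RT model are simulated assumptions, not MEG data):

```python
import numpy as np

rng = np.random.default_rng(5)
n_exemplars, dims = 150, 8

# Simulated exemplar representations around a linear category boundary,
# plus reaction times that shrink as representations move away from it.
w = rng.normal(0.0, 1.0, dims)
w /= np.linalg.norm(w)                       # unit normal of the decision boundary
reps = rng.normal(0.0, 1.0, (n_exemplars, dims))
distance = np.abs(reps @ w)                  # distance from the boundary
rt = 600.0 - 80.0 * distance + rng.normal(0.0, 30.0, n_exemplars)   # RT in ms

r = np.corrcoef(distance, rt)[0, 1]
print(r)   # strongly negative: the farther from the boundary, the faster the response
```

In the study, `distance` would come from a decoder fit to MEG activation patterns at the time of peak decodability, and `rt` from the behavioral responses.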

  19. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers.

    Science.gov (United States)

    Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-08-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
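Cross-situational learning of the kind tested here can be reduced to accumulating word-object co-occurrence counts across individually ambiguous trials. A minimal sketch (the pseudo-words, objects, and trial structure are invented stand-ins for the training stimuli):

```python
from collections import Counter
from itertools import product

# Each 'situation' pairs spoken words with visible objects; the mapping is
# ambiguous within any one trial but resolvable across trials.
trials = [
    (["bosa", "gasser"], ["dog", "cup"]),
    (["bosa", "manu"],   ["dog", "shoe"]),
    (["gasser", "manu"], ["cup", "shoe"]),
    (["bosa", "gasser"], ["cup", "dog"]),
]

cooc = Counter()
for words, objects in trials:
    for w, o in product(words, objects):   # every word co-occurs with every object
        cooc[(w, o)] += 1

def best_referent(word):
    """Pick the object that co-occurred with the word most often."""
    candidates = {o for (w, o) in cooc if w == word}
    return max(candidates, key=lambda o: cooc[(word, o)])

print(best_referent("bosa"))    # dog
print(best_referent("gasser"))  # cup
print(best_referent("manu"))    # shoe
```

Category formation adds a second statistic on top of this one: commonalities among the objects a word labels, tracked concurrently with the word-object co-occurrences.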

  1. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    Science.gov (United States)

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2010-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars…

  2. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    Science.gov (United States)

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  3. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    Science.gov (United States)

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect-namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which combines an early, left-lateralized category effect with a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  4. What are the visual features underlying rapid object recognition?

    Directory of Open Access Journals (Sweden)

    Sébastien M Crouzet

    2011-11-01

    Research progress in machine vision has been very significant in recent years. Robust face detection and identification algorithms are already readily available to consumers, and modern computer vision algorithms for generic object recognition are now coping with the richness and complexity of natural visual scenes. Unlike early vision models of object recognition that emphasized the role of figure-ground segmentation and spatial information between parts, recent successful approaches are based on the computation of loose collections of image features without prior segmentation or any explicit encoding of spatial relations. While these models remain simplistic models of visual processing, they suggest that, in principle, bottom-up activation of a loose collection of image features could support the rapid recognition of natural object categories and provide an initial coarse visual representation before more complex visual routines and attentional mechanisms take place. Focusing on biologically plausible computational models of (bottom-up) pre-attentive visual recognition, we review some of the key visual features that have been described in the literature. We discuss the consistency of these feature-based representations with classical theories from visual psychology and test their ability to account for human performance on a rapid object categorization task.

  5. Dependence of behavioral performance on material category in an object grasping task with monkeys.

    Science.gov (United States)

    Yokoi, Isao; Tachibana, Atsumichi; Minamimoto, Takafumi; Goda, Naokazu; Komatsu, Hidehiko

    2018-05-02

    Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys' behavior in the grasping task, measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests employing real objects is important when studying behaviors related to material perception.

  6. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    Science.gov (United States)

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  7. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
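The train-on-isolated, test-on-pairs analysis can be sketched in a few lines. Everything below is synthetic and simplified: a least-squares linear decoder stands in for whatever decoders the study used, and the two-object response is modeled as the average of the two single-object responses, which is an assumption about response suppression, not a claim about the recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_train, n_test = 50, 200, 100

# Synthetic "field potential" patterns: two target categories plus a distractor
cat_a, cat_b, distractor = (rng.normal(size=n_feat) for _ in range(3))

def respond(pattern, n):
    """Noisy single-trial responses to an isolated object."""
    return pattern + 0.5 * rng.normal(size=(n, n_feat))

# Train a least-squares linear decoder on ISOLATED-object responses only
X = np.vstack([respond(cat_a, n_train), respond(cat_b, n_train)])
y = np.r_[np.ones(n_train), -np.ones(n_train)]
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Two-object responses modeled as the AVERAGE of the two single-object
# responses (a simplifying assumption about suppression by a second object)
pair_a = (respond(cat_a, n_test) + respond(distractor, n_test)) / 2
pair_b = (respond(cat_b, n_test) + respond(distractor, n_test)) / 2
X_pair = np.vstack([pair_a, pair_b])
y_pair = np.r_[np.ones(n_test), -np.ones(n_test)]

# Does the decoder trained on isolated objects still read out the target?
pred = np.sign(np.c_[X_pair, np.ones(len(X_pair))] @ w)
accuracy = (pred == y_pair).mean()
print(f"decoding accuracy on two-object trials: {accuracy:.2f}")
```

The point of the sketch is the logic of the analysis: if category information survives the addition of a second object, a decoder that has never seen pairs should still generalize to them.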

  8. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with a viewing experience that is realistic but at times uncomfortable, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. First, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one with the largest average disparity. Second, three visual features, the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and, third, apply four different models to predict visual comfort more precisely. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
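Metrics of this kind are conventionally validated by correlating objective predictions with subjective mean opinion scores, using PLCC for linearity and SROCC for rank agreement. A minimal numpy sketch with hypothetical score arrays (not the paper's data):

```python
import numpy as np

# Hypothetical data: objective comfort predictions vs. subjective opinion scores
objective = np.array([3.1, 4.0, 2.2, 4.5, 3.7, 1.9, 4.8, 2.6])
subjective = np.array([3.0, 4.2, 2.0, 4.4, 3.5, 2.1, 4.9, 2.8])

def plcc(x, y):
    """Pearson linear correlation: linearity of the prediction."""
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks
    (valid here because there are no ties)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))

print(f"PLCC={plcc(objective, subjective):.3f}, SROCC={srocc(objective, subjective):.3f}")
```

In practice a nonlinear regression step is often applied before computing PLCC; the sketch omits that refinement.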

  9. Understanding visualization: a formal approach using category theory and semiotics.

    Science.gov (United States)

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  10. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database.

    Directory of Open Access Journals (Sweden)

    Michael C Hout

    Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
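The MDS solutions reported in the database are derived from pairwise dissimilarities. As a rough illustration of the underlying computation (the textbook classical/Torgerson algorithm, not the authors' exact pipeline, which used the spatial arrangement method and reports per-category stress and fit), coordinates can be recovered from a dissimilarity matrix by double-centering and eigendecomposition:

```python
import numpy as np

def classical_mds(D, n_dims=2):
    """Classical (Torgerson) MDS: embed items from a pairwise dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared dissimilarities
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]     # largest eigenvalues first
    L = np.maximum(eigvals[order[:n_dims]], 0)
    return eigvecs[:, order[:n_dims]] * np.sqrt(L)

# Toy example: 4 points whose dissimilarities are exact 2-D Euclidean distances
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X = classical_mds(D, n_dims=2)
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(D, D_hat))  # distances recovered (up to rotation/reflection)
```

With real ratings the dissimilarities are not exactly Euclidean, which is why the database reports stress values quantifying how well each low-dimensional solution fits.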

  11. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D; Brady, Kyle J

    2014-01-01

    Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."

  12. Lifting to cluster-tilting objects in higher cluster categories

    OpenAIRE

    Liu, Pin

    2008-01-01

    In this note, we consider the $d$-cluster-tilted algebras, the endomorphism algebras of $d$-cluster-tilting objects in $d$-cluster categories. We show that a tilting module over such an algebra lifts to a $d$-cluster-tilting object in this $d$-cluster category.

  13. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B.

    2006-01-01

    a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories...

  14. A model of primate visual cortex based on category-specific redundancies in natural images

    Science.gov (United States)

    Malmir, Mohsen; Shiry Ghidary, S.

    2010-12-01

    Neurophysiological and computational studies have proposed that properties of natural images have a prominent role in shaping selectivity of neurons in the visual cortex. An important property of natural images that has been studied extensively is the inherent redundancy in these images. In this paper, the concept of category-specific redundancies is introduced to describe the complex pattern of dependencies between responses of linear filters to natural images. It is proposed that structural similarities between images of different object categories result in dependencies between responses of linear filters in different spatial scales. It is also proposed that the brain gradually removes these dependencies in different areas of the ventral visual hierarchy to provide a more efficient representation of its sensory input. The authors propose a model to remove these redundancies and train it on a set of natural images using general learning rules developed to remove dependencies between the responses of neighbouring neurons. Results of experiments demonstrate the close resemblance of neuronal selectivity between different layers of the model and their corresponding visual areas.
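Removing dependencies between filter responses is the core idea of efficient-coding accounts. The paper's learning rules are its own; as a generic illustration of second-order redundancy reduction, ZCA whitening transforms correlated responses so that their covariance becomes the identity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated "filter responses": mix independent sources to induce dependencies
n_filters, n_samples = 8, 5000
mixing = rng.normal(size=(n_filters, n_filters))
responses = rng.normal(size=(n_samples, n_filters)) @ mixing.T

# ZCA whitening: rotate into the eigenbasis of the covariance, rescale, rotate back
X = responses - responses.mean(axis=0)
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-12)) @ eigvecs.T
X_white = X @ W.T  # W is symmetric; transpose kept for clarity

# Second-order dependencies are removed: covariance is (close to) the identity
print(np.allclose(X_white.T @ X_white / (len(X_white) - 1),
                  np.eye(n_filters), atol=1e-6))
```

Note that whitening removes only linear (second-order) dependencies; the higher-order, category-specific dependencies the paper targets require nonlinear stages, which is the motivation for its hierarchical model.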

  15. Linguistic labels, dynamic visual features, and attention in infant category learning.

    Science.gov (United States)

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

    How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Prior knowledge of category size impacts visual search.

    Science.gov (United States)

    Wu, Rachel; McGee, Brianna; Echiverri, Chelsea; Zinszer, Benjamin D

    2018-03-30

    Prior research has shown that category search can be similar to one-item search (as measured by the N2pc ERP marker of attentional selection) for highly familiar, smaller categories (e.g., letters and numbers) because the finite set of items in a category can be grouped into one unit to guide search. Other studies have shown that larger, more broadly defined categories (e.g., healthy food) also can elicit N2pc components during category search, but the amplitude of these components is typically attenuated. Two experiments investigated whether the perceived size of a familiar category impacts category and exemplar search. We presented participants with 16 familiar company logos: 8 from a smaller category (social media companies) and 8 from a larger category (entertainment/recreation manufacturing companies). The ERP results from Experiment 1 revealed that, in a two-item search array, search was more efficient for the smaller category of logos compared to the larger category. In a four-item search array (Experiment 2), where two of the four items were placeholders, search was largely similar between the category types, but there was more attentional capture by nontarget members from the same category as the target for smaller rather than larger categories. These results support a growing literature on how prior knowledge of categories affects attentional selection and capture during visual search. We discuss the implications of these findings in relation to assessing cognitive abilities across the lifespan, given that prior knowledge typically increases with age. © 2018 Society for Psychophysiological Research.

  17. The perceptual effects of learning object categories that predict perceptual goals

    Science.gov (United States)

    Van Gulick, Ana E.; Gauthier, Isabel

    2014-01-01

    In classic category learning studies, subjects typically learn to assign items to one of two categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance with objects in only one category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In two experiments, subjects first learned to categorize complex objects from a single morphspace into two categories based on one morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the two categories. A same-different discrimination test before and after each training measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the non-diagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations. PMID:24820671

  18. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of database back-end and visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to shorten the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We... also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement... path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to demonstrate that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses...

  19. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
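The core quantity in this kind of account, an item's similarity to members within versus outside its category, can be caricatured in a few lines. The dissimilarity values below are hypothetical, and the "category signal" (mean between-category dissimilarity minus mean within-category dissimilarity) is a simplified stand-in for the paper's actual predictors; larger values would predict faster categorization:

```python
import numpy as np

# Hypothetical pairwise dissimilarities among 6 items: indices 0-2 form one
# category, 3-5 another (within-category pairs rated more similar)
D = np.array([
    [0.0, 1.0, 1.2, 3.0, 2.8, 3.1],
    [1.0, 0.0, 0.9, 2.9, 3.2, 3.0],
    [1.2, 0.9, 0.0, 3.1, 3.0, 2.7],
    [3.0, 2.9, 3.1, 0.0, 1.1, 0.8],
    [2.8, 3.2, 3.0, 1.1, 0.0, 1.0],
    [3.1, 3.0, 2.7, 0.8, 1.0, 0.0],
])
labels = np.array([0, 0, 0, 1, 1, 1])

def category_signal(D, labels, i):
    """Mean dissimilarity of item i to items OUTSIDE its category minus mean
    dissimilarity to other items INSIDE it; larger values predict faster
    categorization in this kind of model."""
    same = (labels == labels[i]) & (np.arange(len(labels)) != i)
    diff = labels != labels[i]
    return D[i, diff].mean() - D[i, same].mean()

signals = np.array([category_signal(D, labels, i) for i in range(len(labels))])
print(signals.round(2))
```

An atypical exemplar, one unusually far from its own category members, would show a smaller signal and hence a slower predicted categorization time, matching phenomenon (b) in the abstract.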

  20. Attribute conjunctions and the part configuration advantage in object category learning.

    Science.gov (United States)

    Saiki, J; Hummel, J E

    1996-07-01

    Five experiments demonstrated that in object category learning people are particularly sensitive to conjunctions of part shapes and relative locations. Participants learned categories defined by a part's shape and color (part-color conjunctions) or by a part's shape and its location relative to another part (part-location conjunctions). The statistical properties of the categories were identical across these conditions, as were the salience of color and relative location. Participants were better at classifying objects defined by part-location conjunctions than objects defined by part-color conjunctions. Subsequent experiments revealed that this effect was not due to the specific color manipulation or the role of location per se. These results suggest that the shape bias in object categorization is at least partly due to sensitivity to part-location conjunctions and suggest a new processing constraint on category learning.

  1. Categorization and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Gade, Anders

    2000-01-01

    and that the categorization of artefacts, as opposed to the categorization of natural objects, is based, in part, on action knowledge mediated by the left premotor cortex. However, because artefacts and natural objects often caused activation in the same regions within tasks, processing of these categories is not totally...

  2. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  3. Structural similarity and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationshi...

  4. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    Directory of Open Access Journals (Sweden)

    Haline E. Schendan

    2015-09-01

    Full Text Available People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition.

  5. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions.

    Science.gov (United States)

    Schendan, Haline E; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition.

  6. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Paulson, Olaf B.

    2006-01-01

    and fragmented drawings. We also examined whether fragmentation had different impact on the recognition of natural objects and artefacts and found that recognition of artefacts was more affected by fragmentation than recognition of natural objects. Thus, the usual finding of an advantage for artefacts...... in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...... a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories...

  7. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    Science.gov (United States)

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  8. Infant visual attention and object recognition.

    Science.gov (United States)

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    Science.gov (United States)

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
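The logic of millisecond-resolution decoding latency can be sketched with synthetic data: decode category at each time bin independently, and read off the first bin where accuracy rises above chance. Everything here is simulated (a nearest-centroid decoder, a hypothetical signal onset), standing in for the study's recordings and decoders:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_chan, n_time = 100, 16, 60   # e.g. 60 bins of 5 ms = 300 ms
onset = 20                               # category signal appears at bin 20 (~100 ms)

# Synthetic field potentials: a category-specific pattern switches on at `onset`
pattern = rng.normal(size=n_chan)
X = rng.normal(size=(2 * n_trials, n_chan, n_time))
X[:n_trials, :, onset:] += pattern[:, None]          # category A trials
labels = np.r_[np.ones(n_trials), np.zeros(n_trials)]

def window_accuracy(t, n_train=70):
    """Nearest-centroid decoding at time bin t, with a train/test split per class."""
    tr = np.r_[:n_train, n_trials:n_trials + n_train]   # training trials
    te = np.setdiff1d(np.arange(2 * n_trials), tr)      # held-out trials
    mu1 = X[tr[labels[tr] == 1], :, t].mean(axis=0)
    mu0 = X[tr[labels[tr] == 0], :, t].mean(axis=0)
    closer_to_1 = ((X[te, :, t] - mu1) ** 2).sum(1) < ((X[te, :, t] - mu0) ** 2).sum(1)
    return (closer_to_1 == (labels[te] == 1)).mean()

acc = np.array([window_accuracy(t) for t in range(n_time)])
print(f"pre-onset accuracy ~{acc[:onset].mean():.2f}, "
      f"post-onset ~{acc[onset:].mean():.2f}")
```

Accuracy hovers at chance before the simulated onset and jumps once the category signal is present, which is how decoding latency is read off a time-resolved analysis of this kind.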

  10. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    Science.gov (United States)

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges of reference within a simple noun hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, they were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates errors in judgments of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  11. Object formation in visual working memory: Evidence from object-based attention.

    Science.gov (United States)

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, the target presented at the location of that object was perceived as occurring earlier than that presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. The Timing of Visual Object Categorization

    Science.gov (United States)

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480
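    The "fast simply means fast, it does not mean first" account above can be illustrated with a toy evidence-accumulation (drift-diffusion) simulation, in which the quality of perceptual evidence sets the drift rate and thereby the decision time. This is a minimal sketch, not any of the reviewed models; all parameter values and names are illustrative assumptions.

```python
import numpy as np

def accumulate_to_bound(drift, noise=1.0, bound=10.0, dt=0.01,
                        rng=None, max_steps=100_000):
    # One trial of evidence accumulation: integrate noisy evidence
    # until the decision bound is reached; return the crossing time.
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        if abs(x) >= bound:
            break
    return t

rng = np.random.default_rng(0)
# Higher-quality perceptual evidence = higher drift rate.
fast = np.mean([accumulate_to_bound(drift=8.0, rng=rng) for _ in range(50)])
slow = np.mean([accumulate_to_bound(drift=2.0, rng=rng) for _ in range(50)])
```

Under this view, a "fast" categorization level is simply one whose diagnostic features supply stronger evidence, not one that occupies an earlier processing stage.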

  13. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Directory of Open Access Journals (Sweden)

    Amir Homayoun Javadi

    Full Text Available Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces, respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  14. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Science.gov (United States)

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  15. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    Science.gov (United States)

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  16. Real-world visual statistics and infants' first-learned object names.

    Science.gov (United States)

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8.5- to 10.5-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  17. Cross-Cultural Differences in Children's Beliefs about the Objectivity of Social Categories

    Science.gov (United States)

    Diesendruck, Gil; Goldfein-Elbaz, Rebecca; Rhodes, Marjorie; Gelman, Susan; Neumark, Noam

    2013-01-01

    The present study compared 5- and 10-year-old North American and Israeli children's beliefs about the objectivity of different categories (n = 109). Children saw picture triads composed of two exemplars of the same category (e.g., two women) and an exemplar of a contrasting category (e.g., a man). Children were asked whether it would be acceptable…

  18. Social Categories are Natural Kinds, not Objective Types (and Why it Matters Politically)

    Directory of Open Access Journals (Sweden)

    Bach Theodore

    2016-08-01

    Full Text Available There is growing support for the view that social categories like men and women refer to “objective types.” An objective type is a similarity class for which the axis of similarity is an objective rather than nominal or fictional property. Such types are independently real and causally relevant, yet their unity does not derive from an essential property. Given this tandem of features, it is not surprising that empirically-minded researchers interested in fighting oppression and marginalization have found this ontological category so attractive: objective types have the ontological credentials to secure the reality (and thus political representation) of social categories, and yet they do not impose exclusionary essences that also naturalize and legitimize social inequalities. This essay argues that, from the perspective of these political goals of fighting oppression and marginalization, the category of objective types is in fact a Trojan horse; it looks like a gift, but it ends up creating trouble. I argue that objective type classifications often lack empirical adequacy, and as a result they lack political adequacy. I also provide, in reference to the normative goals described above, several arguments for preferring a social ontology of natural kinds with historical essences.

  19. Cultural differences in visual object recognition in 3-year-old children

    Science.gov (United States)

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and likelihood of recognition increased for U.S., but not Japanese, children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing, findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  20. Cultural differences in visual object recognition in 3-year-old children.

    Science.gov (United States)

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing, findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A review of functional imaging studies on category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2007-01-01

    such as familiarity and visual complexity. Of the most consistent activations found, none appear to be selective for natural objects or artefacts. The findings reviewed are compatible with theories of category-specificity that assume a widely distributed conceptual system not organized by category....

  2. The visual extent of an object: suppose we know the object locations

    NARCIS (Netherlands)

    Uijlings, J.R.R.; Smeulders, A.W.M.; Scha, R.J.H.

    2012-01-01

    The visual extent of an object reaches beyond the object itself. This is a long standing fact in psychology and is reflected in image retrieval techniques which aggregate statistics from the whole image in order to identify the object within. However, it is unclear to what degree and how the visual

  3. Efficient Cross-Modal Transfer of Shape Information in Visual and Haptic Object Categorization

    Directory of Open Access Journals (Sweden)

    Nina Gaissert

    2011-10-01

    Full Text Available Categorization has traditionally been studied in the visual domain with only a few studies focusing on the abilities of the haptic system in object categorization. During the first years of development, however, touch and vision are closely coupled in the exploratory procedures used by the infant to gather information about objects. Here, we investigate how well shape information can be transferred between those two modalities in a categorization task. Our stimuli consisted of amoeba-like objects that were parametrically morphed in well-defined steps. Participants explored the objects in a categorization task either visually or haptically. Interestingly, both modalities led to similar categorization behavior suggesting that similar shape processing might occur in vision and haptics. Next, participants received training on specific categories in one of the two modalities. As would be expected, training increased performance in the trained modality; however, we also found significant transfer of training to the other, untrained modality after only relatively few training trials. Taken together, our results demonstrate that complex shape information can be transferred efficiently across the two modalities, which speaks in favor of multisensory, higher-level representations of shape.

  4. 2-Cosemisimplicial objects in a 2-category, permutohedra and deformations of pseudofunctors

    OpenAIRE

    Elgueta, Josep

    2004-01-01

    In this paper we take up again the deformation theory for $K$-linear pseudofunctors initiated in a previous work (Adv. Math. 182 (2004) 204-277). We start by introducing a notion of a 2-cosemisimplicial object in an arbitrary 2-category and analyzing the corresponding coherence question, where the permutohedra make their appearance. We then describe a general method to obtain cochain complexes of $K$-modules from (enhanced) 2-cosemisimplicial objects in the 2-category ${\bf Cat}_K$ of small $K$...

  5. Effects of Grammatical Categories on Children's Visual Language Processing: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III

    2006-01-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…

  6. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    Science.gov (United States)

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
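    The objective described in this record combines three ingredients: reconstruction error, sparse codes, and incoherence between dictionaries. The toy numpy sketch below shows that combination; it is not the authors' actual formulation, and all variable names, dimensions, and penalty weights are illustrative assumptions.

```python
import numpy as np

def incoherent_dictionary_loss(X, D_cat, D_shared, A_cat, A_shared,
                               lam_sparse=0.1, lam_incoh=1.0):
    # Reconstruction error: data approximated by category-specific
    # atoms plus shared atoms.
    recon = X - D_cat @ A_cat - D_shared @ A_shared
    loss = 0.5 * np.sum(recon ** 2)
    # Sparsity of both coding matrices (L1 penalty).
    loss += lam_sparse * (np.abs(A_cat).sum() + np.abs(A_shared).sum())
    # Incoherence penalty ||D_cat^T D_shared||_F^2 pushing the
    # category-specific and shared dictionaries apart.
    loss += lam_incoh * np.sum((D_cat.T @ D_shared) ** 2)
    return loss

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))        # 20-dim features, 50 samples
D_cat = rng.normal(size=(20, 8))     # category-specific dictionary
D_shared = rng.normal(size=(20, 8))  # shared dictionary
A_cat = rng.normal(size=(8, 50))     # codes (dense here for brevity)
A_shared = rng.normal(size=(8, 50))
loss = incoherent_dictionary_loss(X, D_cat, D_shared, A_cat, A_shared)
```

In a full learner, this loss would be minimized alternately over the codes and the dictionaries; the self-incoherence constraint mentioned in the abstract would add an analogous penalty within each dictionary.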

  7. Visual agnosia and focal brain injury.

    Science.gov (United States)

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  8. The Representation of Object Viewpoint in Human Visual Cortex

    OpenAIRE

    Andresen, David R.; Vinberg, Joakim; Grill-Spector, Kalanit

    2008-01-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°–180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradi...

  9. Manifold-Based Visual Object Counting.

    Science.gov (United States)

    Wang, Yi; Zou, Yuexian; Wang, Wenwu

    2018-07-01

    Visual object counting (VOC) is an emerging area in computer vision which aims to estimate the number of objects of interest in a given image or video. Recently, object-density-based estimation methods have been shown to be promising for object counting as well as rough instance localization. However, the performance of these methods tends to degrade when dealing with new objects and scenes. To address this limitation, we propose a manifold-based method for visual object counting (M-VOC), based on the manifold assumption that similar image patches share similar object densities. Firstly, the local geometry of a given image patch is represented linearly by its neighbors using a predefined patch training set, and the object density of this given image patch is reconstructed by preserving the local geometry using locally linear embedding. To improve the characterization of local geometry, additional constraints such as sparsity and non-negativity are also considered via regularization, nonlinear mapping, and the kernel trick. Compared with the state-of-the-art VOC methods, our proposed M-VOC methods achieve competitive performance on seven benchmark datasets. Experiments verify that the proposed M-VOC methods have several favorable properties, such as robustness to variation in training-set size and image resolution, as often encountered in real-world VOC applications.
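    The manifold assumption in this record can be sketched with a locally-linear-embedding-style reconstruction: express a query patch as a weighted combination of its nearest training patches, then transfer the same weights to those patches' object densities. This is a simplified reading with illustrative names; the paper's full method adds constraints such as sparsity and non-negativity.

```python
import numpy as np

def estimate_density_lle(patch, train_patches, train_densities, k=3, reg=1e-3):
    # Find the k nearest training patches to the query patch.
    d = np.linalg.norm(train_patches - patch, axis=1)
    nn = np.argsort(d)[:k]
    # Solve for locally-linear-embedding reconstruction weights.
    Z = train_patches[nn] - patch           # centred neighbours
    G = Z @ Z.T                             # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(k)   # regularise for stability
    w = np.linalg.solve(G, np.ones(k))
    w = w / w.sum()                         # LLE weights sum to 1
    # Transfer the same weights to the neighbours' densities.
    return float(w @ train_densities[nn])

rng = np.random.default_rng(1)
train_patches = rng.normal(size=(30, 16))     # vectorised training patches
train_densities = rng.uniform(0, 5, size=30)  # per-patch counts (stand-in for density maps)
patch = rng.normal(size=16)
estimate = estimate_density_lle(patch, train_patches, train_densities)
```

Because the weights sum to one, a query surrounded by training patches of identical density inherits exactly that density, which is the behavior the manifold assumption requires.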

  10. Aerial Object Following Using Visual Fuzzy Servoing

    OpenAIRE

    Olivares Méndez, Miguel Ángel; Mondragon Bernal, Ivan Fernando; Campoy Cervera, Pascual; Mejias Alvarez, Luis; Martínez Luna, Carol Viviana

    2011-01-01

    This article presents a visual servoing system to follow a 3D moving object by a Micro Unmanned Aerial Vehicle (MUAV). The presented control strategy is based only on the visual information given by an adaptive tracking method based on the color information. A visual fuzzy system has been developed for servoing the camera situated on a rotary-wing MUAV, that also considers its own dynamics. This system is focused on continuously following an aerial moving target object, maintai...

  11. An interactive visualization tool for mobile objects

    Science.gov (United States)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. 
Furthermore, the characteristics of visualized trajectories are exported to be utilized for data
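    The record above names three trajectory similarity measures (locational, directional, and geometric) without defining them. The sketch below is one plausible reading for equal-length trajectories, offered as a hedged illustration rather than the toolkit's actual formulas.

```python
import numpy as np

def locational_similarity(a, b):
    # Mean point-to-point distance between two equal-length
    # trajectories (lower = more similar).
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def directional_similarity(a, b):
    # Mean cosine similarity between successive movement vectors
    # (1 = identical headings throughout).
    da, db = np.diff(a, axis=0), np.diff(b, axis=0)
    num = np.sum(da * db, axis=1)
    den = np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1)
    return float(np.mean(num / np.maximum(den, 1e-12)))

def geometric_similarity(a, b):
    # Locational similarity after removing absolute position,
    # i.e. comparing trajectory shapes only.
    return locational_similarity(a - a.mean(axis=0), b - b.mean(axis=0))

a = np.array([[0., 0.], [1., 0.], [2., 1.]])
b = a + np.array([5., 0.])  # same path, translated east
```

For the translated pair above, locational similarity reports the 5-unit offset while directional and geometric similarity judge the paths identical, which is why summary paths can be grouped along more than one axis.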

  12. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  13. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
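    Predictive encoding models of the kind this record describes (mapping deep-network features to cortical responses) are commonly fit voxel-wise with ridge regression. The sketch below uses synthetic data and illustrative names; it is a minimal stand-in, not the study's implementation, which used a deep residual network and natural-movie responses.

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    # Closed-form ridge regression: features (n_samples x n_features)
    # -> responses (n_samples x n_voxels); returns the weight matrix.
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + alpha * np.eye(n_feat),
                           features.T @ responses)

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 10))   # e.g. one network layer's activations
true_w = rng.normal(size=(10, 4))       # hidden mapping to 4 "voxels"
responses = features @ true_w           # noiseless synthetic responses
W = fit_encoding_model(features, responses, alpha=1e-6)
predicted = features @ W
```

Once fit, such a model can be run on features of arbitrary new stimuli (the record's 64,000 objects) to map predicted cortical response patterns without new measurements.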

  14. Visual attention is required for multiple object tracking.

    Science.gov (United States)

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  16. Recurrent processing during object recognition

    Directory of Open Access Journals (Sweden)

    Randall C. O'Reilly

    2013-04-01

    How does the brain learn to recognize objects visually, and perform this difficult feat robustly in the face of many sources of ambiguity and variability? We present a computational model based on the biology of the relevant visual pathways that learns to reliably recognize 100 different object categories in the face of naturally-occurring variability in location, rotation, size, and lighting. The model exhibits robustness to highly ambiguous, partially occluded inputs. Both the unified, biologically plausible learning mechanism and the robustness to occlusion derive from the role that recurrent connectivity and recurrent processing mechanisms play in the model. Furthermore, this interaction of recurrent connectivity and learning predicts that high-level visual representations should be shaped by error signals from nearby, associated brain areas over the course of visual learning. Consistent with this prediction, we show how semantic knowledge about object categories changes the nature of their learned visual representations, as well as how this representational shift supports the mapping between perceptual and conceptual knowledge. Altogether, these findings support the potential importance of ongoing recurrent processing throughout the brain's visual system and suggest ways in which object recognition can be understood in terms of interactions within and between processes over time.

  17. Visual Field Preferences of Object Analysis for Grasping with One Hand

    Directory of Open Access Journals (Sweden)

    Ada Le

    2014-10-01

    When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a, 2013b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2013). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields.

  18. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    Science.gov (United States)

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Investigating category- and shape-selective neural processing in ventral and dorsal visual stream under interocular suppression.

    Science.gov (United States)

    Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido

    2015-01-01

    Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial if evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging-blood-oxygen level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression. © 2014 Wiley Periodicals, Inc.
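    The multivariate pattern analysis referred to above can be illustrated with a minimal nearest-centroid decoder: condition labels are "decoded" from held-out response patterns by their distance to class centroids computed from training patterns, and above-chance decoding indicates that the region carries category or shape information. A hypothetical toy sketch with synthetic data, not the authors' pipeline:

```python
# Minimal nearest-centroid pattern decoder (toy MVPA sketch, synthetic data).

def centroid(patterns):
    """Element-wise mean of a list of voxel patterns."""
    n = len(patterns)
    return [sum(vals) / n for vals in zip(*patterns)]

def decode(pattern, centroids):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    def dist(c):
        return sum((p - q) ** 2 for p, q in zip(pattern, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Synthetic "voxel" patterns for two stimulus classes (labels are illustrative).
train = {
    "face": [[1.0, 0.1, 0.2], [0.9, 0.2, 0.1]],
    "tool": [[0.1, 1.0, 0.9], [0.2, 0.8, 1.0]],
}
centroids = {label: centroid(p) for label, p in train.items()}
```

    In practice the decoder would be cross-validated across scanning runs and compared against chance; "enhanced decoding" in the abstract corresponds to higher held-out accuracy for one stimulus subset than another.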

  20. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and safe manipulation that preserves the physical integrity of the object.
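    Of the techniques mentioned, dynamic time warping is the most self-contained: it aligns two sequences of different lengths by minimizing cumulative pointwise cost over monotonic warping paths, which is what makes contour signatures comparable as the contour stretches and shrinks. A generic textbook implementation (not the authors' code), applied here to 1-D sequences:

```python
# Classic dynamic time warping distance between two 1-D sequences,
# e.g. contour signatures of different lengths.

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

    For example, `dtw([1, 2, 3], [1, 2, 2, 3])` is 0 because the warping path may repeat the 2, whereas plain pointwise distance is undefined for unequal lengths.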

  1. Storage and binding of object features in visual working memory

    OpenAIRE

    Bays, Paul M; Wu, Emma Y; Husain, Masud

    2010-01-01

    An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory.

  2. Orienting attention to objects in visual short-term memory

    NARCIS (Netherlands)

    Dell'Acqua, Roberto; Sessa, Paola; Toffanin, Paolo; Luria, Roy; Jolicoeur, Pierre

    We measured electroencephalographic activity during visual search of a target object among objects available to perception or among objects held in visual short-term memory (VSTM). For perceptual search, a single shape was shown first (pre-cue) followed by a search-array and the task was to decide

  3. Category specific spatial dissociations of parallel processes underlying visual naming.

    Science.gov (United States)

    Conner, Christopher R; Chen, Gang; Pieters, Thomas A; Tandon, Nitin

    2014-10-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during the retrieval of distinct lexical categories. This approach was designed to overcome the issues of sparse sampling and individual variability inherent to invasive electrophysiology. Both noun and verb generation evoked overlapping, yet distinct nonhierarchical processes favoring ventral and dorsal visual streams, respectively. Notable differences in activity patterns were noted in Broca's area and superior lateral temporo-occipital regions (verb > noun) and in parahippocampal and fusiform cortices (noun > verb). Comparisons with functional magnetic resonance imaging (fMRI) results yielded a strong correlation of blood oxygen level-dependent signal and gamma power and an independent estimate of group size needed for fMRI studies of cognition. Our findings imply parallel, lexical category-specific processes and reconcile discrepancies between lesional and functional imaging studies. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    Directory of Open Access Journals (Sweden)

    Federica Bianca Rosselli

    2015-03-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  5. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever-increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is, independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.
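    The three building blocks can be made concrete with a toy example: given two data series, juxtaposition presents them separately side by side, superposition places them in one shared space, and explicit encoding computes the relationship itself (here, the difference) and shows that instead. An illustrative sketch, not taken from the paper:

```python
# Toy illustration of the three comparison building blocks for two data series.

a = [3, 5, 2, 8]
b = [4, 5, 1, 6]

# Juxtaposition: present the two objects separately, side by side.
juxtaposed = (a, b)

# Superposition: place both objects in a single shared space
# (here, one sequence of paired values).
superposed = list(zip(a, b))

# Explicit encoding: compute the relationship directly and display that.
difference = [x - y for x, y in zip(a, b)]
```

    A real design would map these structures to visual channels (separate panels, overlaid marks, or a derived difference chart), and the paper's point is that compound designs combine these same primitives.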

  7. Storage of features, conjunctions and objects in visual working memory.

    Science.gov (United States)

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
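    Capacity estimates of this "3-4 items" kind are conventionally derived from change-detection hit and false-alarm rates via Cowan's K, K = N × (hit rate − false-alarm rate), where N is the set size. The formula is a standard estimator in this literature rather than a quotation from the abstract, and the numbers below are illustrative:

```python
def cowans_k(set_size, hit_rate, false_alarm_rate):
    """Estimate visual working memory capacity from change-detection
    performance (Cowan's K). Assumes the observer stores K of the N
    items perfectly and guesses on the rest."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative numbers: with 8 items, 70% hits and 30% false alarms, K is about 3.2.
k = cowans_k(8, 0.70, 0.30)
```

    Plotting K against set size typically shows a plateau near 3-4 integrated objects, which is the pattern the abstract describes.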

  8. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  10. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    Science.gov (United States)

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual

  11. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  12. Visual Priming of Inverted and Rotated Objects

    Science.gov (United States)

    Knowlton, Barbara J.; McAuliffe, Sean P.; Coelho, Chase J.; Hummel, John E.

    2009-01-01

    Object images are identified more efficiently after prior exposure. Here, the authors investigated shape representations supporting object priming. The dependent measure in all experiments was the minimum exposure duration required to correctly identify an object image in a rapid serial visual presentation stream. Priming was defined as the change…

  13. Multimedia Visualizer: An Animated, Object-Based OPAC.

    Science.gov (United States)

    Lee, Newton S.

    1991-01-01

    Describes the Multimedia Visualizer, an online public access catalog (OPAC) that uses animated visualizations to make it more user friendly. Pictures of the system are shown that illustrate the interactive objects that patrons can access, including card catalog drawers, librarian desks, and bookshelves; and access to multimedia items is described.…

  14. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    Science.gov (United States)

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. The representation of object viewpoint in human visual cortex.

    Science.gov (United States)

    Andresen, David R; Vinberg, Joakim; Grill-Spector, Kalanit

    2009-04-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0-180 degrees), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradigms, object-selective regions recovered from adaptation when a rotated view of an object was shown after adaptation to a specific view of that object, suggesting that representations are sensitive to object rotation. However, we found evidence for differential representations across categories and ventral stream regions. Rotation cross-adaptation was larger for animals than vehicles, suggesting higher sensitivity to vehicle than animal rotation, and was largest in the left fusiform/occipito-temporal sulcus (pFUS/OTS), suggesting that this region has low sensitivity to rotation. Moreover, right pFUS/OTS and FFA responded more strongly to front than back views of animals (without adaptation) and rotation cross-adaptation depended both on the level of rotation and the adapting view. This result suggests a prevalence of neurons that prefer frontal views of animals in fusiform regions. Using a computational model of view-tuned neurons, we demonstrate that differential neural view tuning widths and relative distributions of neural-tuned populations in fMRI voxels can explain the fMRI results. Overall, our findings underscore the utility of parametric approaches for studying the neural basis of object invariance and suggest that there is no complete invariance to object view in the human ventral stream.
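    The population account sketched in the abstract (view-tuned units whose tuning widths and preferred-view distributions shape voxel-level adaptation) can be illustrated with Gaussian tuning curves. A toy sketch under assumed parameters (45-degree tuning width, adaptation at 0 degrees), not the authors' simulation:

```python
import math

def view_response(theta, preferred, width):
    """Gaussian tuning of a view-selective unit to rotation angle (degrees)."""
    return math.exp(-((theta - preferred) ** 2) / (2 * width ** 2))

def cross_adaptation(rotation, preferred=0.0, width=45.0):
    """Overlap between the unit's responses to the adapted view (0 degrees)
    and a rotated test view; more overlap means more adaptation transfers."""
    return view_response(0.0, preferred, width) * view_response(rotation, preferred, width)

# Adaptation transfer falls off monotonically with rotation for this unit,
# mirroring the recovery-from-adaptation effect described in the abstract.
transfer = [cross_adaptation(r) for r in (0, 90, 180)]
```

    Broader tuning widths flatten this falloff, and skewing the distribution of preferred views toward the front predicts the stronger responses to frontal animal views; both knobs are what the authors vary in their model.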

  16. Functional dissociation between action and perception of object shape in developmental visual object agnosia.

    Science.gov (United States)

    Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon

    2016-03-01

    According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Looking at anything that is green when hearing "frog": how object surface colour and stored object colour knowledge influence language-mediated overt attention.

    Science.gov (United States)

    Huettig, Falk; Altmann, Gerry T M

    2011-01-01

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of a frog), (b) objects associated with a diagnostic colour but presented in an appropriate but atypical colour (e.g., a colour photograph of a yellow frog), and (c) objects not associated with a diagnostic colour but presented in the diagnostic colour of the target concept (e.g., a green blouse; blouses are not typically green). We observed that colour-mediated shifts in overt attention are primarily due to the perceived surface attributes of the visual objects rather than stored knowledge about the typical colour of the object. In addition our data reveal that conceptual category information is the primary determinant of overt attention if both conceptual category and surface colour competitors are copresent in the visual environment.

  18. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    Science.gov (United States)

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…
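
In color change-detection tasks of the kind combined here, VWM capacity is commonly summarized with Cowan's K, K = set size × (hit rate − false-alarm rate). A minimal sketch with illustrative numbers (not data from the study):

```python
def cowans_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K capacity estimate for single-probe change detection."""
    return set_size * (hit_rate - false_alarm_rate)

# Illustrative: 85% hits and 10% false alarms at set size 4 suggest
# roughly three objects' worth of bound color-location information in VWM.
print(cowans_k(0.85, 0.10, 4))  # ≈ 3.0
```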

  19. How learning might strengthen existing visual object representations in human object-selective cortex.

    Science.gov (United States)

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

    Visual object perception is an important function in primates that can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, with training preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. The difference in subjective and objective complexity in the visual short-term memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Sørensen, Thomas Alrik

    Several studies discuss the influence of complexity on the visual short-term memory; some have demonstrated that short-term memory is surprisingly stable regardless of content (e.g. Luck & Vogel, 1997) where others have shown that memory can be influenced by the complexity of stimulus (e.g. Alvarez & Cavanagh, 2004). But the term complexity is often not clearly defined. Sørensen (2008; see also Dall, Katsumi, & Sørensen, 2016) suggested that complexity can be related to two different types; objective and subjective complexity. This distinction is supported by a number of studies on the influence… characters. On the contrary expertise or word frequency may reflect what could be termed subjective complexity, as this relates directly to the individual mental categories established. This study will be able to uncover more details on how we should define complexity of objects to be encoded into short-term…

  1. Category Selectivity of Human Visual Cortex in Perception of Rubin Face–Vase Illusion

    Directory of Open Access Journals (Sweden)

    Xiaogang Wang

    2017-09-01

    When viewing the Rubin face–vase illusion, our conscious perception spontaneously alternates between the face and the vase; this illusion has been widely used to explore bistable perception. Previous functional magnetic resonance imaging (fMRI) studies have examined the neural mechanisms underlying bistable perception through univariate and multivariate pattern analyses; however, no studies have investigated the issue of category selectivity. Here, we used fMRI to investigate the neural mechanisms underlying the Rubin face–vase illusion using both univariate amplitude and multivariate pattern analyses. The results from the amplitude analysis suggested that the activity in the fusiform face area was likely related to subjective face perception. Furthermore, the pattern analysis results showed that the early visual cortex (EVC) and the face-selective cortex could discriminate the activity patterns of the face and vase perceptions. However, further analysis of the activity patterns showed that only the face-selective cortex contained face information. These findings indicated that although the EVC and face-selective cortex activities could discriminate the visual information, only the activity and activity pattern in the face-selective areas contained the category information of face perception in the Rubin face–vase illusion.

  2. Multi-Label Object Categorization Using Histograms of Global Relations

    DEFF Research Database (Denmark)

    Mustafa, Wail; Xiong, Hanchen; Kraft, Dirk

    2015-01-01

    In this paper, we present an object categorization system capable of assigning multiple and related categories for novel objects using multi-label learning. In this system, objects are described using global geometric relations of 3D features. We propose using the Joint SVM method for learning… The experiments are carried out on a dataset of 100 objects belonging to 13 visual and action-related categories. The results indicate that multi-label methods are able to identify the relation between the dependent categories and hence perform categorization accordingly. It is also found that extracting…

  3. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  4. Visual working memory for global, object, and part-based information.

    Science.gov (United States)

    Patterson, Michael D; Bly, Benjamin Martin; Porcelli, Anthony J; Rypma, Bart

    2007-06-01

    We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

  5. Nicotine deprivation elevates neural representation of smoking-related cues in object-sensitive visual cortex: a proof of concept study.

    Science.gov (United States)

    Havermans, Anne; van Schayck, Onno C P; Vuurman, Eric F P M; Riedel, Wim J; van den Hurk, Job

    2017-08-01

    In the current study, we use functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to investigate whether tobacco addiction biases basic visual processing in favour of smoking-related images. We hypothesize that the neural representation of smoking-related stimuli in the lateral occipital complex (LOC) is elevated after a period of nicotine deprivation compared to a satiated state, but that this is not the case for object categories unrelated to smoking. Current smokers (≥10 cigarettes a day) underwent two fMRI scanning sessions: one after 10 h of nicotine abstinence and the other one after smoking ad libitum. Regional blood oxygenated level-dependent (BOLD) response was measured while participants were presented with 24 blocks of 8 colour-matched pictures of cigarettes, pencils or chairs. The functional data of 10 participants were analysed through a pattern classification approach. In bilateral LOC clusters, the classifier was able to discriminate between patterns of activity elicited by visually similar smoking-related (cigarettes) and neutral objects (pencils) above empirically estimated chance levels only during deprivation (mean = 61.0%, chance (permutations) = 50.0%, p = .01) but not during satiation (mean = 53.5%, chance (permutations) = 49.9%, ns.). For all other stimulus contrasts, there was no difference in discriminability between the deprived and satiated conditions. The discriminability between smoking and non-smoking visual objects was elevated in object-selective brain region LOC after a period of nicotine abstinence. This indicates that attention bias likely affects basic visual object processing.
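
The comparison between classifier accuracy and an empirically estimated chance level rests on a permutation scheme: the decoder is re-run many times with shuffled condition labels, and the observed accuracy is compared against the resulting distribution. A toy sketch of that logic, with a simple leave-one-out nearest-centroid classifier standing in for the study's MVPA pipeline (all data and names illustrative):

```python
import random

def loo_nearest_centroid(values, labels):
    """Leave-one-out nearest-centroid accuracy on 1-D 'patterns'
    (a stand-in for multi-voxel activity patterns)."""
    correct = 0
    for i, (v, lab) in enumerate(zip(values, labels)):
        rest_v = values[:i] + values[i + 1:]
        rest_l = labels[:i] + labels[i + 1:]
        cents = {c: sum(x for x, l in zip(rest_v, rest_l) if l == c)
                    / rest_l.count(c)
                 for c in set(rest_l)}
        pred = min(cents, key=lambda c: abs(v - cents[c]))
        correct += pred == lab
    return correct / len(values)

def permutation_chance(values, labels, n_perm=200, seed=0):
    """Empirical chance level: mean accuracy over label shufflings."""
    rng = random.Random(seed)
    accs = []
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        accs.append(loo_nearest_centroid(values, shuffled))
    return sum(accs) / n_perm

vals = [0.1, 0.2, 0.3, 5.1, 5.2, 5.3]          # well-separated conditions
labs = ['cig', 'cig', 'cig', 'pen', 'pen', 'pen']
print(loo_nearest_centroid(vals, labs))         # 1.0
print(permutation_chance(vals, labs))           # well below the observed accuracy
```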

  6. Visual Object Pattern Separation Varies in Older Adults

    Science.gov (United States)

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  7. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Visual Memory for Objects Following Foveal Vision Loss

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan

    2015-01-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…

  9. Task context impacts visual object processing differentially across the cortex

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J.; Baker, Chris I.

    2014-01-01

    Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations. PMID:24567402
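
The key contrast in this design, decoding within task versus across tasks, amounts to training a classifier on responses from one task and testing it on responses to the same stimuli under another task. A toy illustration with a centroid classifier standing in for multivoxel decoding (all numbers hypothetical):

```python
def train_centroids(patterns, labels):
    """Mean 'pattern' (here a single number) per object label."""
    cents = {}
    for c in set(labels):
        pts = [p for p, l in zip(patterns, labels) if l == c]
        cents[c] = sum(pts) / len(pts)
    return cents

def accuracy(centroids, patterns, labels):
    """Nearest-centroid classification accuracy."""
    preds = [min(centroids, key=lambda c: abs(p - centroids[c]))
             for p in patterns]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Responses to the same two objects under task A and task B (made up).
task_a = ([1.0, 1.2, 3.0, 3.2], ['face', 'face', 'house', 'house'])
task_b = ([1.1, 1.3, 3.1, 3.3], ['face', 'face', 'house', 'house'])

cents = train_centroids(*task_a)
print(accuracy(cents, *task_a))  # within-task decoding: 1.0
print(accuracy(cents, *task_b))  # across-task decoding: 1.0 here; a drop
                                 # would indicate task-dependent representations
```

In the study's terms, regions like early visual cortex behave like this toy case (no cross-task drop), whereas ventral temporal and prefrontal cortex show reduced across-task decoding.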

  10. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    Science.gov (United States)

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
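
"Unique contribution after controlling for general cognitive measures" is the logic of partial correlation: residualize both variables on the covariates, then correlate the residuals. A single-covariate sketch (the study controlled five cognitive measures plus age and gender; the data below are made up):

```python
def _fit(x, y):
    """Least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    return b, my - b * mx

def _residuals(x, y):
    b, a = _fit(x, y)
    return [c - (a + b * v) for v, c in zip(x, y)]

def _pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (c - my) for a, c in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((c - my) ** 2 for c in y)) ** 0.5
    return num / den

def partial_corr(x, y, covariate):
    """Correlation between x and y after regressing out one covariate."""
    return _pearson(_residuals(covariate, x), _residuals(covariate, y))

# Illustrative: form perception and computation share variance
# beyond what a general-ability proxy explains.
general = [1, 2, 3, 4]
form    = [1.5, 1.5, 3.5, 3.5]
comp    = [1.5, 1.5, 3.5, 3.5]
print(partial_corr(form, comp, general))  # ≈ 1.0
```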

  11. Development of Object Permanence in Visually Impaired Infants.

    Science.gov (United States)

    Rogers, S. J.; Puchalski, C. B.

    1988-01-01

    Development of object permanence skills was examined longitudinally in 20 visually impaired infants (ages 4-25 months). Order of skill acquisition and span of time required to master skills paralleled that of sighted infants, but the visually impaired subjects were 8-12 months older than sighted counterparts when similar skills were acquired.…

  12. The Visual Object Tracking VOT2015 Challenge Results

    KAUST Repository

    Kristan, Matej; Matas, Jiri; Leonardis, Ale; Felsberg, Michael; Cehovin, Luka; Fernandez, Gustavo; Vojir, Toma; Hager, Gustav; Nebehay, Georg; Pflugfelder, Roman; Gupta, Abhinav; Bibi, Adel Aamer; Lukezic, Alan; Garcia-Martin, Alvaro; Saffari, Amir; Petrosino, Alfredo; Montero, Andres Solıs; Varfolomieiev, Anton; Baskurt, Atilla; Zhao, Baojun; Ghanem, Bernard; Martinez, Brais; Lee, ByeongJu; Han, Bohyung; Wang, Chaohui; Garcia, Christophe; Zhang, Chunyuan; Schmid, Cordelia; Tao, Dacheng; Kim, Daijin; Huang, Dafei; Prokhorov, Danil; Du, Dawei; Yeung, Dit-Yan; Ribeiro, Eraldo; Khan, Fahad Shahbaz; Porikli, Fatih; Bunyak, Filiz; Zhu, Gao; Seetharaman, Guna; Kieritz, Hilke; Yau, Hing Tuen; Li, Hongdong; Qi, Honggang; Bischof, Horst; Possegger, Horst; Lee, Hyemin; Nam, Hyeonseob; Bogun, Ivan; Jeong, Jae-chan; Cho, Jae-il; Lee, Jae-Yeong; Zhu, Jianke; Shi, Jianping; Li, Jiatong; Jia, Jiaya; Feng, Jiayi; Gao, Jin; Choi, Jin Young; Kim, Ji-Wan; Lang, Jochen; Martinez, Jose M.; Choi, Jongwon; Xing, Junliang; Xue, Kai; Palaniappan, Kannappan; Lebeda, Karel; Alahari, Karteek; Gao, Ke; Yun, Kimin; Wong, Kin Hong; Luo, Lei; Ma, Liang; Ke, Lipeng; Wen, Longyin; Bertinetto, Luca; Pootschi, Mahdieh; Maresca, Mario; Danelljan, Martin; Wen, Mei; Zhang, Mengdan; Arens, Michael; Valstar, Michel; Tang, Ming; Chang, Ming-Ching; Khan, Muhammad Haris; Fan, Nana; Wang, Naiyan; Miksik, Ondrej; Torr, Philip H S; Wang, Qiang; Martin-Nieto, Rafael; Pelapur, Rengarajan; Bowden, Richard; Laganiere, Robert; Moujtahid, Salma; Hare, Sam; Hadfield, Simon; Lyu, Siwei; Li, Siyi; Zhu, Song-Chun; Becker, Stefan; Duffner, Stefan; Hicks, Stephen L; Golodetz, Stuart; Choi, Sunglok; Wu, Tianfu; Mauthner, Thomas; Pridmore, Tony; Hu, Weiming; Hubner, Wolfgang; Wang, Xiaomeng; Li, Xin; Shi, Xinchu; Zhao, Xu; Mei, Xue; Shizeng, Yao; Hua, Yang; Li, Yang; Lu, Yang; Li, Yuezun; Chen, Zhaoyun; Huang, Zehua; Chen, Zhe; Zhang, Zhe; He, Zhenyu; Hong, Zhibin

    2015-01-01

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) extensions of the VOT2014 evaluation methodology by introducing a new performance measure. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website.
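
Tracker accuracy in the VOT methodology is based on region overlap (intersection-over-union) between the predicted and annotated target regions. VOT2015 annotates rotated bounding boxes, but the computation is easiest to see for axis-aligned boxes, as in this sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (0, 0, 2, 2)))  # 1.0  (perfect prediction)
print(iou((0, 0, 2, 2), (1, 0, 2, 2)))  # 2/6 ≈ 0.333 (partial overlap)
print(iou((0, 0, 1, 1), (5, 5, 1, 1)))  # 0.0  (tracking failure)
```

Per-frame overlaps like these are averaged into the accuracy measure, while frames with zero overlap trigger failure counts in the robustness measure.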

  14. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between the top-down and bottom-up pathways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when an object is attended. A dual-coding object representation consisting of local and global codings is proposed: intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. Mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally passed to the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate the model.
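
The last step of the attending phase, pooling location-based saliency within each proto-object and selecting the winner, can be sketched as follows (the map values and object masks are made up for illustration):

```python
def most_salient_proto_object(saliency, proto_objects):
    """Pool location-based saliency within each proto-object (mean over
    its pixels) and return the label of the most salient one."""
    def mean_saliency(pixels):
        return sum(saliency[r][c] for r, c in pixels) / len(pixels)
    return max(proto_objects, key=lambda name: mean_saliency(proto_objects[name]))

# Toy 3x3 location-based saliency map and two proto-object pixel sets.
saliency = [[0.1, 0.2, 0.1],
            [0.2, 0.9, 0.8],
            [0.1, 0.7, 0.6]]
proto_objects = {
    'cup':  {(1, 1), (1, 2), (2, 1), (2, 2)},  # high-saliency region
    'wall': {(0, 0), (0, 1), (0, 2)},
}
print(most_salient_proto_object(saliency, proto_objects))  # cup
```

In the full model the winning proto-object would then be handed to the perceptual completion module to recover the complete object region.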

  16. The Visual Object Tracking VOT2016 Challenge Results

    KAUST Repository

    Kristan, Matej; Leonardis, Aleš; Matas, Jiří; Felsberg, Michael; Pflugfelder, Roman; Čehovin, Luka; Vojíř, Tomáš; Häger, Gustav; Lukežič, Alan; Fernández, Gustavo; Gupta, Abhinav; Petrosino, Alfredo; Memarmoghadam, Alireza; Garcia-Martin, Alvaro; Solís Montero, Andrés; Vedaldi, Andrea; Robinson, Andreas; Ma, Andy J.; Varfolomieiev, Anton; Alatan, Aydin; Erdem, Aykut; Ghanem, Bernard; Liu, Bin; Han, Bohyung; Martinez, Brais; Chang, Chang-Ming; Xu, Changsheng; Sun, Chong; Kim, Daijin; Chen, Dapeng; Du, Dawei; Mishra, Deepak; Yeung, Dit-Yan; Gundogdu, Erhan; Erdem, Erkut; Khan, Fahad; Porikli, Fatih; Zhao, Fei; Bunyak, Filiz; Battistone, Francesco; Zhu, Gao; Roffo, Giorgio; Subrahmanyam, Gorthi R. K. Sai; Bastos, Guilherme; Seetharaman, Guna; Medeiros, Henry; Li, Hongdong; Qi, Honggang; Bischof, Horst; Possegger, Horst; Lu, Huchuan; Lee, Hyemin; Nam, Hyeonseob; Chang, Hyung Jin; Drummond, Isabela; Valmadre, Jack; Jeong, Jae-chan; Cho, Jae-il; Lee, Jae-Yeong; Zhu, Jianke; Feng, Jiayi; Gao, Jin; Choi, Jin Young; Xiao, Jingjing; Kim, Ji-Wan; Jeong, Jiyeoup; Henriques, João F.; Lang, Jochen; Choi, Jongwon; Martinez, Jose M.; Xing, Junliang; Gao, Junyu; Palaniappan, Kannappan; Lebeda, Karel; Gao, Ke; Mikolajczyk, Krystian; Qin, Lei; Wang, Lijun; Wen, Longyin; Bertinetto, Luca; Rapuru, Madan Kumar; Poostchi, Mahdieh; Maresca, Mario; Danelljan, Martin; Mueller, Matthias; Zhang, Mengdan; Arens, Michael; Valstar, Michel; Tang, Ming; Baek, Mooyeol; Khan, Muhammad Haris; Wang, Naiyan; Fan, Nana; Al-Shakarji, Noor; Miksik, Ondrej; Akin, Osman; Moallem, Payman; Senna, Pedro; Torr, Philip H. S.; Yuen, Pong C.; Huang, Qingming; Martin-Nieto, Rafael; Pelapur, Rengarajan; Bowden, Richard; Laganière, Robert; Stolkin, Rustam; Walsh, Ryan; Krah, Sebastian B.; Li, Shengkun; Zhang, Shengping; Yao, Shizeng; Hadfield, Simon; Melzi, Simone; Lyu, Siwei; Li, Siyi; Becker, Stefan; Golodetz, Stuart; Kakanuru, Sumithra; Choi, Sunglok; Hu, Tao; Mauthner, Thomas; Zhang, Tianzhu; Pridmore, Tony; Santopietro, Vincenzo; Hu, Weiming; Li, Wenbo; Hübner, Wolfgang; Lan, Xiangyuan; Wang, Xiaomeng; Li, Xin; Li, Yang; Demiris, Yiannis; Wang, Yifan; Qi, Yuankai; Yuan, Zejian; Cai, Zexiong; Xu, Zhan; He, Zhenyu; Chi, Zhizhen

    2016-01-01

    The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers having been published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground-truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit, as well as the results are publicly available at the challenge website (http://votchallenge.net).

  17. Use of subjective and objective criteria to categorise visual disability.

    Science.gov (United States)

    Kajla, Garima; Rohatgi, Jolly; Dhaliwal, Upreet

    2014-04-01

    Visual disability is categorised using objective criteria; subjective measures are not considered. The aim was to use subjective criteria along with objective ones to categorise visual disability. The setting was the ophthalmology out-patient department of a teaching hospital; observational study. Consecutive persons aged >25 years with vision disability were graded from group zero (normal range of vision) to group X (no perception of light, bilaterally). Snellen's vision; binocular contrast sensitivity (Pelli-Robson chart); automated binocular visual field (Humphrey; Esterman test); and vision-related quality of life (Indian Visual Function Questionnaire-33; IND-VFQ33) were recorded. Analyses used SPSS version 17; the Kruskal-Wallis test was used to compare contrast sensitivity and visual fields across groups, and the Mann-Whitney U test for pair-wise comparison (Bonferroni adjustment). Visual fields were comparable for differing disability grades except when disability was severe. Global IND-VFQ33 scores differed across disability grades but were comparable for groups III (78.51 ± 6.86) and IV (82.64 ± 5.80), and groups IV and V (77.23 ± 3.22); these were merged to generate group 345; similarly, global scores were comparable for adjacent groups V and VI (72.53 ± 6.77), VI and VII (74.46 ± 4.32), and VII and VIII (69.12 ± 5.97); these were merged to generate group 5678; thereafter, contrast sensitivity and global and individual IND-VFQ33 scores could differentiate between different grades of disability in the five new groups. Subjective criteria made it possible to objectively reclassify visual disability. Visual disability grades could be redefined to accommodate all, from zero to 100%.

  18. Foraging through multiple target categories reveals the flexibility of visual working memory.

    Science.gov (United States)

    Kristjánsson, Tómas; Kristjánsson, Árni

    2018-02-01

    A key assumption in the literature on visual attention is that templates, actively maintained in visual working memory (VWM), guide visual attention. An important question therefore involves the nature and capacity of VWM. According to load theories, more than one search template can be active at the same time, and capacity is determined by the total load rather than a precise number of templates. By an alternative account, only one search template can be active within visual working memory at any given time, while other templates are in an accessory state but do not affect visual selection. We addressed this question by varying the number of targets and distractors in a visual foraging task for 40 targets among 40 distractors in two ways: 1) Fixed-distractor-number, involving two distractor types while target categories varied from one to four; 2) Fixed-color-number (7), so that if there were two target types, there were five distractor types, while if target number increased to three, distractor types were four (etc.). The two accounts make differing predictions. Under the single-template account, we should expect large switch costs as target types increase to two, but switch costs should not increase much as target types increase beyond two. Load accounts predict an approximately linear increase in switch costs with increased target type number. The results were that switch costs increased roughly linearly in both conditions, in line with load accounts. The results are discussed in light of recent proposals that working memory reflects lingering neural activity at various sites that operate on the stimuli in each case, and of findings showing neurally silent working memory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
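
Switch costs in foraging data like these are computed from the inter-selection intervals: the time to select the next target when its category differs from the previous selection, minus the time when the category repeats. A minimal sketch with hypothetical selection records:

```python
def switch_cost(selections):
    """Mean inter-target time on category switches minus repetitions.
    `selections` is a list of (category, inter_target_time_ms) tuples;
    the first selection's time has no preceding selection and is ignored."""
    rep, sw = [], []
    for (prev_cat, _), (cat, t) in zip(selections, selections[1:]):
        (sw if cat != prev_cat else rep).append(t)
    return sum(sw) / len(sw) - sum(rep) / len(rep)

# Hypothetical foraging run over two target categories (times in ms).
run = [('red', 400), ('red', 300), ('green', 650),
       ('green', 310), ('red', 640), ('red', 305)]
print(switch_cost(run))  # 340.0
```

Computing this cost as a function of the number of active target categories is what distinguishes the two accounts: roughly constant beyond two categories under the single-template account, linearly increasing under load accounts.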

  19. [Symptoms and lesion localization in visual agnosia].

    Science.gov (United States)

    Suzuki, Kyoko

    2004-11-01

    There are two cortical visual processing streams, the ventral and the dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream can result in impairment of visual recognition; we therefore need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, differ from the others in that patients can recognize a face as a face and buildings as buildings, but cannot identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition has been confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Larger lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, in agreement with neuroimaging studies that revealed activation of the bilateral occipito-temporal cortex during object recognition tasks.

  20. Online decoding of object-based attention using real-time fMRI.

    Science.gov (United States)

    Niazi, Adnan M; van den Broek, Philip L C; Klanke, Stefan; Barth, Markus; Poel, Mannes; Desain, Peter; van Gerven, Marcel A J

    2014-01-01

    Visual attention is used to selectively filter relevant information depending on current task demands and goals. Visual attention is called object-based attention when it is directed to coherent forms or objects in the visual field. This study used real-time functional magnetic resonance imaging for moment-to-moment decoding of attention to spatially overlapped objects belonging to two different object categories. First, a whole-brain classifier was trained on pictures of faces and places. Subjects then saw transparently overlapped pictures of a face and a place, and attended to only one of them while ignoring the other. The category of the attended object, face or place, was decoded on a scan-by-scan basis using the previously trained decoder. The decoder performed at 77.6% accuracy, indicating that despite competing bottom-up sensory input, object-based visual attention biased neural patterns towards that of the attended object. Furthermore, a comparison between different classification approaches indicated that the representation of faces and places is distributed rather than focal. This implies that real-time decoding of object-based attention requires a multivariate decoding approach that can detect these distributed patterns of cortical activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
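
    The scan-by-scan decoding logic can be sketched on simulated data. This uses a nearest-centroid classifier as a stand-in for the study's whole-brain decoder; the voxel patterns, noise level, and attentional mixing weights below are all hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox = 200
    face_proto = rng.normal(size=n_vox)     # hypothetical mean voxel pattern for faces
    place_proto = rng.normal(size=n_vox)    # hypothetical mean voxel pattern for places

    def scans(proto, n, noise=1.0):
        """Noisy single-scan patterns around a category prototype."""
        return proto + noise * rng.normal(size=(n, n_vox))

    # Training phase: unimodal face and place presentations
    X_train = np.vstack([scans(face_proto, 50), scans(place_proto, 50)])
    y_train = np.array([0] * 50 + [1] * 50)          # 0 = face, 1 = place
    centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in (0, 1)])

    # Test phase: overlapped stimulus, attention biases activity toward "face"
    overlap = 0.7 * face_proto + 0.3 * place_proto
    test_scans = scans(overlap, 20)
    dists = ((test_scans[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    pred = dists.argmin(axis=1)                      # scan-by-scan decisions
    print((pred == 0).mean())                        # fraction decoded as "face"
    ```

    The point of the sketch is that even with a 70/30 mixed input, the attentional bias pushes most scans to the attended category's side of the decision boundary.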

  1. Single-trial multisensory memories affect later auditory and visual object discrimination.

    Science.gov (United States)

    Thelen, Antonia; Talsma, Durk; Murray, Micah M

    2015-05-01

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization and the equivalence of effects when memory discrimination was being performed in the visual vs. auditory modality were at the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand. 

  2. Object versus spatial visual mental imagery in patients with schizophrenia

    Science.gov (United States)

    Aleman, André; de Haan, Edward H.F.; Kahn, René S.

    2005-01-01

    Objective Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with schizophrenia would show greater impairment regarding object imagery than spatial imagery. Methods Forty-four patients with schizophrenia and 20 healthy control subjects were tested on a task of object visual mental imagery and on a task of spatial visual mental imagery. Both tasks included a condition in which no imagery was needed for adequate performance, but which was in other respects identical to the imagery condition. This allowed us to adjust for nonspecific differences in individual performance. Results The results revealed a significant difference between patients and controls on the object imagery task (F1,63 = 11.8, p = 0.001) but not on the spatial imagery task (F1,63 = 0.14, p = 0.71). To test for a differential effect, we conducted a 2 (patients v. controls) × 2 (object task v. spatial task) analysis of variance. The interaction term was statistically significant (F1,62 = 5.2, p = 0.026). Conclusions Our findings suggest a differential dysfunction of systems mediating object and spatial visual mental imagery in schizophrenia. PMID:15644999

  3. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with a k nearest neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluated the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
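
    A passive-aggressive update for a learned bilinear similarity can be sketched as follows. This is a generic PA-I-style update for a similarity s(a, b) = aᵀMb trained on triplets (in the spirit of OASIS-type metric learning), not the authors' exact algorithm; the example vectors are arbitrary:

    ```python
    import numpy as np

    def pa_similarity_update(M, x, x_pos, x_neg, C=1.0):
        """One passive-aggressive step on a bilinear similarity s(a, b) = a @ M @ b.
        Nudges M so that s(x, x_pos) >= s(x, x_neg) + 1 (hinge loss with margin 1)."""
        loss = max(0.0, 1.0 - x @ M @ x_pos + x @ M @ x_neg)
        if loss == 0.0:
            return M                              # passive: constraint already satisfied
        V = np.outer(x, x_pos - x_neg)            # gradient of the hinge term w.r.t. M
        tau = min(C, loss / (V * V).sum())        # PA-I step size, capped by C
        return M + tau * V

    rng = np.random.default_rng(1)
    d = 5
    M = np.eye(d)
    x, xp, xn = rng.normal(size=(3, d))
    M2 = pa_similarity_update(M, x, xp, xn)
    # The margin s(x, xp) - s(x, xn) never decreases after an update:
    print(x @ M2 @ xp - x @ M2 @ xn >= x @ M @ xp - x @ M @ xn)  # → True
    ```

    The margin guarantee follows because the update changes the margin by tau · (x·x) · ‖x_pos − x_neg‖², which is non-negative.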

  4. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    Science.gov (United States)

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  5. Change blindness and visual memory: visual representations get rich and act poor.

    Science.gov (United States)

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  6. Supporting Sensemaking of Complex Objects with Visualizations: Visibility and Complementarity of Interactions

    Directory of Open Access Journals (Sweden)

    Kamran Sedig

    2016-10-01

    Full Text Available Making sense of complex objects is difficult, and typically requires the use of external representations to support cognitive demands while reasoning about the objects. Visualizations are one type of external representation that can be used to support sensemaking activities. In this paper, we investigate the role of two design strategies in making the interactive features of visualizations more supportive of users’ exploratory needs when trying to make sense of complex objects. These two strategies are visibility and complementarity of interactions. We employ a theoretical framework concerned with human–information interaction and complex cognitive activities to inform, contextualize, and interpret the effects of the design strategies. The two strategies are incorporated in the design of Polyvise, a visualization tool that supports making sense of complex four-dimensional geometric objects. A mixed-methods study was conducted to evaluate the design strategies and the overall usability of Polyvise. We report the findings of the study, discuss some implications for the design of visualization tools that support sensemaking of complex objects, and propose five design guidelines. We anticipate that our results are transferrable to other contexts, and that these two design strategies can be used broadly in visualization tools intended to support activities with complex objects and information spaces.

  7. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms.

    Science.gov (United States)

    Bizley, Jennifer K; Maddox, Ross K; Lee, Adrian K C

    2016-02-01

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Objective Evaluation of Visual Fatigue Using Binocular Fusion Maintenance.

    Science.gov (United States)

    Hirota, Masakazu; Morimoto, Takeshi; Kanda, Hiroyuki; Endo, Takao; Miyoshi, Tomomitsu; Miyagawa, Suguru; Hirohara, Yoko; Yamaguchi, Tatsuo; Saika, Makoto; Fujikado, Takashi

    2018-03-01

    In this study, we investigated whether an individual's visual fatigue can be evaluated objectively and quantitatively from their ability to maintain binocular fusion. Binocular fusion maintenance (BFM) was measured using a custom-made binocular open-view Shack-Hartmann wavefront aberrometer equipped with liquid crystal shutters, wherein eye movements and wavefront aberrations were measured simultaneously. Transmittance in the liquid crystal shutter in front of the subject's nondominant eye was reduced linearly, and BFM was determined from the transmittance at the point when binocular fusion was broken and vergence eye movement was induced. In total, 40 healthy subjects underwent the BFM test and completed a questionnaire regarding subjective symptoms before and after a visual task lasting 30 minutes. BFM was significantly reduced after the visual task, and the reduction was significantly associated with the subjective eye symptom score (adjusted R² = 0.752), suggesting that BFM can be used to evaluate objectively the visual fatigue induced by visual display devices, such as head-mounted displays.

  9. Category Learning Research in the Interactive Online Environment Second Life

    Science.gov (United States)

    Andrews, Jan; Livingston, Ken; Sturm, Joshua; Bliss, Daniel; Hawthorne, Daniel

    2011-01-01

    The interactive online environment Second Life allows users to create novel three-dimensional stimuli that can be manipulated in a meaningful yet controlled environment. These features suggest Second Life's utility as a powerful tool for investigating how people learn concepts for unfamiliar objects. The first of two studies was designed to establish that cognitive processes elicited in this virtual world are comparable to those tapped in conventional settings by attempting to replicate the established finding that category learning systematically influences perceived similarity. From the perspective of an avatar, participants navigated a course of unfamiliar three-dimensional stimuli and were trained to classify them into two labeled categories based on two visual features. Participants then gave similarity ratings for pairs of stimuli and their responses were compared to those of control participants who did not learn the categories. Results indicated significant compression, whereby objects classified together were judged to be more similar by learning than control participants, thus supporting the validity of using Second Life as a laboratory for studying human cognition. A second study used Second Life to test the novel hypothesis that effects of learning on perceived similarity do not depend on the presence of verbal labels for categories. We presented the same stimuli but participants classified them by selecting between two complex visual patterns designed to be extremely difficult to label. While learning was more challenging in this condition, those who did learn without labels showed a compression effect identical to that found in the first study using verbal labels. Together these studies establish that at least some forms of human learning in Second Life parallel learning in the actual world and thus open the door to future studies that will make greater use of the enriched variety of objects and interactions possible in simulated environments.

  10. Size matters: large objects capture attention in visual search.

    Science.gov (United States)

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study had satisfied those criteria. Here, visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  11. Visual Neurons in the Superior Colliculus Discriminate Many Objects by Their Historical Values

    Directory of Open Access Journals (Sweden)

    Whitney S. Griggs

    2018-06-01

    Full Text Available The superior colliculus (SC) is an important structure in the mammalian brain that orients the animal toward distinct visual events. Visually responsive neurons in SC are modulated by visual object features, including size, motion, and color. However, it remains unclear whether SC activity is modulated by non-visual object features, such as the reward value associated with the object. To address this question, three monkeys were trained (>10 days) to saccade to multiple fractal objects, half of which were consistently associated with large rewards while the other half were associated with small rewards. This created historically high-valued (‘good’) and low-valued (‘bad’) objects. During the neuronal recordings from the SC, the monkeys maintained fixation at the center while the objects were flashed in the receptive field of the neuron without any reward. We found that approximately half of the visual neurons responded more strongly to the good than bad objects. In some neurons, this value-coding remained intact for a long time (>1 year) after the last object-reward association learning. Notably, the neuronal discrimination of reward values started about 100 ms after the appearance of visual objects and lasted for more than 100 ms. These results provide evidence that SC neurons can discriminate objects by their historical (long-term) values. This object value information may be provided by the basal ganglia, especially the circuit originating from the tail of the caudate nucleus. The information may be used by the neural circuits inside SC for motor (saccade) output or may be sent to the circuits outside SC for future behavior.

  12. Crossmodal Activation of Visual Object Regions for Auditorily Presented Concrete Words

    Directory of Open Access Journals (Sweden)

    Jasper J F van den Bosch

    2011-10-01

    Full Text Available Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings (‘car’) as well as words with abstract meanings (‘hope’). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects compared to words for abstract concepts with visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized LOC for individual subjects and a preliminary analysis showed a trend for a concreteness effect in this region-of-interest on the group level. Appropriate further analysis might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.

  13. Nouns, verbs, objects, actions, and abstractions: local fMRI activity indexes semantics, not lexical categories.

    Science.gov (United States)

    Moseley, Rachel L; Pulvermüller, Friedemann

    2014-05-01

    Noun/verb dissociations in the literature defy interpretation due to the confound between lexical category and semantic meaning; nouns and verbs typically describe concrete objects and actions. Abstract words, pertaining to neither, are a critical test case: dissociations along lexical-grammatical lines would support models purporting lexical category as the principle governing brain organisation, whilst semantic models predict dissociation between concrete words but not abstract items. During fMRI scanning, participants read orthogonalised word categories of nouns and verbs, with or without concrete, sensorimotor meaning. Analysis of inferior frontal/insula, precentral and central areas revealed an interaction between lexical class and semantic factors with clear category differences between concrete nouns and verbs but not abstract ones. Though the brain stores the combinatorial and lexical-grammatical properties of words, our data show that topographical differences in brain activation, especially in the motor system and inferior frontal cortex, are driven by semantics and not by lexical class. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Human object-similarity judgments reflect and transcend the primate-IT object representation

    Directory of Open Access Journals (Sweden)

    Marieke eMur

    2013-03-01

    Full Text Available Primate inferior temporal (IT) cortex is thought to contain a high-level representation of objects at the interface between vision and semantics. This suggests that the perceived similarity of real-world objects might be predicted from the IT representation. Here we show that objects that elicit similar activity patterns in human IT tend to be judged as similar by humans. The IT representation explained the human judgments better than early visual cortex, other ventral stream regions, and a range of computational models. Human similarity judgments exhibited category clusters that reflected several categorical divisions that are prevalent in the IT representation of both human and monkey, including the animate/inanimate and the face/body division. Human judgments also reflected the within-category representation of IT. However, the judgments transcended the IT representation in that they introduced additional categorical divisions. In particular, human judgments emphasized additional human-related divisions between human and nonhuman animals and between man-made and natural objects. Human IT was more similar to monkey IT than to human judgments. One interpretation is that IT has evolved visual feature detectors that distinguish between animates and inanimates and between faces and bodies because these divisions are fundamental to survival and reproduction for all primate species, and that other brain systems serve to more flexibly introduce species-dependent and evolutionarily more recent divisions.
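
    Comparisons of this kind rest on correlating representational dissimilarity matrices (RDMs) between a brain region and behavior. A minimal sketch on synthetic data, where the two-cluster "category" structure and all noise levels are assumptions for illustration:

    ```python
    import numpy as np

    def rdm(patterns):
        """Representational dissimilarity matrix: 1 - correlation between row patterns."""
        return 1.0 - np.corrcoef(patterns)

    def rdm_similarity(rdm_a, rdm_b):
        """Compare two RDMs by correlating their upper triangles (off-diagonal pairs)."""
        iu = np.triu_indices_from(rdm_a, k=1)
        return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

    rng = np.random.default_rng(2)
    # Hypothetical "IT" responses: 6 objects in two categories of 3 (shared component)
    base = rng.normal(size=(2, 50))
    it_patterns = np.repeat(base, 3, axis=0) + 0.5 * rng.normal(size=(6, 50))
    # Hypothetical "behavioral" representation: a noisy copy of the same structure
    behav_patterns = it_patterns + 0.3 * rng.normal(size=(6, 50))

    sim = rdm_similarity(rdm(it_patterns), rdm(behav_patterns))
    print(round(sim, 2))  # high: the behavioral RDM mirrors the category structure
    ```

    In the study's terms, a high RDM correlation is what it means for similarity judgments to "reflect" the IT representation; divisions present in one RDM but absent from the other are what "transcend" means.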

  15. Use of interactive data visualization in multi-objective forest planning.

    Science.gov (United States)

    Haara, Arto; Pykäläinen, Jouni; Tolvanen, Anne; Kurttila, Mikko

    2018-03-15

    Common to multi-objective forest planning situations is that they all require comparisons, searches and evaluation among decision alternatives. Through these actions, the decision maker can learn from the information presented and thus make well-justified decisions. Interactive data visualization is an evolving approach that supports learning and decision making in multidimensional decision problems and planning processes. Data visualization contributes to the formation of mental images, and this process is further boosted by allowing interaction with the data. In this study, we introduce a multi-objective forest planning decision problem framework and the corresponding characteristics of data. We utilize the framework with example planning data to illustrate and evaluate the potential of 14 interactive data visualization techniques to support multi-objective forest planning decisions. Furthermore, broader utilization possibilities of these techniques to incorporate the provisioning of ecosystem services into forest management and planning are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features.

    Science.gov (United States)

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-01-01

    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
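
    The decode-then-identify procedure can be sketched with a closed-form ridge decoder on synthetic data. The generative model, dimensions, and category-average features below are all hypothetical stand-ins for the study's fMRI data and DNN layers:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_vox, n_feat = 100, 20
    W_true = rng.normal(size=(n_vox, n_feat))   # hypothetical feature-to-voxel mapping

    # "Stimulus-induced" training data: brain activity labeled with DNN feature values
    F_train = rng.normal(size=(300, n_feat))
    X_train = F_train @ W_true.T + 0.1 * rng.normal(size=(300, n_vox))

    # Ridge decoder from voxels to DNN features (closed form)
    lam = 1.0
    W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_vox), X_train.T @ F_train)

    # Category-average DNN features for two candidate dream categories (hypothetical)
    cat_feats = rng.normal(size=(2, n_feat))
    # A "dream" scan generated from category 0's features
    dream_X = cat_feats[0] @ W_true.T + 0.1 * rng.normal(size=n_vox)
    decoded = dream_X @ W

    # Identify the dreamed category by matching decoded features to the averages
    corrs = [np.corrcoef(decoded, c)[0, 1] for c in cat_feats]
    print(int(np.argmax(corrs)))
    ```

    The key idea mirrored here is that identification never requires dream-labeled training data: the decoder is trained only on stimulus-induced activity, and categories are recovered by matching in feature space.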

  17. Visual hull method for tomographic PIV measurement of flow around moving objects

    Energy Technology Data Exchange (ETDEWEB)

    Adhikari, D.; Longmire, E.K. [University of Minnesota, Department of Aerospace Engineering and Mechanics, Minneapolis, MN (United States)

    2012-10-15

    Tomographic particle image velocimetry (PIV) is a recently developed method to measure three components of velocity within a volumetric space. We present a visual hull technique that automates identification and masking of discrete objects within the measurement volume, and we apply existing tomographic PIV reconstruction software to measure the velocity surrounding the objects. The technique is demonstrated by considering flow around falling bodies of different shape with Reynolds number ∼1,000. Acquired image sets are processed using separate routines to reconstruct both the volumetric mask around the object and the surrounding tracer particles. After particle reconstruction, the reconstructed object mask is used to remove any ghost particles that otherwise appear within the object volume. Velocity vectors corresponding with fluid motion can then be determined up to the boundary of the visual hull without being contaminated or affected by the neighboring object velocity. Although the visual hull method is not meant for precise tracking of objects, the reconstructed object volumes nevertheless can be used to estimate the object location and orientation at each time step. (orig.)
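
    The core of a visual hull reconstruction is intersecting back-projected silhouettes from multiple views. A minimal sketch assuming orthographic, axis-aligned views of a cubic volume (real tomographic setups use calibrated perspective camera models):

    ```python
    import numpy as np

    def visual_hull(silhouettes):
        """Intersect back-projected binary silhouettes (orthographic, axis-aligned
        views of a cubic volume) into a conservative 3-D object mask."""
        sil_xy, sil_xz, sil_yz = silhouettes     # views along z, y, x
        n = sil_xy.shape[0]                      # assumes an n x n x n volume
        hull = np.ones((n, n, n), dtype=bool)
        hull &= sil_xy[:, :, None]               # constraint from the z-view
        hull &= sil_xz[:, None, :]               # constraint from the y-view
        hull &= sil_yz[None, :, :]               # constraint from the x-view
        return hull

    # Toy example: a 2x2x2 cube inside an 8^3 volume
    vol = np.zeros((8, 8, 8), dtype=bool)
    vol[3:5, 3:5, 3:5] = True
    sils = (vol.any(2), vol.any(1), vol.any(0))  # three orthographic silhouettes
    hull = visual_hull(sils)
    print(hull.sum())  # → 8 voxels; here the hull equals the cube exactly
    ```

    In the PIV context, the resulting mask plays the role described above: reconstructed particles falling inside the hull can be discarded as ghosts before velocity interrogation.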

  18. Robustness and prediction accuracy of machine learning for objective visual quality assessment

    OpenAIRE

    HINES, ANDREW

    2014-01-01

    Published: Lisbon, Portugal. Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specific...

  19. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    2018-01-01

    In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value, as convex objects. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the Difference of Convex Algorithm (DCA) in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets.
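
    DCA minimizes f = g − h (both convex) by repeatedly linearizing h at the current iterate and solving the resulting convex subproblem. A one-dimensional sketch on a toy function with a closed-form subproblem (unrelated to the paper's visualization objective):

    ```python
    def dca(x0, steps=50):
        """DCA for f(x) = x**4 - x**2, split as g(x) = x**4 minus h(x) = x**2
        (both convex). Each iteration minimizes g(x) - h'(x_k) * x, i.e. solves
        4 x**3 = 2 x_k, which has the closed-form root x = (x_k / 2)**(1/3)."""
        x = x0
        for _ in range(steps):
            if x >= 0:
                x = (x / 2.0) ** (1.0 / 3.0)
            else:
                x = -((-x / 2.0) ** (1.0 / 3.0))
        return x

    print(round(dca(1.0), 4))  # → 0.7071, i.e. 1/sqrt(2), a stationary point of f
    ```

    The iterates satisfy f(x_{k+1}) ≤ f(x_k) because the linearized h minorizes h; this monotone-descent property is what makes DCA efficient on the paper's nonconvex layout objective.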

  20. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Ensemble coding remains accurate under object and spatial visual working memory load.

    Science.gov (United States)

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  2. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    The Bag-of-visual-Words (BoW) representation has been applied to various problems in multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and word combinations that are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVWs and DVPs outperforms the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency, and memory consumption, and the proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision while being about 11 times faster.
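The core of DVP generation, identifying frequently co-occurring visual word pairs, can be sketched with a plain co-occurrence count over BoW-quantized images. This is a minimal sketch of the co-occurrence step only, not the paper's full framework; the `min_support` threshold and the set-per-image representation are assumptions.

```python
from collections import Counter
from itertools import combinations

def descriptive_pair_candidates(images, min_support=2):
    """Count visual-word pairs that co-occur within an image.

    `images` is a list of sets of visual-word ids (one set per image);
    pairs appearing in at least `min_support` images are kept as
    descriptive-visual-phrase (DVP) candidates.
    """
    pair_counts = Counter()
    for words in images:
        for pair in combinations(sorted(words), 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_support}
```

For three toy images quantized to word ids, `descriptive_pair_candidates([{1, 2, 3}, {1, 2}, {2, 3}])` keeps the pairs (1, 2) and (2, 3), each supported by two images.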

  3. ROBUSTNESS AND PREDICTION ACCURACY OF MACHINE LEARNING FOR OBJECTIVE VISUAL QUALITY ASSESSMENT

    OpenAIRE

    Hines, Andrew; Kendrick, Paul; Barri, Adriaan; Narwaria, Manish; Redi, Judith A.

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptim...

  4. Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator

    Directory of Open Access Journals (Sweden)

    Huangsheng Xie

    2014-01-01

    Full Text Available This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. Firstly, a new kind of artificial object mark based on QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Secondly, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator’s kinematic model is established, the analytical inverse kinematic solutions of the generalized manipulator are acquired, and a novel active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to complete the object-grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by the variety of household objects and the complexity of the operation, and the proposed visual servo scheme makes it possible for the service robot to grasp and deliver objects efficiently.

  5. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  6. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
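The efficiency index used in both of these reports, the slope of the RT × Set Size function in ms/item, is an ordinary least-squares slope over condition means. A minimal sketch (the RT values below are invented for illustration):

```python
def search_slope(set_sizes, rts):
    """Least-squares slope (ms/item) of mean reaction time against set size."""
    n = len(set_sizes)
    mean_x = sum(set_sizes) / n
    mean_y = sum(rts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, rts))
    var = sum((x - mean_x) ** 2 for x in set_sizes)
    return cov / var

# Hypothetical condition means following RT = 500 + 5 ms/item:
slope = search_slope([4, 8, 16], [520.0, 540.0, 580.0])
```

A shallow slope (a few ms/item, as in Experiment 1) indicates efficient search; a steep slope (tens of ms/item, as in Experiment 3) indicates that each additional candidate region adds appreciable search time.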

  7. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in a periodic or non-periodic motion, so that the object emits periodic or non-periodic events. To create a more realistic scenario, we placed another event source (a metronome) in the environment. As a result, we achieved a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) relating to robot motion (efferent signals).
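The "simultaneity" cue can be approximated by asking what fraction of motion onsets have a sound onset within a small tolerance window. This sketch is a toy stand-in for the paper's grouping procedure, not a reimplementation; the tolerance value is an assumption.

```python
def correspondence_score(motion_onsets, sound_onsets, tol=0.05):
    """Fraction of motion onsets (seconds) matched by a sound onset
    within `tol` seconds -- a toy 'simultaneity' measure."""
    if not motion_onsets:
        return 0.0
    hits = sum(any(abs(m - s) <= tol for s in sound_onsets)
               for m in motion_onsets)
    return hits / len(motion_onsets)
```

With motion onsets at 0.0, 1.0, and 2.0 s and sound onsets at 0.01, 1.02, and 5.0 s, two of the three motion onsets are matched; a distractor source such as the metronome contributes unmatched sound onsets without raising the score.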

  8. An insect-inspired model for visual binding I: learning objects and their characteristics.

    Science.gov (United States)

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.

  9. A Visual Short-Term Memory Advantage for Objects of Expertise

    Science.gov (United States)

    Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel

    2009-01-01

    Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects; this advantage may stem from the holistic nature of face processing. If the holistic processing explains this advantage, object expertise--which also relies on holistic processing--should endow experts…

  10. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hard and open problem, object classification has been receiving extensive interest, with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model with a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method thus has clear advantages in biological plausibility. Experimental results demonstrate that our method improves classification performance significantly.
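The attend-then-classify pipeline can be illustrated with a toy center-surround saliency map that selects the region to crop before classification. This is a deliberately crude stand-in for the learning-based saliency model in the paper, and the grid-of-lists image format is an assumption.

```python
def saliency_map(image, radius=1):
    """Toy center-surround saliency: |pixel - local neighborhood mean|."""
    h, w = len(image), len(image[0])
    sal = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [image[y][x]
                     for y in range(max(0, i - radius), min(h, i + radius + 1))
                     for x in range(max(0, j - radius), min(w, j + radius + 1))]
            sal[i][j] = abs(image[i][j] - sum(neigh) / len(neigh))
    return sal

def most_salient_point(image):
    """Location at which to center the crop passed on to a classifier."""
    sal = saliency_map(image)
    return max(((i, j) for i in range(len(sal)) for j in range(len(sal[0]))),
               key=lambda p: sal[p[0]][p[1]])
```

On a 5 × 5 image that is zero everywhere except one bright pixel, `most_salient_point` returns that pixel's coordinates; a real pipeline would crop around that location and hand the patch to the CNN feature extractor.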

  11. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times during distance field construction, and similar sets of polygons are usually selected as close polygons for close voxels. Exploiting this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation, so that the fast shared-memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes in most cases. After the distance field construction, thickness/clearance visualization at a near-interactive rate is achieved.
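A distance field simply stores, per voxel, the distance to the nearest surface element. The following 2D brute-force sketch evaluates every voxel-sample pair; the paper's GPU algorithm instead culls distant polygons and shares work among clusters of close voxels, which this sketch does not attempt.

```python
import math

def distance_field(surface_points, width, height):
    """Brute-force 2D distance field over an integer grid.

    Returns field[y][x] = distance from cell (x, y) to the nearest
    surface sample point in `surface_points`.
    """
    return [[min(math.hypot(x - px, y - py) for px, py in surface_points)
             for x in range(width)] for y in range(height)]
```

Thickness or clearance at a surface point can then be estimated by stepping through this field along a ray, in the spirit of the paper's modified ray casting.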

  12. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Structural similarity causes different category-effects depending on task characteristics

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2001-01-01

    It has been suggested that category-specific impairments for natural objects may reflect that natural objects are more globally visually similar than artefacts and therefore more difficult to recognize following brain damage [Aphasiology 13 (1992) 169]. This account has been challenged... difference was found on easy object decision tasks. In experiment 2 an advantage for natural objects was found during object decisions performed under degraded viewing conditions (lateralized stimulus presentation). It is argued that these findings can be accounted for by assuming that natural objects... it is in difficult object decision tasks). However, when viewing conditions are degraded and performance tends to depend on global shape information (carried by low spatial frequency components), natural objects may fare better than artefacts because the global shape of natural objects reveals more of their identity...

  14. Visual SLAM and Moving-object Detection for a Small-size Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yin-Tien Wang

    2010-09-01

    Full Text Available In the paper, a novel moving object detection (MOD algorithm is developed and integrated with robot visual Simultaneous Localization and Mapping (vSLAM. The moving object is assumed to be a rigid body and its coordinate system in space is represented by a position vector and a rotation matrix. The MOD algorithm is composed of detection of image features, initialization of image features, and calculation of object coordinates. Experimentation is implemented on a small-size humanoid robot and the results show that the performance of the proposed algorithm is efficient for robot visual SLAM and moving object detection.
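The object coordinate system described above, a position vector plus a rotation matrix, amounts to the rigid-body transform p' = Rp + t. A minimal sketch of applying such a pose to object-frame points follows; the row-major nested-list matrix layout is an assumption, not the paper's data structure.

```python
def apply_pose(R, t, points):
    """Map object-frame points into the world frame: p' = R p + t.

    R is a 3x3 rotation matrix (row-major nested lists), t a 3-vector,
    and `points` a list of 3-tuples in the object frame.
    """
    return [tuple(sum(R[i][k] * p[k] for k in range(3)) + t[i]
                  for i in range(3))
            for p in points]
```

For example, a 90-degree rotation about z combined with a unit translation along x sends the object-frame point (1, 0, 0) to (1, 1, 0) in the world frame.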

  15. The Precategorical Nature of Visual Short-Term Memory

    Science.gov (United States)

    Quinlan, Philip T.; Cohen, Dale J.

    2016-01-01

    We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…

  16. Semantic priming effects of synonyms, antonyms, frame, implication and verb-object categories

    Directory of Open Access Journals (Sweden)

    Elsa Skënderi-Rakipllari

    2017-12-01

    Full Text Available Semantic priming has been a major subject of interest for psycholinguists, whose aim is to discover how lexical memory is structured and organized. The facilitation of word retrieval through semantic priming has long been studied. The present research aims to reveal which semantic category has the strongest priming effect. In a lexical decision task experiment, we compared the reaction times of masked primed pairs and unprimed pairs. In addition, we analyzed the reaction times and priming effects of five semantic relations: antonymy, frame, synonymy, implication and verb-object. The data revealed that the mean reaction times of primed pairs were shorter than those of unprimed pairs. As to semantic priming, the most significantly primed pairs were those of implications and verb-objects, and not those of synonymy or antonymy, as might be expected.
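The dependent measure in such a lexical decision study is the priming effect: the mean unprimed reaction time minus the mean primed reaction time, computed separately per semantic relation. A trivial sketch (the RT values in the usage note are invented):

```python
from statistics import mean

def priming_effect(primed_rts, unprimed_rts):
    """Priming effect in ms; positive values indicate facilitation."""
    return mean(unprimed_rts) - mean(primed_rts)
```

Comparing this quantity across relation types (antonymy, frame, synonymy, implication, verb-object) is what ranks their priming strength; for instance, `priming_effect([500, 520], [560, 580])` gives a 60 ms facilitation.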

  17. The semantic category-based grouping in the Multiple Identity Tracking task.

    Science.gov (United States)

    Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao

    2018-01-01

    In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.

  18. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, which together aim to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of the opportunities they offer for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism, and for their interactive visualization, 3D models are highly effective and intuitive for present-day users, who have stringent requirements and high expectations. Depending on the complexity of the objects in a specific case, various technological methods can be applied. The objects selected for this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses of the principles and technological processes needed for 3D modelling and visualization are presented, along with the recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  19. What Makes an Object Memorable?

    KAUST Repository

    Dubey, Rachit

    2016-02-19

    Recent studies on image memorability have shed light on what distinguishes the memorability of different images and the intrinsic and extrinsic properties that make those images memorable. However, a clear understanding of the memorability of specific objects inside an image remains elusive. In this paper, we provide the first attempt to answer the question: what exactly is remembered about an image? We augment both the images and object segmentations from the PASCAL-S dataset with ground truth memorability scores and shed light on the various factors and properties that make an object memorable (or forgettable) to humans. We analyze various visual factors that may influence object memorability (e.g. color, visual saliency, and object categories). We also study the correlation between object and image memorability and find that image memorability is greatly affected by the memorability of its most memorable object. Lastly, we explore the effectiveness of deep learning and other computational approaches in predicting object memorability in images. Our efforts offer a deeper understanding of memorability in general thereby opening up avenues for a wide variety of applications. © 2015 IEEE.

  20. What Makes an Object Memorable?

    KAUST Repository

    Dubey, Rachit; Peterson, Joshua; Khosla, Aditya; Yang, Ming-Hsuan; Ghanem, Bernard

    2016-01-01

    Recent studies on image memorability have shed light on what distinguishes the memorability of different images and the intrinsic and extrinsic properties that make those images memorable. However, a clear understanding of the memorability of specific objects inside an image remains elusive. In this paper, we provide the first attempt to answer the question: what exactly is remembered about an image? We augment both the images and object segmentations from the PASCAL-S dataset with ground truth memorability scores and shed light on the various factors and properties that make an object memorable (or forgettable) to humans. We analyze various visual factors that may influence object memorability (e.g. color, visual saliency, and object categories). We also study the correlation between object and image memorability and find that image memorability is greatly affected by the memorability of its most memorable object. Lastly, we explore the effectiveness of deep learning and other computational approaches in predicting object memorability in images. Our efforts offer a deeper understanding of memorability in general thereby opening up avenues for a wide variety of applications. © 2015 IEEE.

  1. Sequential sampling of visual objects during sustained attention.

    Directory of Open Access Journals (Sweden)

    Jianrong Jia

    2017-06-01

    Full Text Available In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether this dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanisms in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest

  2. Evaluating Color Descriptors for Object and Scene Recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2010-01-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been

  3. Mobile device geo-localization and object visualization in sensor networks

    Science.gov (United States)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multifunctional application design. The application applies different localization and visualization methods, including the smartphone camera image, and copes well with different scenarios. A generic application workflow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management and military applications.

  4. Supervised and Unsupervised Learning of Multidimensional Acoustic Categories

    Science.gov (United States)

    Goudbeek, Martijn; Swingley, Daniel; Smits, Roel

    2009-01-01

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…

  5. Color categories only affect post-perceptual processes when same- and different-category colors are equally discriminable.

    Science.gov (United States)

    He, Xun; Witzel, Christoph; Forder, Lewis; Clifford, Alexandra; Franklin, Anna

    2014-04-01

    Prior claims that color categories affect color perception are confounded by inequalities in the color space used to equate same- and different-category colors. Here, we equate same- and different-category colors in the number of just-noticeable differences, and measure event-related potentials (ERPs) to these colors on a visual oddball task to establish if color categories affect perceptual or post-perceptual stages of processing. Category effects were found from 200 ms after color presentation, only in ERP components that reflect post-perceptual processes (e.g., N2, P3). The findings suggest that color categories affect post-perceptual processing, but do not affect the perceptual representation of color.

  6. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions, including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  7. Visual long-term memory has a massive storage capacity for object details.

    Science.gov (United States)

    Brady, Timothy F; Konkle, Talia; Alvarez, George A; Oliva, Aude

    2008-09-23

    One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.

  8. Fragile visual short-term memory is an object-based and location-specific store.

    Science.gov (United States)

    Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F

    2013-08-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.

  9. It's all connected: Pathways in visual object recognition and early noun learning.

    Science.gov (United States)

    Smith, Linda B

    2013-11-01

    A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach.

  10. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    Science.gov (United States)

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  11. The Nature of Experience Determines Object Representations in the Visual System

    Science.gov (United States)

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  12. Efficient light scattering through thin semi-transparent objects

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    This paper concerns real-time rendering of thin semi-transparent objects. An object in this category could be a piece of cloth, e.g., a curtain. Semi-transparent objects are visualized most correctly using volume rendering techniques. In general such techniques are, however, intractable for real-ti...... in this new area gives far better results than what is obtainable with a traditional real-time rendering scheme using a constant factor for alpha blending....

  13. Visual awareness of objects and their colour.

    Science.gov (United States)

    Pilling, Michael; Gellatly, Angus

    2011-10-01

    At any given moment, our awareness of what we 'see' before us seems to be rather limited. If, for instance, a display containing multiple objects is shown (red or green disks), when one object is suddenly covered at random, observers are often little better than chance in reporting about its colour (Wolfe, Reinecke, & Brawn, Visual Cognition, 14, 749-780, 2006). We tested whether, when object attributes (such as colour) are unknown, observers still retain any knowledge of the presence of that object at a display location. Experiments 1-3 involved a task requiring two-alternative (yes/no) responses about the presence or absence of a colour-defined object at a probed location. On this task, if participants knew about the presence of an object at a location, responses indicated that they also knew about its colour. A fourth experiment presented the same displays but required a three-alternative response. This task did result in a data pattern consistent with participants' knowing more about the locations of objects within a display than about their individual colours. However, this location advantage, while highly significant, was rather small in magnitude. Results are compared with those of Huang (Journal of Vision, 10(10, Art. 24), 1-17, 2010), who also reported an advantage for object locations, but under quite different task conditions.

  14. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro Eguchi

    2015-08-01

    Full Text Available Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognise the whole object.

  15. Experience moderates overlap between object and face recognition, suggesting a common ability.

    Science.gov (United States)

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: face recognition performance becomes increasingly similar to object recognition performance as object experience increases, so a subject with extensive object experience who performs poorly with objects will also tend to show low face-recognition ability. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.

  16. Effects of verbal and nonverbal interference on spatial and object visual working memory.

    Science.gov (United States)

    Postle, Bradley R; D'Esposito, Mark; Corkin, Suzanne

    2005-03-01

    We tested the hypothesis that a verbal coding mechanism is necessarily engaged by object, but not spatial, visual working memory tasks. We employed a dual-task procedure that paired n-back working memory tasks with domain-specific distractor trials inserted into each interstimulus interval of the n-back tasks. In two experiments, object n-back performance demonstrated greater sensitivity to verbal distraction, whereas spatial n-back performance demonstrated greater sensitivity to motion distraction. Visual object and spatial working memory may differ fundamentally in that the mnemonic representation of featural characteristics of objects incorporates a verbal (perhaps semantic) code, whereas the mnemonic representation of the location of objects does not. Thus, the processes supporting working memory for these two types of information may differ in more ways than those dictated by the "what/where" organization of the visual system, a fact more easily reconciled with a component process than a memory systems account of working memory function.
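
    The n-back logic described in this abstract is simple to state precisely: an item is a target when it matches the item presented n positions earlier in the stream. A minimal sketch (the function name is ours, not from the paper):

```python
def nback_targets(stream, n):
    """Return the indices of items that match the item n steps back."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]
```

    For example, in the stream A-B-A-B with n = 2, positions 2 and 3 are both targets.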

  17. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    Science.gov (United States)

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  18. Transformation-tolerant object recognition in rats revealed by visual priming.

    Science.gov (United States)

    Tafazoli, Sina; Di Filippo, Alessandro; Zoccolan, Davide

    2012-01-04

    Successful use of rodents as models for studying object vision crucially depends on the ability of their visual system to construct representations of visual objects that tolerate (i.e., remain relatively unchanged with respect to) the tremendous changes in object appearance produced, for instance, by size and viewpoint variation. Whether this is the case is still controversial, despite some recent demonstration of transformation-tolerant object recognition in rats. In fact, it remains unknown to what extent such a tolerant recognition has a spontaneous, perceptual basis, or, alternatively, mainly reflects learning of arbitrary associative relations among trained object appearances. In this study, we addressed this question by training rats to categorize a continuum of morph objects resulting from blending two object prototypes. The resulting psychometric curve (reporting the proportion of responses to one prototype along the morph line) served as a reference when, in a second phase of the experiment, either prototype was briefly presented as a prime, immediately before a test morph object. The resulting shift of the psychometric curve showed that recognition became biased toward the identity of the prime. Critically, this bias was observed also when the primes were transformed along a variety of dimensions (i.e., size, position, viewpoint, and their combination) that the animals had never experienced before. These results indicate that rats spontaneously perceive different views/appearances of an object as similar (i.e., as instances of the same object) and argue for the existence of neuronal substrates underlying formation of transformation-tolerant object representations in rats.

  19. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  20. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    Science.gov (United States)

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  1. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    Science.gov (United States)

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and thus follows the same principles as in perception.

  3. The highs and lows of object impossibility: effects of spatial frequency on holistic processing of impossible objects.

    Science.gov (United States)

    Freud, Erez; Avidan, Galia; Ganel, Tzvi

    2015-02-01

    Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information
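
    The LSF/HSF manipulation described above is typically implemented by low-pass filtering an image and taking the residual as the high-pass version. A minimal sketch of this idea (the Gaussian frequency-domain filter and the cutoff parameterization here are our assumptions, not the authors' exact stimulus pipeline):

```python
import numpy as np

def split_spatial_frequencies(image, cutoff_cpd, pixels_per_degree):
    """Split a grayscale image into low- and high-spatial-frequency
    components using a Gaussian low-pass filter in the Fourier domain."""
    h, w = image.shape
    # spatial frequency of each FFT bin, converted to cycles per degree
    fy = np.fft.fftfreq(h) * pixels_per_degree
    fx = np.fft.fftfreq(w) * pixels_per_degree
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    low_pass = np.exp(-(radius ** 2) / (2 * cutoff_cpd ** 2))  # Gaussian low-pass
    spectrum = np.fft.fft2(image)
    lsf = np.real(np.fft.ifft2(spectrum * low_pass))
    hsf = image - lsf  # complementary high-pass residual
    return lsf, hsf
```

    Because the high-pass image is defined as the residual, the two components sum exactly back to the original image.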

  4. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
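
    The trace rule mentioned above can be written as a Hebbian update driven by a temporally smoothed ("trace") version of the postsynaptic firing, so that different views of an object seen close together in time strengthen the same output cells. A schematic sketch (parameter names and the normalization step are our illustrative assumptions, not VisNet's exact implementation):

```python
import numpy as np

def trace_learning_step(w, x, y, y_trace, alpha=0.01, eta=0.8):
    """One step of a Hebbian trace rule: mix current firing y into an
    exponentially decaying trace, then update weights with that trace."""
    y_trace = (1.0 - eta) * y + eta * y_trace        # short-term memory trace
    w = w + alpha * np.outer(y_trace, x)             # Hebbian update using the trace
    # normalize each row so learning stays bounded
    w /= np.maximum(np.linalg.norm(w, axis=1, keepdims=True), 1e-12)
    return w, y_trace
```

    Setting eta = 0 recovers a plain Hebbian rule; larger eta spreads the association over more of the recent temporal context.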

  5. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    Science.gov (United States)

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  6. Converging modalities ground abstract categories: the case of politics.

    Science.gov (United States)

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  7. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.

  8. Semantic and visual determinants of face recognition in a prosopagnosic patient.

    Science.gov (United States)

    Dixon, M J; Bub, D N; Arguin, M

    1998-05-01

    Prosopagnosia is the neuropathological inability to recognize familiar people by their faces. It can occur in isolation or can coincide with recognition deficits for other nonface objects. Often, patients whose prosopagnosia is accompanied by object recognition difficulties have more trouble identifying certain categories of objects relative to others. In previous research, we demonstrated that objects that shared multiple visual features and were semantically close posed severe recognition difficulties for a patient with temporal lobe damage. We now demonstrate that this patient's face recognition is constrained by these same parameters. The prosopagnosic patient ELM had difficulties pairing faces to names when the faces shared visual features and the names were semantically related (e.g., Tonya Harding, Nancy Kerrigan, and Josee Chouinard, three ice skaters). He made tenfold fewer errors when the exact same faces were associated with semantically unrelated people (e.g., singer Celine Dion, actress Betty Grable, and First Lady Hillary Clinton). We conclude that prosopagnosia and co-occurring category-specific recognition problems both stem from difficulties disambiguating the stored representations of objects that share multiple visual features and refer to semantically close identities or concepts.

  9. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    Science.gov (United States)

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that it is commonly argued that the attentional processes studied with the multiple object tracking paradigm apparently match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means to study the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, as well as links to other paradigms investigating visual attention and working memory. Further, we aim at reviewing current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  10. Figure-ground organization and the emergence of proto-objects in the visual cortex

    Directory of Open Access Journals (Sweden)

    Rüdiger von der Heydt

    2015-11-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the classical receptive field. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objecthood, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  11. Converging modalities ground abstract categories: the case of politics.

    Directory of Open Access Journals (Sweden)

    Ana Rita Farias

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  12. Barack Obama Blindness (BOB): Absence of visual awareness to a single object

    Directory of Open Access Journals (Sweden)

    Marjan Persuh

    2016-03-01

    In two experiments we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  13. Tracking Location and Features of Objects within Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Michael Patterson

    2012-10-01

    Four studies examined how color or shape features can be accessed to retrieve the memory of an object's location. In each trial, 6 colored dots (Experiments 1 and 2 or 6 black shapes (Experiments 3 and 4 were displayed in randomly selected locations for 1.5 s. An auditory cue for either the shape or the color to-be-remembered was presented either simultaneously, immediately, or 2 s later. Non-informative cues appeared in some trials to serve as a control condition. After a 4 s delay, 5/6 objects were re-presented, and participants indicated the location of the missing object either by moving the mouse (Experiments 1 and 3, or by typing coordinates using a grid (Experiments 2 and 4. Compared to the control condition, cues presented simultaneously or immediately after stimuli improved location accuracy in all experiments. However, cues presented after 2 s only improved accuracy in Experiment 1. These results suggest that location information may not be addressable within visual working memory using shape features. In Experiment 1, but not Experiments 2–4, cues significantly improved accuracy when they indicated the missing object could be any of the three identical objects. In Experiments 2–4, location accuracy was highly impaired when the missing object came from a group of identical rather than uniquely identifiable objects. This indicates that when items with similar features are presented, location accuracy may be reduced. In summary, both feature type and response mode can influence the accuracy and accessibility of visual working memory for object location.

  14. Different measures of structural similarity tap different aspects of visual object processing

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    The structural similarity of objects has been an important variable in explaining why some objects are easier to categorize at a superordinate level than to individuate, and also why some patients with brain injury have more difficulties in recognizing natural (structurally similar) objects than...... artifacts (structurally distinct objects). In spite of its merits as an explanatory variable, structural similarity is not a unitary construct, and it has been operationalized in different ways. Furthermore, even though measures of structural similarity have been successful in explaining task and category-effects...

  15. Computing with Connections in Visual Recognition of Origami Objects.

    Science.gov (United States)

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray into tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  16. The role of space and time in object-based visual search

    NARCIS (Netherlands)

    Schreij, D.B.B.; Olivers, C.N.L.

    2013-01-01

    Recently we have provided evidence that observers more readily select a target from a visual search display if the motion trajectory of the display object suggests that the observer has dealt with it before. Here we test the prediction that this object-based memory effect on search breaks down if

  17. Priming Contour-Deleted Images: Evidence for Immediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  18. 1/f(2) characteristics and isotropy in the Fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs.

    Science.gov (United States)

    Koch, Michael; Denzler, Joachim; Redies, Christoph

    2010-08-19

    Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power-law with increasing spatial frequency (1/f(2) characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Furthermore, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f(2) characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains
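
    The radially averaged power-spectrum analysis described in this abstract can be sketched numerically: compute the 2D Fourier power spectrum, average it over annuli of equal spatial frequency, and fit a line in log-log coordinates; a slope near -2 indicates 1/f(2) scale invariance. The following NumPy sketch is an illustration of that general procedure, not the authors' pipeline (function names and the synthetic test image are ours):

```python
import numpy as np

def radial_power_slope(image):
    """Log-log slope of the radially averaged Fourier power spectrum.

    A slope near -2 corresponds to the 1/f^2 characteristic of natural
    scenes and artworks discussed in the abstract.
    """
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)      # annulus index per pixel
    r_max = min(cy, cx)                           # stay inside the Nyquist ring
    # Average power within each annulus, skipping the DC term (k = 0)
    radial = np.array([power[r == k].mean() for k in range(1, r_max)])
    slope, _intercept = np.polyfit(np.log(np.arange(1, r_max)), np.log(radial), 1)
    return slope

# Synthetic image with spectral amplitude ~ 1/f, i.e. power ~ 1/f^2
rng = np.random.default_rng(0)
n = 128
f = np.hypot(np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :])
f[0, 0] = 1.0                                     # avoid division by zero at DC
spectrum = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / f
image = np.real(np.fft.ifft2(spectrum))
print(round(radial_power_slope(image), 1))        # close to -2
```

    Because only the slope is of interest, the unknown constant relating pixel radius to cycles per image cancels out of the fit.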

  19. Role of early visual cortex in trans-saccadic memory of object features.

    Science.gov (United States)

    Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas

    2015-08-01

    Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.

  20. When a Picasso is a "Picasso": the entry point in the identification of visual art.

    Science.gov (United States)

    Belke, B; Leder, H; Harsanyi, G; Carbon, C C

    2010-02-01

    We investigated whether art is distinguished from other real world objects in human cognition, in that art allows for a special memorial representation and identification based on artists' specific stylistic appearances. Testing art-experienced viewers, converging empirical evidence from three experiments, which have proved sensitive to addressing the question of initial object recognition, suggests that identification of visual art is at the subordinate level of the producing artist. Specifically, in a free naming task it was found that art-objects as opposed to non-art-objects were most frequently named with subordinate level categories, with the artist's name as the most frequent category (Experiment 1). In a category-verification task (Experiment 2), art-objects were recognized faster than non-art-objects on the subordinate level with the artist's name. In a conceptual priming task, subordinate primes of artists' names facilitated matching responses to art-objects but subordinate primes did not facilitate responses to non-art-objects (Experiment 3). Collectively, these results suggest that the artist's name has a special status in the memorial representation of visual art and serves as a predominant entry point in recognition in art perception. Copyright 2009 Elsevier B.V. All rights reserved.

  1. Deformation-specific and deformation-invariant visual object recognition: pose vs identity recognition of people and deforming objects

    Directory of Open Access Journals (Sweden)

    Tristan J Webb

    2014-04-01

    When we see a human sitting down, standing up, or walking, we can recognise one of these poses independently of the individual, or we can recognise the individual person, independently of the pose. The same issues arise for deforming objects. For example, if we see a flag deformed by the wind, either blowing out or hanging languidly, we can usually recognise the flag, independently of its deformation; or we can recognise the deformation independently of the identity of the flag. We hypothesize that these types of recognition can be implemented by the primate visual system using the temporo-spatial continuity of objects as they transform as a learning principle. In particular, we hypothesize that pose or deformation can be learned under conditions in which large numbers of different people are successively seen in the same pose, or objects in the same deformation. We also hypothesize that person-specific representations that are independent of pose, and object-specific representations that are independent of deformation and view, could be built when individual people or objects are observed successively transforming from one pose or deformation and view to another. These hypotheses were tested in a simulation of the ventral visual system, VisNet, which uses temporal continuity, implemented in a synaptic learning rule with a short-term memory trace of previous neuronal activity, to learn invariant representations. It was found that depending on the statistics of the visual input, either pose-specific or deformation-specific representations could be built that were invariant with respect to individual and view; or identity-specific representations could be built that were invariant with respect to pose or deformation and view. We propose that this is how pose-specific and pose-invariant, and deformation-specific and deformation-invariant, perceptual representations are built in the brain.
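
    The "synaptic learning rule with a short-term memory trace" referred to here is commonly written as a Hebbian trace rule, where the weight update uses a trace of recent postsynaptic activity, y_trace(t) = (1 - eta) * y(t) + eta * y_trace(t-1), so that inputs seen close together in time (e.g. successive views of the same person) strengthen the same weights. The sketch below is a minimal single-neuron illustration of that idea, not the VisNet implementation; the linear neuron, parameter values, and function names are our assumptions:

```python
import numpy as np

def trace_rule_update(w, x_seq, alpha=0.1, eta=0.8):
    """One sweep of a Hebbian trace rule over a sequence of input vectors.

    y_trace blends the current postsynaptic response with its recent history,
    binding temporally adjacent inputs onto the same weight vector.
    """
    y_trace = 0.0
    for x in x_seq:
        y = float(w @ x)                         # postsynaptic rate (linear neuron)
        y_trace = (1 - eta) * y + eta * y_trace  # short-term memory trace
        w = w + alpha * y_trace * x              # Hebbian update gated by the trace
        w = w / np.linalg.norm(w)                # weight normalization
    return w

# Two "views" of the same object shown in temporal succession, repeated
rng = np.random.default_rng(1)
view_a, view_b = rng.random(8), rng.random(8)
w = rng.random(8)
w = trace_rule_update(w, [view_a, view_b, view_a, view_b])
```

    After training on such a sequence, the weight vector comes to respond to both views, which is the mechanism the abstract invokes for building pose- or identity-invariant representations.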

  2. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    Science.gov (United States)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment is to decide whether depth is a determinative factor for visual attention. During the experiment, 28 observers watched the scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using contents with low-level visual features. With univariate tests of significance (MANOVA), it was detected that texture is more important than depth for selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
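
    Building an empirical saliency map from fixation data, as described in this abstract, typically means accumulating duration-weighted fixation points into an image-sized histogram and smoothing with a Gaussian whose width stands in for roughly one degree of visual angle. The NumPy sketch below illustrates that general recipe (the FFT-based blur, parameter values, and all names are our assumptions, not the authors' method):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Gaussian smoothing via the FFT (circular boundary; fine for a sketch)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    transfer = np.exp(-2 * np.pi ** 2 * sigma ** 2 * (fy ** 2 + fx ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * transfer))

def fixation_saliency_map(fixations, shape, sigma=30.0):
    """Duration-weighted fixation histogram, blurred and normalized to [0, 1].

    fixations: iterable of (x, y, duration) tuples in pixel coordinates.
    """
    heat = np.zeros(shape)
    for x, y, duration in fixations:
        heat[int(y), int(x)] += duration
    heat = gaussian_blur(heat, sigma)
    return heat / heat.max()

# Two fixations clustered on one object, one isolated fixation elsewhere
fixations = [(100, 80, 0.25), (105, 85, 0.40), (300, 200, 0.20)]
saliency = fixation_saliency_map(fixations, (360, 640))
```

    Condition differences (2D vs. 3D) can then be quantified by comparing such maps, e.g. via correlation or divergence measures over observers.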

  3. Perceptual Organization of Shape, Color, Shade, and Lighting in Visual and Pictorial Objects

    Directory of Open Access Journals (Sweden)

    Baingio Pinna

    2012-06-01

    The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of the object perception and formation. Our hypothesis is that the main object properties are extracted in sequential order and in the same order that these roles are also used by artists and children of different age to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

  4. Perceptual organization of shape, color, shade, and lighting in visual and pictorial objects.

    Science.gov (United States)

    Pinna, Baingio

    2012-01-01

    The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of the object perception and formation. Our hypothesis is that the main object properties are extracted in sequential order and in the same order that these roles are also used by artists and children of different age to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

  5. The visual system supports online translation invariance for object identification.

    Science.gov (United States)

    Bowers, Jeffrey S; Vankov, Ivan I; Ludwig, Casimir J H

    2016-04-01

    The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.

  6. Knowledge is power: how conceptual knowledge transforms visual cognition.

    Science.gov (United States)

    Collins, Jessica A; Olson, Ingrid R

    2014-08-01

    In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.

  7. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object.

    Science.gov (United States)

    Smith, Nicholas A; Folland, Nicole A; Martinez, Diana M; Trainor, Laurel J

    2017-07-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain, Theunissen, Chevalier, Batty, & Taylor, 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. Copyright © 2017 Elsevier B.V. All rights reserved.
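
    The stimulus construction described here (a complex tone whose harmonics sit at integer multiples of the fundamental, with one harmonic mistuned by 8%) is straightforward to sketch. In the snippet below, the 8% mistuning follows the abstract; the fundamental frequency, number of harmonics, equal amplitudes, sample rate, and duration are illustrative assumptions, not the study's exact stimulus parameters:

```python
import numpy as np

def complex_tone(f0=200.0, n_harmonics=10, mistuned=None, shift=0.08,
                 sr=44100, dur=0.5):
    """Sum of equal-amplitude harmonics at integer multiples of f0.

    If `mistuned` is given, that harmonic number is shifted upward by
    `shift` (0.08 = the 8% mistuning used in the study).
    """
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        factor = 1.0 + shift if k == mistuned else 1.0
        tone += np.sin(2 * np.pi * k * f0 * factor * t)
    return tone / n_harmonics          # scale to keep amplitude in [-1, 1]

in_tune = complex_tone()
mistuned_tone = complex_tone(mistuned=3)
```

    Adults hear `mistuned_tone` as two concurrent sources (the harmonic complex plus a pure tone near the shifted harmonic), which is the percept the infant looking-time measure probes.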

  8. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    Science.gov (United States)

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  9. Autonomous learning of robust visual object detection and identification on a humanoid

    NARCIS (Netherlands)

    Leitner, J.; Chandrashekhariah, P.; Harding, S.; Frank, M.; Spina, G.; Förster, A.; Triesch, J.; Schmidhuber, J.

    2012-01-01

    In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for

  10. Neural Correlates of Body and Face Perception Following Bilateral Destruction of the Primary Visual Cortices

    Directory of Open Access Journals (Sweden)

    Jan eVan den Stock

    2014-02-01

    Non-conscious visual processing of different object categories was investigated in a rare patient with bilateral destruction of the visual cortex (V1) and clinical blindness over the entire visual field. Images of biological and non-biological object categories were presented consisting of human bodies, faces, butterflies, cars, and scrambles. Behaviorally, only the body shape induced higher perceptual sensitivity, as revealed by signal detection analysis. Passive exposure to bodies and faces activated amygdala and superior temporal sulcus. In addition, bodies also activated the extrastriate body area, insula, orbitofrontal cortex (OFC) and cerebellum. The results show that following bilateral damage to the primary visual cortex and ensuing complete cortical blindness, the human visual system is able to process categorical properties of human body shapes. This residual vision may be based on V1-independent input to body-selective areas along the ventral stream, in concert with areas involved in the representation of bodily states, like insula, OFC and cerebellum.
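
    The "perceptual sensitivity, as revealed by signal detection analysis" mentioned above is conventionally the statistic d' = z(hit rate) - z(false-alarm rate). A minimal stdlib sketch follows; the log-linear correction for extreme rates is a common convention, not necessarily the one used in this study, and the counts are made up for illustration:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Perceptual sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each count) keeps rates away
    from 0 and 1, where the z-transform is undefined.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: above-chance detection of body shapes
print(round(d_prime(80, 20, 20, 80), 2))
```

    A d' reliably above zero for bodies but not for the other categories is the pattern the abstract reports.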

  11. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    Science.gov (United States)

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T. Rolls

    2012-06-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning, which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  13. Category-length and category-strength effects using images of scenes.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Boddy, Adam C; Crawshaw, Eloise; Humphreys, Michael S

    2018-06-21

    Global matching models have provided an important theoretical framework for recognition memory. Key predictions of this class of models are that (1) increasing the number of occurrences in a study list of some items affects the performance on other items (list-strength effect) and that (2) adding new items results in a deterioration of performance on the other items (list-length effect). Experimental confirmation of these predictions has been difficult, and the results have been inconsistent. A review of the existing literature, however, suggests that robust length and strength effects do occur when sufficiently similar hard-to-label items are used. In an effort to investigate this further, we had participants study lists containing one or more members of visual scene categories (bathrooms, beaches, etc.). Experiments 1 and 2 replicated and extended previous findings showing that the study of additional category members decreased accuracy, providing confirmation of the category-length effect. Experiment 3 showed that repeating some category members decreased the accuracy of nonrepeated members, providing evidence for a category-strength effect. Experiment 4 eliminated a potential challenge to these results. Taken together, these findings provide robust support for global matching models of recognition memory. The overall list lengths, the category sizes, and the number of repetitions used demonstrated that scene categories are well-suited to testing the fundamental assumptions of global matching models. These include (A) interference from memories for similar items and contexts, (B) nondestructive interference, and (C) that conjunctive information is made available through a matching operation.
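
    A toy version of a global matching computation can make the predicted length and strength effects concrete. The summed-cubed-similarity form below follows MINERVA 2, one classic global matching model; the +1/0/-1 feature encoding and the cubing exponent are properties of that model, not details taken from this study.

```python
# MINERVA 2-style global matching: familiarity ("echo intensity") is the
# sum of cubed similarities between a probe and every stored trace.
# Feature vectors use +1/0/-1 entries; all details here are illustrative.

def similarity(a, b):
    """Normalized dot product over features that are nonzero in either item."""
    n = sum(1 for x, y in zip(a, b) if x != 0 or y != 0)
    return sum(x * y for x, y in zip(a, b)) / max(n, 1)

def echo_intensity(probe, traces):
    """Cubing keeps the sign but down-weights dissimilar traces, so storing
    more (or stronger) similar items raises familiarity for related probes."""
    return sum(similarity(probe, t) ** 3 for t in traces)
```

    Under this scheme, adding or repeating traces similar to a probe raises its familiarity, which is exactly the interference from similar items that the category-length and category-strength effects above test.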

  14. How high is visual short-term memory capacity for object layout?

    Science.gov (United States)

    Sanocki, Thomas; Sellers, Eric; Mittelstadt, Jeff; Sulman, Noah

    2010-05-01

    Previous research measuring visual short-term memory (VSTM) suggests that the capacity for representing the layout of objects is fairly high. In four experiments, we further explored the capacity of VSTM for layout of objects, using the change detection method. In Experiment 1, participants retained most of the elements in displays of 4 to 8 elements. In Experiments 2 and 3, with up to 20 elements, participants retained many of them, reaching a capacity of 13.4 stimulus elements. In Experiment 4, participants retained much of a complex naturalistic scene. In most cases, increasing display size caused only modest reductions in performance, consistent with the idea of configural, variable-resolution grouping. The results indicate that participants can retain a substantial amount of scene layout information (objects and locations) in short-term memory. We propose that this is a case of remote visual understanding, where observers' ability to integrate information from a scene is paramount.
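
    Change detection studies of this kind are commonly scored with Cowan's K; that this particular study used this formula is our assumption. The rates below are illustrative numbers chosen only to show how a capacity like the reported 13.4 elements can arise.

```python
# Cowan's K for a single-probe change detection task:
# K = N * (hit rate - false alarm rate), where N is the display size.

def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of display elements held in visual short-term memory."""
    return set_size * (hit_rate - false_alarm_rate)

# e.g., display size 20 with hit rate .80 and false alarm rate .13
# (hypothetical values) yields K = 13.4
```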

  15. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing, in a bounded region, a set of individuals to which a dissimilarity measure and a statistical value are attached. This problem, which extends standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective...

  16. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    Science.gov (United States)

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Most studies revealing OBE have probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the presence of irrelevant-feature changes may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour while ignoring its shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  17. Modelling individual differences in visual categorization.

    Science.gov (United States)

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predicted true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  18. Sex differences in visual realism in drawings of animate and inanimate objects.

    Science.gov (United States)

    Lange-Küttner, Chris

    2011-10-01

    Sex differences in a visually realistic drawing style were examined using the model of a curvy cup as an inanimate object, and the Draw-A-Person test (DAP) as a task involving animate objects, with 7- to 12-year-old children (N = 60; 30 boys). Accurately drawing the internal detail of the cup--indicating interest in a depth feature--was not dependent on age in boys, but only in girls, as 7-year-old boys were already engaging with this cup feature. However, the age effect of the correct omission of an occluded handle--indicating a transition from realism in terms of function (intellectual realism) to one of appearance (visual realism)--was the same for both sexes. The correct omission of the occluded handle was correlated with bilingualism and drawing the internal cup detail in girls, but with drawing the silhouette contour of the cup in boys. Because a figure's silhouette enables object identification from a distance, while perception of detail and language occurs in nearer space, it was concluded that boys and girls may differ in the way they conceptualize depth in pictorial space, rather than in visual realism as such.

  19. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI

    Directory of Open Access Journals (Sweden)

    Ran Manor

    2015-12-01

    Full Text Available Brain-computer interfaces rely on machine learning algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for single-trial EEG classification in RSVP tasks. Deep neural networks have shown state-of-the-art performance in computer vision and speech recognition and thus have great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five-category RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms.

  20. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was in the level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.
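
    The combination step described above (bottom-up saliency modulated by top-down context to identify the attended object) can be sketched as a simple winner-take-all. The multiplicative combination, the object names, and all values below are assumptions for illustration, not the paper's actual formula.

```python
# Winner-take-all over bottom-up saliency scores scaled by top-down
# (goal-directed) context weights. The multiplicative combination and
# the example objects/values are hypothetical.

def attended_object(bottom_up, top_down):
    """Return the id of the object with the highest combined score;
    objects without a top-down entry keep a neutral weight of 1.0."""
    scores = {obj: s * top_down.get(obj, 1.0) for obj, s in bottom_up.items()}
    return max(scores, key=scores.get)
```

    With such a scheme, an object of modest bottom-up salience can still win if the user's spatial and temporal behavior assigns it a strong top-down weight, which is how the framework improves on a purely stimulus-driven saliency map.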

  1. Mid-level perceptual features distinguish objects of different real-world sizes.

    Science.gov (United States)

    Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A

    2016-01-01

    Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects--real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms--unrecognizable textures that loosely preserve an object's form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.

  2. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    Science.gov (United States)

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Why some colors appear more memorable than others: A model combining categories and particulars in color working memory.

    Science.gov (United States)

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Flombaum, Jonathan I

    2015-08-01

    Categorization with basic color terms is an intuitive and universal aspect of color perception. Yet research on visual working memory capacity has largely assumed that only continuous estimates within color space are relevant to memory. As a result, the influence of color categories on working memory remains unknown. We propose a dual content model of color representation in which color matches to objects that are either present (perception) or absent (memory) integrate category representations along with estimates of specific values on a continuous scale ("particulars"). We develop and test the model through 4 experiments. In a first experiment pair, participants reproduce a color target, both with and without a delay, using a recently influential estimation paradigm. In a second experiment pair, we use standard methods in color perception to identify boundary and focal colors in the stimulus set. The main results are that responses drawn from working memory are significantly biased away from category boundaries and toward category centers. Importantly, the same pattern of results is present without a memory delay. The proposed dual content model parsimoniously explains these results, and it should replace prevailing single content models in studies of visual working memory. More broadly, the model and the results demonstrate how the main consequence of visual working memory maintenance is the amplification of category related biases and stimulus-specific variability that originate in perception. (c) 2015 APA, all rights reserved.
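
    The dual content idea can be sketched as a weighted blend of a continuous "particular" and its category center. The linear form, the hue values, and the default weight below are illustrative assumptions, not the authors' fitted model.

```python
# Dual content sketch: the reported color blends the continuous estimate
# ("particular") with the center of its color category. The linear blend
# and the default weight are hypothetical, not fitted parameters.

def dual_content_estimate(particular_hue, category_center, w_category=0.3):
    """Higher w_category biases responses away from category boundaries
    and toward the category center, as in the results described above."""
    return (1.0 - w_category) * particular_hue + w_category * category_center
```

    A particular near a category boundary is pulled toward the center under this blend, reproducing the qualitative bias pattern the abstract reports both with and without a memory delay.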

  4. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform a......) that perceptual and memorial processes can be dissociated on both functional and anatomical grounds. No evidence was obtained for the involvement of the parietal lobes in the integration of single objects....

  5. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding.

    Science.gov (United States)

    Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio

    2012-08-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, though learning or momentary changes in volition, by the basal ganglia. A new explanation of

  6. Semantic and functional relationships among objects increase the capacity of visual working memory.

    Science.gov (United States)

    O'Donnell, Ryan E; Clement, Andrew; Brockmole, James R

    2018-04-12

    Visual working memory (VWM) has a limited capacity of approximately 3-4 visual objects. Current theories of VWM propose that a limited pool of resources can be flexibly allocated to objects, allowing them to be represented at varying levels of precision. Factors that influence the allocation of these resources, such as the complexity and perceptual grouping of objects, can thus affect the capacity of VWM. We sought to identify whether semantic and functional relationships between objects could influence the grouping of objects, thereby increasing the functional capacity of VWM. Observers viewed arrays of 8 to-be-remembered objects arranged into 4 pairs. We manipulated both the semantic association and functional interaction between the objects, then probed participants' memory for the arrays. When objects were semantically related, participants' memory for the arrays improved. Participants' memory further improved when semantically related objects were positioned to interact with each other. However, when we increased the spacing between the objects in each pair, the benefits of functional but not semantic relatedness were eliminated. These findings suggest that action-relevant properties of objects can increase the functional capacity of VWM, but only when objects are positioned to directly interact with each other. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  8. Dissociation of object and spatial visual processing pathways in human extrastriate cortex

    Energy Technology Data Exchange (ETDEWEB)

    Haxby, J.V.; Grady, C.L.; Horwitz, B.; Ungerleider, L.G.; Mishkin, M.; Carson, R.E.; Herscovitch, P.; Schapiro, M.B.; Rapoport, S.I. (National Institutes of Health, Bethesda, MD (USA))

    1991-03-01

    The existence and neuroanatomical locations of separate extrastriate visual pathways for object recognition and spatial localization were investigated in healthy young men. Regional cerebral blood flow was measured by positron emission tomography and bolus injections of H2(15)O, while subjects performed face matching, dot-location matching, or sensorimotor control tasks. Both visual matching tasks activated lateral occipital cortex. Face discrimination alone activated a region of occipitotemporal cortex that was anterior and inferior to the occipital area activated by both tasks. The spatial location task alone activated a region of lateral superior parietal cortex. Perisylvian and anterior temporal cortices were not activated by either task. These results demonstrate the existence of three functionally dissociable regions of human visual extrastriate cortex. The ventral and dorsal locations of the regions specialized for object recognition and spatial localization, respectively, suggest some homology between human and nonhuman primate extrastriate cortex, with displacement in human brain, possibly related to the evolution of phylogenetically newer cortical areas.

  9. Using a Topological Model in Psychology: Developing Sense and Choice Categories.

    Science.gov (United States)

    Mammen, Jens

    2016-06-01

    A duality of sense categories and choice categories is introduced to map two distinct but co-operating ways in which we as humans are relating actively to the world. We are sensing similarities and differences in our world of objects and persons, but we are also as bodies moving around in this world encountering, selecting, and attaching to objects beyond our sensory interactions and in this way also relating to the individual objects' history. This duality is necessary if we shall understand man as relating to the historical depth of our natural and cultural world, and to understand our cognitions and affections. Our personal affections and attachments, as well as our shared cultural values are centered around objects and persons chosen as reference points and landmarks in our lives, uniting and separating, not to be understood only in terms of sensory selections. The ambition is to bridge the gap between psychology as part of Naturwissenschaft and of Geisteswissenschaft, and at the same time establish a common frame for understanding cognition and affection, and our practical and cultural life (Mammen and Mironenko 2015). The duality of sense and choice categories can be described formally using concepts from modern mathematics, primarily topology, surmounting the reductions rooted in the mechanistic concepts from Renaissance science and mathematics. The formal description is based on 11 short and simple axioms held in ordinary language and visualized with instructive figures. The axioms are bridging psychology and mathematics and not only enriching psychology but also opening for a new interpretation of parts of the foundation of mathematics and logic.

  10. The Correlation between Subjective and Objective Visual Function Test in Optic Neuropathy Patients

    Directory of Open Access Journals (Sweden)

    Ungsoo Kim

    2012-10-01

    Full Text Available Purpose: To investigate the correlation between visual acuity and quantitative measurements of visual evoked potentials (VEP), optical coherence tomography (OCT), and visual field testing (VF) in optic neuropathy patients. Methods: We evaluated 28 patients with optic neuropathy. Patients who had a pale disc, visual acuity of less than 0.5, and an abnormal visual field defect were included. At the first visit, we performed visual acuity and VF as subjective methods, and OCT and VEP as objective methods. In the spectral-domain OCT, rim volume and average and temporal-quadrant retinal nerve fiber layer (RNFL) thickness were measured. Pattern VEP (N75, P100, and N135 latency, and P100 amplitude) and the Humphrey 24-2 visual field test (mean deviation and pattern standard deviation) were obtained. Using Spearman's correlation coefficient, the correlations between visual acuity and the various techniques were assessed. Results: Visual acuity was most correlated with the mean deviation of Humphrey perimetry.
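
    Spearman's correlation coefficient, used above, is the Pearson correlation computed on ranks. A plain-Python sketch for illustration (ties are ranked in encounter order here; a full implementation would assign average ranks to ties):

```python
# Spearman's rho as the Pearson correlation of the ranks of two samples.
# No tie correction: tied values get consecutive ranks in encounter order.

def spearman_rho(xs, ys):
    def ranks(vals):
        order = sorted(range(len(vals)), key=vals.__getitem__)
        r = [0.0] * len(vals)
        for rank, idx in enumerate(order, start=1):
            r[idx] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

    Because it operates on ranks, rho captures any monotonic relation between acuity and, say, mean deviation, without assuming linearity, which is why it suits clinical scales like these.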

  11. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both the auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Attend Location, centered in the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted as reflecting enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.

  12. Navon's classical paradigm concerning local and global processing relates systematically to visual object classification performance.

    Science.gov (United States)

    Gerlach, Christian; Poirel, Nicolas

    2018-01-10

    Forty years ago David Navon tried to tackle a central problem in psychology concerning the time course of perceptual processing: Do we first see the details (local level) followed by the overall outlay (global level), or is it rather the other way around? He did this by developing a now classical paradigm involving the presentation of compound stimuli: large letters composed of smaller letters. Despite the usefulness of this paradigm, it remains uncertain whether effects found with compound stimuli relate directly to visual object recognition. This uncertainty arises because compound stimuli are not actual objects but rather formations of elements, and because the elements that form the global shape of compound stimuli are not features of the global shape but rather objects in their own right. To examine the relationship between performance on Navon's paradigm and visual object processing, we derived two indexes from Navon's paradigm that reflect different aspects of the relationship between global and local processing. We find that individual differences on these indexes can explain a considerable amount of variance in two standard object classification paradigms, object decision and superordinate categorization, suggesting that Navon's paradigm does relate to visual object processing.
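
    The abstract does not give formulas for its two indexes; the definitions below are common derived measures from Navon's compound-letter paradigm (a global precedence index and an interference index) and are our assumption about what such indexes could look like, with hypothetical RTs in the usage comments.

```python
# Two common indexes derivable from Navon compound-letter RTs (in ms).
# Their exact definitions in this study are not stated, so these are
# illustrative assumptions.

def global_precedence(rt_global_ms, rt_local_ms):
    """Positive when the global level is identified faster than the local."""
    return rt_local_ms - rt_global_ms

def interference(rt_incongruent_ms, rt_congruent_ms):
    """RT cost of a conflicting letter at the unattended level."""
    return rt_incongruent_ms - rt_congruent_ms

# e.g., global_precedence(480, 530) -> 50 (global advantage of 50 ms)
```

    Indexes of this form give each participant two scalar scores that can then be correlated with object decision and superordinate categorization performance, as in the individual-differences analysis described above.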

  13. Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy.

    Science.gov (United States)

    Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael

    2013-01-16

    One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.

  14. Looking at anything that is green when hearing ‘frog’: How object surface colour and stored object colour knowledge influence language-mediated overt attention

    OpenAIRE

    Huettig, F.; Altmann, G.

    2011-01-01

    Three eye-tracking experiments investigated the influence of stored colour knowledge, perceived surface colour, and conceptual category of visual objects on language-mediated overt attention. Participants heard spoken target words whose concepts are associated with a diagnostic colour (e.g., "spinach"; spinach is typically green) while their eye movements were monitored to (a) objects associated with a diagnostic colour but presented in black and white (e.g., a black-and-white line drawing of...

  15. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No

  16. The ventral visual pathway: an expanded neural framework for the processing of object quality.

    Science.gov (United States)

    Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer

    2013-01-01

    Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.

  17. BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?

    Science.gov (United States)

    Bennedsen, Jens; Schulte, Carsten

    2010-01-01

    This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control-group experiment in an introductory programming…

  18. Cortical activation patterns during long-term memory retrieval of visually or haptically encoded objects and locations.

    Science.gov (United States)

    Stock, Oliver; Röder, Brigitte; Burke, Michael; Bien, Siegfried; Rösler, Frank

    2009-01-01

    The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n=10) or haptically (haptic encoding group, n=10) had to be retrieved from long-term memory. Participants learned associations between auditorily presented words and either meaningless objects or locations in a 3-D space. During the retrieval phase one day later, participants had to decide whether two auditorily presented words shared an association with a common object or location. Thus, perceptual stimulation during retrieval was always equivalent, whereas either visually or haptically encoded object or location associations had to be reactivated. Moreover, the number of associations fanning out from each word varied systematically, enabling a parametric increase of the number of reactivated representations. Recall of visual objects predominantly activated the left superior frontal gyrus and the intraparietal cortex, whereas visually learned locations activated the superior parietal cortex of both hemispheres. Retrieval of haptically encoded material activated the left medial frontal gyrus and the intraparietal cortex in the object condition, and the bilateral superior parietal cortex in the location condition. A direct test for modality-specific effects showed that visually encoded material activated more vision-related areas (BA 18/19) and haptically encoded material more motor and somatosensory-related areas. A conjunction analysis identified supramodal and material-unspecific activations within the medial and superior frontal gyrus and the superior parietal lobe including the intraparietal sulcus. These activation patterns strongly support the idea that code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that comprise sensory and motor processing systems.

  19. Modeling guidance and recognition in categorical search: bridging human and computer object detection.

    Science.gov (United States)

    Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris

    2013-10-08

    Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
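    The train-on-unblurred, test-on-blurred logic of the detector models can be sketched in a few lines. Here a nearest-centroid classifier stands in for the SVM detectors, random vectors stand in for visual features, and added noise stands in for blur; every value is invented, so this illustrates only the procedure, not the paper's models.

```python
import numpy as np

# Sketch: train a categorical "teddy bear" detector on clear exemplars,
# then test it on degraded (blurred) versions. A nearest-centroid rule
# replaces the SVMs; features and noise levels are invented.
rng = np.random.default_rng(1)
dim = 20
bear_proto, distractor_proto = rng.standard_normal((2, dim))

def sample(proto, n, noise):
    """Draw n noisy exemplars around a category prototype."""
    return proto + noise * rng.standard_normal((n, dim))

# "Unblurred" training exemplars cluster tightly around each prototype.
centroids = {
    "bear": sample(bear_proto, 50, 0.3).mean(axis=0),
    "distractor": sample(distractor_proto, 50, 0.3).mean(axis=0),
}

def classify(x):
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# "Blur" is simulated as extra feature noise, as in peripheral viewing.
blurred_bears = sample(bear_proto, 50, 0.8)
accuracy = np.mean([classify(x) == "bear" for x in blurred_bears])
```

    The design point the sketch preserves is the asymmetry between training and test: the detector never sees blurred exemplars, so its test accuracy reflects how well clear-object features survive degradation.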

  20. Attitudes and evaluative practices: category vs. item and subjective vs. objective constructions in everyday food assessments.

    Science.gov (United States)

    Wiggins, Sally; Potter, Jonathan

    2003-12-01

    In social psychology, evaluative expressions have traditionally been understood in terms of their relationship to, and as the expression of, underlying 'attitudes'. In contrast, discursive approaches have started to study evaluative expressions as part of varied social practices, considering what such expressions are doing rather than their relationship to attitudinal objects or other putative mental entities. In this study the latter approach will be used to examine the construction of food and drink evaluations in conversation. The data are taken from a corpus of family mealtimes recorded over a period of months. The aim of this study is to highlight two distinctions that are typically obscured in traditional attitude work ('subjective' vs. 'objective' expressions, category vs. item evaluations). A set of extracts is examined to document the presence of these distinctions in talk that evaluates food and the way they are used and rhetorically developed to perform particular activities (accepting/refusing food, complimenting the food provider, persuading someone to eat). The analysis suggests that researchers (a) should be aware of the potential significance of these distinctions; (b) should be cautious when treating evaluative terms as broadly equivalent and (c) should be cautious when blurring categories and instances. This analysis raises the broader question of how far evaluative practices may be specific to particular domains, and what this specificity might consist in. It is concluded that research in this area could benefit from starting to focus on the role of evaluations in practices and charting their association with specific topics and objects.

  1. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    Science.gov (United States)

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

    Visual categories are associated with eccentricity biases in high-order visual cortex: Faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common objects perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements such as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity orderly manner, we hypothesize that propagation of non-foveal information to mid and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Occupational Therapy Interventions Effect on Visual-Motor Skills in Children with Learning Disorders

    Directory of Open Access Journals (Sweden)

    Batoul Mandani

    2007-07-01

    Objective: Visual-motor skill is a part of visual perception that integrates visual processing skills with fine movements. Visual-motor dysfunction often causes problems in copying and writing. The purpose of this study was to investigate the effect of occupational therapy interventions on visual-motor skills in children with learning disorders. Materials & Methods: In this interventional, experimental study, 23 students with learning disorders (2nd, 3rd, and 4th grade) were selected and divided, through a randomized block method, into an intervention group (11 students) and a control group (12 students). Both groups were administered the Test of Visual-Motor Skills-Revised (TVMS-R). The intervention group then received occupational therapy for 16 sessions, after which both groups were administered the TVMS-R again. Data were analyzed using paired and independent t-tests. Results: The total TVMS-R score showed a statistically significant difference in visual-motor skills between the intervention and control groups (P<0.001). The test comprises 8 categories. Scores on categories 1, 3, 4, 6, and 8 showed that occupational therapy had a significant effect on visual analysis skills (P<0.005), and scores on categories 2, 5, and 7 showed a significant effect on visual-spatial skills (P<0.001). Conclusion: Occupational therapy interventions had a significant effect on visual-motor skills and their components (visual-spatial, visual analysis, visual-motor integration, and eye fixation skills).
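    The pre/post comparison reported above rests on the paired t-test. As a reminder of the computation (with invented scores, not the study's data), the statistic is the mean pre-to-post difference divided by its standard error:

```python
import math

# Hypothetical pre- and post-intervention TVMS-R totals for 11 children.
# The numbers are invented for illustration only.
pre  = [45, 50, 38, 42, 55, 48, 40, 44, 52, 47, 39]
post = [52, 58, 45, 50, 60, 55, 46, 51, 59, 54, 44]

def paired_t(a, b):
    """Paired-samples t statistic: mean difference over its standard error."""
    d = [y - x for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

t_stat = paired_t(pre, post)  # compare against the critical t for df = n - 1
```

    With 10 degrees of freedom, |t| above roughly 2.23 corresponds to P<0.05 two-tailed; in practice a library routine such as scipy.stats.ttest_rel also returns the exact P value.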

  3. Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    OpenAIRE

    Massouh, Nizar; Babiloni, Francesca; Tommasi, Tatiana; Young, Jay; Hawes, Nick; Caputo, Barbara

    2017-01-01

    Deep networks thrive when trained on large scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments which must train on the objects they encounter there...

  4. Prior Knowledge about Objects Determines Neural Color Representation in Human Visual Cortex.

    Science.gov (United States)

    Vandenbroucke, A R E; Fahrenfort, J J; Meuwese, J D I; Scholte, H S; Lamme, V A F

    2016-04-01

    To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural response to veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
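    The decoding step described above can be caricatured with a correlation-based classifier: templates are estimated from responses to veridical red and green, and a held-out pattern is labeled by the template it matches better. Everything below is simulated (the voxel count, templates, and mixing weights are invented), so it shows only the logic of the analysis, not the study's actual method.

```python
import numpy as np

# Toy multi-voxel "decoding": label an ambiguous-color pattern by its
# correlation with veridical red and green templates. All patterns are
# simulated; no real fMRI data or parameters from the study are used.
rng = np.random.default_rng(2)
voxels = 500
red_template = rng.standard_normal(voxels)
green_template = rng.standard_normal(voxels)

def decode(pattern):
    """Return the label of the template the pattern correlates with more."""
    r_red = np.corrcoef(pattern, red_template)[0, 1]
    r_green = np.corrcoef(pattern, green_template)[0, 1]
    return "red" if r_red > r_green else "green"

# An ambiguous color shown on a typical-red object (e.g., a tomato) is
# modeled as a pattern shifted toward the red template.
ambiguous_on_tomato = (0.6 * red_template + 0.4 * green_template
                       + 0.5 * rng.standard_normal(voxels))
label = decode(ambiguous_on_tomato)
```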

  5. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    Science.gov (United States)

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  6. Visual Short-Term Memory Capacity for Simple and Complex Objects

    Science.gov (United States)

    Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto

    2010-01-01

    Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…

  7. ERP signs of categorical and supra-categorical processing of visual information.

    Science.gov (United States)

    Zani, Alberto; Marsili, Giulia; Senerchia, Annapaola; Orlandi, Andrea; Citron, Francesca M M; Rizzi, Ezia; Proverbio, Alice M

    2015-01-01

    The aim of the present study was to investigate to what extent shared and distinct brain mechanisms are possibly subserving the processing of visual supra-categorical and categorical knowledge as observed with event-related potentials of the brain. Access time to these knowledge types was also investigated. Picture pairs of animals, objects, and mixed types were presented. Participants were asked to decide whether each pair contained pictures belonging to the same category (either animals or man-made objects) or to different categories by pressing one of two buttons. Response accuracy and reaction times (RTs) were also recorded. Both ERPs and RTs were grand-averaged separately for the same-different supra-categories and the animal-object categories. Behavioral performance was faster for more endomorphic pairs, i.e., animals vs. objects and same vs. different category pairs. For ERPs, a modulation of the earliest C1 and subsequent P1 responses to the same vs. different supra-category pairs, but not to the animal vs. object category pairs, was found. This finding supports the view that early afferent processing in the striate cortex can be boosted as a by-product of attention allocated to the processing of shapes and basic features that are mismatched, but not to their semantic quintessence, during same-different supra-categorical judgment. Most importantly, the fact that this processing accrual occurred independent of a traditional experimental condition requiring selective attention to a stimulus source out of the various sources addressed makes it conceivable that this processing accrual may arise from the attentional demand deriving from the alternate focusing of visual attention within and across stimulus categorical pairs' basic structural features. Additional posterior ERP reflections of the brain more prominently processing animal category and same-category pairs were observed at the N1 and N2 levels, respectively, as well as at a late positive complex level

  8. Auditory and phonetic category formation

    NARCIS (Netherlands)

    Goudbeek, Martijn; Cutler, A.; Smits, R.; Swingley, D.; Cohen, Henri; Lefebvre, Claire

    2017-01-01

    Among infants' first steps in language acquisition is learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Research in the visual modality has shown that for adults, some kinds

  9. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    Science.gov (United States)

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Spike synchrony reveals emergence of proto-objects in visual cortex.

    Science.gov (United States)

    Martin, Anne B; von der Heydt, Rüdiger

    2015-04-29

    Neurons at early stages of the visual cortex signal elemental features, such as pieces of contour, but how these signals are organized into perceptual objects is unclear. Theories have proposed that spiking synchrony between these neurons encodes how features are grouped (binding-by-synchrony), but recent studies did not find the predicted increase in synchrony with binding. Here we propose that features are grouped to "proto-objects" by intrinsic feedback circuits that enhance the responses of the participating feature neurons. This hypothesis predicts synchrony exclusively between feature neurons that receive feedback from the same grouping circuit. We recorded from neurons in macaque visual cortex and used border-ownership selectivity, an intrinsic property of the neurons, to infer whether or not two neurons are part of the same grouping circuit. We found that binding produced synchrony between same-circuit neurons, but not between other pairs of neurons, as predicted by the grouping hypothesis. In a selective attention task, synchrony emerged with ignored as well as attended objects, and higher synchrony was associated with faster behavioral responses, as would be expected from early grouping mechanisms that provide the structure for object-based processing. Thus, synchrony could be produced by automatic activation of intrinsic grouping circuits. However, the binding-related elevation of synchrony was weak compared with its random fluctuations, arguing against synchrony as a code for binding. In contrast, feedback grouping circuits encode binding by modulating the response strength of related feature neurons. Thus, our results suggest a novel coding mechanism that might underlie the proto-objects of perception. Copyright © 2015 the authors 0270-6474/15/356860-11$15.00/0.

  11. Visual objects and universal meanings: AIDS posters and the politics of globalisation and history.

    Science.gov (United States)

    Stein, Claudia; Cooter, Roger

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum 'afterlife' as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today's application of aesthetic display for the purpose of making 'global connections' does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics.

  12. Visual Objects and Universal Meanings: AIDS Posters and the Politics of Globalisation and History

    Science.gov (United States)

    STEIN, CLAUDIA; COOTER, ROGER

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum ‘afterlife’ as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today’s application of aesthetic display for the purpose of making ‘global connections’ does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics. PMID:23752866

  13. Abnormalities of Object Visual Processing in Body Dysmorphic Disorder

    Science.gov (United States)

    Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.

    2013-01-01

    Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897

  14. Category I structures program

    International Nuclear Information System (INIS)

    Endebrock, E.G.; Dove, R.C.

    1981-01-01

    The objective of the Category I Structures Program is to supply experimental and analytical information needed to assess the structural capacity of Category I structures (excluding the reactor containment building). Because the shear wall is a principal element of a Category I structure, and because relatively little experimental information is available on shear walls, it was selected as the test element for the experimental program. The large load capacities of shear walls in Category I structures dictate that the experimental tests be conducted on small-size shear wall structures that incorporate the general construction details and characteristics of as-built shear walls.

  15. Where vision meets memory: prefrontal-posterior networks for visual object constancy during categorization and recognition.

    Science.gov (United States)

    Schendan, Haline E; Stern, Chantal E

    2008-07-01

    Objects seen from unusual relative to more canonical views require more time to categorize and recognize, and, according to object model verification theories, additionally recruit prefrontal processes for cognitive control that interact with parietal processes for mental rotation. To test this using functional magnetic resonance imaging, people categorized and recognized known objects from unusual and canonical views. Canonical views activated some components of a default network more on categorization than recognition. Activation to unusual views showed that both ventral and dorsal visual pathways, and prefrontal cortex, have key roles in visual object constancy. Unusual views activated object-sensitive and mental rotation (and not saccade) regions in ventrocaudal intraparietal, transverse occipital, and inferotemporal sulci, and ventral premotor cortex for verification processes of model testing on any task. A collateral-lingual sulci "place" area activated for mental rotation, working memory, and unusual views on correct recognition and categorization trials to accomplish detailed spatial matching. Ventrolateral prefrontal cortex and object-sensitive lateral occipital sulcus activated for mental rotation and unusual views on categorization more than recognition, supporting verification processes of model prediction. This visual knowledge framework integrates vision and memory theories to explain how distinct prefrontal-posterior networks enable meaningful interactions with objects in diverse situations.

  16. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    Science.gov (United States)

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
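
    The time-resolved decoding logic described above can be illustrated with a minimal sketch: train and test a classifier independently at every timepoint, then read off the latency at which stimulus position first becomes decodable. Everything here (the synthetic EEG array sizes, the nearest-centroid decoder, the signal injected from timepoint 20 onward) is an assumption for the demo, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
pos = rng.integers(0, 2, n_trials)                  # two stimulus positions
eeg = rng.normal(size=(n_trials, n_channels, n_times))
eeg[pos == 1, :5, 20:] += 1.0                       # position signal from t=20 on

train = np.arange(n_trials) % 2 == 0                # split-half cross-validation
test = ~train

def decode(t):
    """Nearest-centroid decoding of position from the channel pattern at time t."""
    X_tr, X_te = eeg[train, :, t], eeg[test, :, t]
    c0 = X_tr[pos[train] == 0].mean(axis=0)
    c1 = X_tr[pos[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return float((pred == pos[test]).mean())

accuracy = np.array([decode(t) for t in range(n_times)])
onset = int(np.argmax(accuracy > 0.75))             # first decodable timepoint
```

    The paper compares such position-decoding time-courses between predictable and random trajectories; a latency shift between the curves is the signature of anticipation.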

  17. Visuospatial and visual object cognition in early Parkinson's disease

    OpenAIRE

    Possin, Katherine L.

    2007-01-01

    Recent evidence suggests that Parkinson's disease (PD) may be associated with greater impairment in visuospatial working memory as compared to visual object working memory. The nature of this selective impairment is not well understood, however, in part because successful performance on working memory tasks requires numerous cognitive processes. For example, the impairment may be limited to either the encoding or maintenance aspects of spatial working memory. Further, it is unknown at this po...

  18. Object-based target templates guide attention during visual search

    OpenAIRE

    Berggren, Nick; Eimer, Martin

    2018-01-01

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target f...

  19. Attribute-based classification for zero-shot visual object categorization.

    Science.gov (United States)

    Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan

    2014-03-01

    We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
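
    The attribute-based classification idea can be sketched as follows: attribute detectors are learned from seen classes only, and an unseen class is recognized by matching its predicted attribute vector to a known class signature. The classes, attributes, per-dimension toy features, and threshold detectors below are illustrative assumptions, not the authors' classifiers or the Animals with Attributes data:

```python
import numpy as np

rng = np.random.default_rng(1)

# class -> binary attribute signature (hypothetical attributes:
# "striped", "aquatic", "black-and-white")
signatures = {"zebra": [1, 0, 1], "whale": [0, 1, 0], "tiger": [1, 0, 0]}
seen = ["zebra", "whale"]          # classes with training images
unseen = "tiger"                   # zero-shot class: no training images at all

def sample_images(cls, n=100):
    """Toy image features: one noisy dimension per attribute."""
    mu = np.array(signatures[cls], dtype=float)
    return rng.normal(mu, 0.25, size=(n, len(mu)))

# 1) learn one detector (here: a threshold) per attribute from seen classes only
X = np.vstack([sample_images(c) for c in seen])
y = np.vstack([np.tile(signatures[c], (100, 1)) for c in seen])
thresholds = np.array([(X[y[:, a] == 1, a].mean()
                        + X[y[:, a] == 0, a].mean()) / 2 for a in range(3)])

# 2) recognise unseen-class images via their predicted attribute vector
def predict_class(x):
    attrs = (x > thresholds).astype(int)
    return min(signatures, key=lambda c: int(np.abs(attrs - signatures[c]).sum()))

acc = np.mean([predict_class(x) == unseen for x in sample_images(unseen)])
```

    No tiger image is ever used for training; the class is reached purely through its attribute description, which is the essence of the approach.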

  20. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

    Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for a progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.
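
    A minimal sketch of the progressive-LOD idea: a tree whose inner nodes hold coarse group geometry and whose leaves hold detail, traversed until the viewer is too far away for further refinement to pay off. The node layout, the distance threshold, and the refinement rule are hypothetical; the paper's BuildingTree, CSG generalization, and MongoDB storage are considerably richer:

```python
from dataclasses import dataclass, field

@dataclass
class BuildingNode:
    """One node of a (hypothetical) BuildingTree: a building or a group."""
    name: str
    lod: int                      # coarseness level: 0 = whole-group block
    geometry: str                 # placeholder for a CSG/mesh reference
    children: list = field(default_factory=list)

def select_lod(node, viewer_distance, threshold=100.0):
    """Progressively refine: descend while the viewer is close enough.
    Each extra level halves the distance at which refinement pays off."""
    if not node.children or viewer_distance > threshold / (2 ** node.lod):
        return [node.geometry]
    out = []
    for child in node.children:
        out.extend(select_lod(child, viewer_distance, threshold))
    return out

city = BuildingNode("block", 0, "block-extrusion", [
    BuildingNode("bldg-A", 1, "bldg-A-shell",
                 [BuildingNode("bldg-A-roof", 2, "bldg-A-detail")]),
    BuildingNode("bldg-B", 1, "bldg-B-shell"),
])

far = select_lod(city, viewer_distance=500)    # coarse block only
near = select_lod(city, viewer_distance=10)    # detailed geometries
```

    A distant viewer receives one coarse extrusion for the whole block, while a nearby viewer receives per-building (and, where available, per-part) geometry.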

  1. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    Full Text Available The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
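
    The spirit of the kernel-analysis methodology (generalization accuracy as a function of permitted classifier complexity) can be sketched with a linear-kernel ridge classifier whose regularisation is progressively relaxed, applied to two synthetic "representations" of differing quality. The synthetic data and the λ grid are assumptions for illustration; the paper's actual extension differs in detail:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 40
y = rng.integers(0, 2, n) * 2 - 1                  # ±1 object categories

def make_rep(signal):
    """Synthetic population representation with a given signal strength."""
    X = rng.normal(size=(n, d))
    X[:, 0] += signal * y                          # category info in one dimension
    return X

def kernel_analysis(X, labels, lambdas):
    """Held-out accuracy of a kernel ridge classifier as regularisation is
    relaxed, i.e. as the permitted decoder complexity increases."""
    tr = np.arange(n) % 2 == 0                     # split-half train/test
    K = X @ X.T                                    # linear kernel
    accs = []
    for lam in lambdas:
        alpha = np.linalg.solve(K[np.ix_(tr, tr)] + lam * np.eye(int(tr.sum())),
                                labels[tr].astype(float))
        pred = np.sign(K[np.ix_(~tr, tr)] @ alpha)
        accs.append(float((pred == labels[~tr]).mean()))
    return np.array(accs)

lambdas = [1e4, 1e2, 1e0]                          # high -> low regularisation
strong = kernel_analysis(make_rep(1.5), y, lambdas)  # "IT-like" representation
weak = kernel_analysis(make_rep(0.3), y, lambdas)    # weaker representation
```

    Comparing such accuracy-versus-complexity curves, rather than single accuracy numbers, is what lets the paper put neural recordings and model features on a common footing.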

  2. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills With Executive Function and Social Behavior.

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-12-01

    The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom during the preschool year. Ninety-two children aged 3 to 5 years old (M age = 4.31 years) were recruited to participate. Comprehensive measures of visual-motor integration skills, object manipulation skills, executive function, and social behaviors were administered in the fall and spring of the preschool year. Our findings indicated that children who had better visual-motor integration skills in the fall had better executive function scores (B = 0.47 [0.20], p < .05) after controlling for gender, Head Start status, and site location, but not after controlling for children's baseline levels of executive function. In addition, children who demonstrated better object manipulation skills in the fall showed significantly stronger social behavior in their classrooms (as rated by teachers) in the spring, including more self-control (B = 0.03 [0.00], p < .05), after controlling for social behavior in the fall and other covariates. Children's visual-motor integration and object manipulation skills in the fall have modest to moderate relations with executive function and social behaviors later in the preschool year. These findings have implications for early learning initiatives and school readiness.

  3. BOLD repetition decreases in object-responsive ventral visual areas depend on spatial attention.

    Science.gov (United States)

    Eger, E; Henson, R N A; Driver, J; Dolan, R J

    2004-08-01

    Functional imaging studies of priming-related repetition phenomena have become widely used to study neural object representation. Although blood oxygenation level-dependent (BOLD) repetition decreases can sometimes be observed without awareness of repetition, any role for spatial attention in BOLD repetition effects remains largely unknown. We used fMRI in 13 healthy subjects to test whether BOLD repetition decreases for repeated objects in ventral visual cortices depend on allocation of spatial attention to the prime. Subjects performed a size-judgment task on a probe object that had been attended or ignored in a preceding prime display of 2 lateralized objects. Reaction times showed faster responses when the probe was the same object as the attended prime, independent of the view tested (identical vs. mirror image). No behavioral effect was evident from unattended primes. BOLD repetition decreases for attended primes were found in lateral occipital and fusiform regions bilaterally, which generalized across identical and mirror-image repeats. No repetition decreases were observed for ignored primes. Our results suggest a critical role for attention in achieving visual representations of objects that lead to both BOLD signal decreases and behavioral priming on repeated presentation.

  4. Holding an object one is looking at : Kinesthetic information on the object's distance does not improve visual judgments of its size

    NARCIS (Netherlands)

    Brenner, Eli; Van Damme, Wim J.M.; Smeets, Jeroen B.J.

    1997-01-01

    Visual judgments of distance are often inaccurate. Nevertheless, information on distance must be procured if retinal image size is to be used to judge an object's dimensions. In the present study, we examined whether kinesthetic information about an object's distance - based on the posture of the

  5. A Multi-Objective Approach to Visualize Proportions and Similarities Between Individuals by Rectangular Maps

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual...... area and adjacency requirements, this visualization problem is formulated as a three-objective Mixed Integer Nonlinear Problem. The first objective seeks to maximize the number of true adjacencies that the rectangular map is able to reproduce, the second one is to minimize the number of false...

  6. Music and words in the visual cortex: The impact of musical expertise.

    Science.gov (United States)

    Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent

    2017-01-01

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. An object-oriented framework for medical image registration, fusion, and visualization.

    Science.gov (United States)

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first one is for volume image grouping and re-sampling, the second one is for 2D registration and fusion, and the last one is for visualization of single images as well as registered volume images.
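
    The model-view-controller structure the framework is built on can be sketched in a few lines: the model owns the image data and notifies attached views, views render on notification, and controllers translate user actions into model updates. The class names and the observer-style `update` call below are illustrative, not the framework's actual API:

```python
class ImageModel:
    """Model: holds the loaded images and notifies registered views on change."""
    def __init__(self):
        self._views, self._images = [], {}
    def attach(self, view):
        self._views.append(view)
    def add_image(self, name, data):
        self._images[name] = data
        for v in self._views:           # observer pattern: push the change out
            v.update(self)
    @property
    def images(self):
        return dict(self._images)

class FusionView:
    """View: renders the current image set (here it just records the names)."""
    def __init__(self):
        self.rendered = []
    def update(self, model):
        self.rendered = sorted(model.images)

class Controller:
    """Controller: translates user actions into model updates."""
    def __init__(self, model):
        self.model = model
    def load(self, name, data):
        self.model.add_image(name, data)

model, view = ImageModel(), FusionView()
model.attach(view)
Controller(model).load("CT", [[0, 1], [1, 0]])
Controller(model).load("PET", [[1, 1], [0, 0]])
```

    Decoupling the three roles this way is what lets the same model drive a grouping tool, a 2D fusion view, and a volume renderer without duplicated state.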

  8. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    Science.gov (United States)

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.
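
    The detection-sensitivity measure d' used in all three experiments is computed from hit and false-alarm rates as d' = z(HR) − z(FAR). A minimal sketch (the trial counts are made up, and the log-linear correction for extreme rates is one common convention, not necessarily the authors'):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps the rates away from 0 and 1."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

congruent = d_prime(45, 5, 10, 40)     # better detection after a matching prime
incongruent = d_prime(35, 15, 10, 40)  # poorer detection after a mismatch
```

    A lower d' on incongruent trials, with the false-alarm rate held constant, is exactly the pattern the experiments report.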

  9. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    Directory of Open Access Journals (Sweden)

    Lewis Forder

    Full Text Available The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.

  10. Visual agnosia and posterior cerebral artery infarcts: an anatomical-clinical study.

    Science.gov (United States)

    Martinaud, Olivier; Pouliquen, Dorothée; Gérardin, Emmanuel; Loubeyre, Maud; Hirsbein, David; Hannequin, Didier; Cohen, Laurent

    2012-01-01

    To evaluate systematically the cognitive deficits following posterior cerebral artery (PCA) strokes, especially agnosic visual disorders, and to study anatomical-clinical correlations. We investigated 31 patients at the chronic stage (mean duration of 29.1 months post infarct) with standardized cognitive tests. New experimental tests were used to assess visual impairments for words, faces, houses, and objects. Forty-one healthy subjects participated as controls. Brain lesions were normalized, combined, and related to occipitotemporal areas responsive to specific visual categories, including words (VWFA), faces (FFA and OFA), houses (PPA) and common objects (LOC). Lesions were located in the left hemisphere in 15 patients, in the right in 13, and bilaterally in 3. Visual field defects were found in 23 patients. Twenty patients had a visual disorder in at least one of the experimental tests (9 with faces, 10 with houses, 7 with phones, 3 with words). Six patients had a deficit just for a single category of stimulus. The regions of maximum overlap of brain lesions associated with a deficit for a given category of stimuli were contiguous to the peaks of the corresponding functional areas as identified in normal subjects. However, the strength of anatomical-clinical correlations was greater for words than for faces or houses, probably due to the stronger lateralization of the VWFA, as compared to the FFA or the PPA. Agnosic visual disorders following PCA infarcts are more frequent than previously reported. Dedicated batteries of tests, such as those developed here, are required to identify such deficits, which may escape clinical notice. The spatial relationships of lesions and of regions activated in normal subjects predict the nature of the deficits, although individual variability and bilaterally represented systems may blur those correlations.

  11. Visual agnosia and posterior cerebral artery infarcts: an anatomical-clinical study.

    Directory of Open Access Journals (Sweden)

    Olivier Martinaud

    Full Text Available BACKGROUND: To evaluate systematically the cognitive deficits following posterior cerebral artery (PCA) strokes, especially agnosic visual disorders, and to study anatomical-clinical correlations. METHODS AND FINDINGS: We investigated 31 patients at the chronic stage (mean duration of 29.1 months post infarct) with standardized cognitive tests. New experimental tests were used to assess visual impairments for words, faces, houses, and objects. Forty-one healthy subjects participated as controls. Brain lesions were normalized, combined, and related to occipitotemporal areas responsive to specific visual categories, including words (VWFA), faces (FFA and OFA), houses (PPA) and common objects (LOC). Lesions were located in the left hemisphere in 15 patients, in the right in 13, and bilaterally in 3. Visual field defects were found in 23 patients. Twenty patients had a visual disorder in at least one of the experimental tests (9 with faces, 10 with houses, 7 with phones, 3 with words). Six patients had a deficit just for a single category of stimulus. The regions of maximum overlap of brain lesions associated with a deficit for a given category of stimuli were contiguous to the peaks of the corresponding functional areas as identified in normal subjects. However, the strength of anatomical-clinical correlations was greater for words than for faces or houses, probably due to the stronger lateralization of the VWFA, as compared to the FFA or the PPA. CONCLUSIONS: Agnosic visual disorders following PCA infarcts are more frequent than previously reported. Dedicated batteries of tests, such as those developed here, are required to identify such deficits, which may escape clinical notice. The spatial relationships of lesions and of regions activated in normal subjects predict the nature of the deficits, although individual variability and bilaterally represented systems may blur those correlations.

  12. How Do Observer's Responses Affect Visual Long-Term Memory?

    Science.gov (United States)

    Makovski, Tal; Jiang, Yuhong V.; Swallow, Khena M.

    2013-01-01

    How does responding to an object affect explicit memory for visual information? The close theoretical relationship between action and perception suggests that items that require a response should be better remembered than items that require no response. However, conclusive evidence for this claim is lacking, as semantic coherence, category size,…

  13. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  14. Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants

    Science.gov (United States)

    Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.

    2014-01-01

    Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: Six-month-old infants can store information in VSTM about only a single object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…

  15. Error-Driven Learning in Visual Categorization and Object Recognition: A Common-Elements Model

    Science.gov (United States)

    Soto, Fabian A.; Wasserman, Edward A.

    2010-01-01

    A wealth of empirical evidence has now accumulated concerning animals' categorizing photographs of real-world objects. Although these complex stimuli have the advantage of fostering rapid category learning, they are difficult to manipulate experimentally and to represent in formal models of behavior. We present a solution to the representation…

  16. From groups to categorial algebra introduction to protomodular and mal’tsev categories

    CERN Document Server

    Bourn, Dominique

    2017-01-01

    This book gives a thorough and entirely self-contained, in-depth introduction to a specific approach to group theory, in a large sense of that word. The focus lies on the relationships which a group may have with other groups, via “universal properties”, a view on that group “from the outside”. This method of categorical algebra is actually not limited to the study of groups alone, but applies equally well to other similar categories of algebraic objects. By introducing protomodular categories and Mal’tsev categories, which form a larger class, the book shows how the structural properties of the category Gp of groups emerge from four very basic observations about the algebraic literal calculus and how, studied for themselves at the conceptual categorical level, they lead to the main striking features of the category Gp of groups. Hardly any previous knowledge of category theory is assumed, and just a little experience with standard algebraic structures such as groups and monoids. Examples and exercises...

  17. Bundles of C*-categories and duality

    OpenAIRE

    Vasselli, Ezio

    2005-01-01

    We introduce the notions of multiplier C*-category and continuous bundle of C*-categories, as the categorical analogues of the corresponding C*-algebraic notions. Every symmetric tensor C*-category with conjugates is a continuous bundle of C*-categories, with base space the spectrum of the C*-algebra associated with the identity object. We classify tensor C*-categories with fibre the dual of a compact Lie group in terms of suitable principal bundles. This also provides a classification for ce...

  18. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    Science.gov (United States)

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274

  19. A unified computational model of the development of object unity, object permanence, and occluded object trajectory perception.

    Science.gov (United States)

    Franz, A; Triesch, J

    2010-12-01

    The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.
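
    The core behaviour (an internal estimate that keeps predicting a trajectory while the input is occluded, so that the object's reappearance is anticipated) can be illustrated far more simply than with the paper's recurrent network; the constant-velocity extrapolation rule below is a toy assumption standing in for the learned prediction:

```python
def track_with_prediction(observations):
    """Maintain an internal position/velocity estimate; while the input is
    occluded (None), keep extrapolating instead of updating from input."""
    pos, vel = observations[0], 0
    trace = [pos]
    for obs in observations[1:]:
        if obs is None:                 # occluded: rely on the internal model
            pos = pos + vel
        else:                           # visible: update position and velocity
            vel = obs - pos
            pos = obs
        trace.append(pos)
    return trace

# object moves +1 per step and disappears behind an occluder for 3 steps
obs = [0, 1, 2, None, None, None, 6]
trace = track_with_prediction(obs)      # internal estimate bridges the gap
```

    Because the internal estimate continues through the occlusion, the predicted position matches the object at reappearance, which is the behaviour the network in the paper acquires by learning to predict its sensory input.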

  20. Dynamic information processing states revealed through neurocognitive models of object semantics

    Science.gov (United States)

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  1. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects

    Science.gov (United States)

    Soldan, Anja; Mangels, Jennifer A.; Cooper, Lynn A.

    2008-01-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object-decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object-decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalize to priming of unfamiliar visual objects. Implications for theoretical models of object-decision priming are discussed. PMID:18821167

  2. General object recognition is specific: Evidence from novel and familiar objects.

    Science.gov (United States)

    Richler, Jennifer J; Wilmer, Jeremy B; Gauthier, Isabel

    2017-09-01

    In tests of object recognition, individual differences typically correlate modestly but nontrivially across familiar categories (e.g. cars, faces, shoes, birds, mushrooms). In theory, these correlations could reflect either global, non-specific mechanisms, such as general intelligence (IQ), or more specific mechanisms. Here, we introduce two separate methods for effectively capturing category-general performance variation, one that uses novel objects and one that uses familiar objects. In each case, we show that category-general performance variance is unrelated to IQ, thereby implicating more specific mechanisms. The first approach examines three newly developed novel object memory tests (NOMTs). We predicted that NOMTs would exhibit more shared, category-general variance than familiar object memory tests (FOMTs) because novel objects, unlike familiar objects, lack category-specific environmental influences (e.g. exposure to car magazines or botany classes). This prediction held, and remarkably, virtually none of the substantial shared variance among NOMTs was explained by IQ. Also, while NOMTs correlated nontrivially with two FOMTs (faces, cars), these correlations were smaller than among NOMTs and no larger than between the face and car tests themselves, suggesting that the category-general variance captured by NOMTs is specific not only relative to IQ, but also, to some degree, relative to both face and car recognition. The second approach averaged performance across multiple FOMTs, which we predicted would increase category-general variance by averaging out category-specific factors. This prediction held, and as with NOMTs, virtually none of the shared variance among FOMTs was explained by IQ. Overall, these results support the existence of object recognition mechanisms that, though category-general, are specific relative to IQ and substantially separable from face and car recognition. They also add sensitive, well-normed NOMTs to the tools available to study object recognition.

  3. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    Science.gov (United States)

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
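
    The multivariate pattern analysis described here trains a classifier to decode stimulus colour from patterns of voxel activity. A minimal sketch of the idea, using a nearest-centroid classifier on synthetic "voxel" data (a hypothetical illustration, not the study's fMRI pipeline or toolchain):

```python
import random

def centroid(patterns):
    """Mean pattern (one value per 'voxel') of a list of patterns."""
    return [sum(vals) / len(patterns) for vals in zip(*patterns)]

def decode(train_by_class, pattern):
    """Nearest-centroid classification: assign the pattern to the class
    whose mean training pattern is closest in squared Euclidean distance.
    A minimal stand-in for the classifiers typically used in MVPA."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {label: centroid(ps) for label, ps in train_by_class.items()}
    return min(centroids, key=lambda label: sqdist(centroids[label], pattern))

# Synthetic patterns: two colours evoke slightly different voxel responses
random.seed(0)
def simulate(base, n=20, noise=0.5):
    return [[b + random.gauss(0, noise) for b in base] for _ in range(n)]

red = simulate([1.0, 0.0, 0.5, 0.2])
green = simulate([0.0, 1.0, 0.2, 0.5])
train = {"red": red[:15], "green": green[:15]}
correct = sum(decode(train, p) == "red" for p in red[15:]) + \
          sum(decode(train, p) == "green" for p in green[15:])
print(f"decoding accuracy on held-out patterns: {correct}/10")
```

Above-chance accuracy on held-out patterns is the evidence that a region carries information about the decoded variable, which is how the paper infers where colour and position are represented.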

  4. Short-term storage capacity for visual objects depends on expertise

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik; Kyllingsbæk, Søren

    2012-01-01

    Visual short-term memory (VSTM) has traditionally been thought to have a very limited capacity of around 3–4 objects. However, recently several researchers have argued that VSTM may be limited in the amount of information retained rather than by a specific number of objects. Here we present a study...... of the effect of long-term practice on VSTM capacity. We investigated four age groups ranging from pre-school children to adults and measured the change in VSTM capacity for letters and pictures. We found a clear increase in VSTM capacity for letters with age but not for pictures. Our results indicate that VSTM...

  5. The neural basis of precise visual short-term memory for complex recognisable objects.

    Science.gov (United States)

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.
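
    The representational similarity analysis described compares activity patterns across repetitions of the same item; one simple variability index is the mean correlation distance (1 minus Pearson r) over all repetition pairs. A hypothetical sketch of that index (not the study's pipeline, and the patterns below are made up):

```python
def pearson(a, b):
    """Pearson correlation between two equal-length patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def repetition_variability(repetitions):
    """Mean correlation distance (1 - Pearson r) across all pairs of
    repetitions of one item: higher = more variable representation."""
    dists = [1 - pearson(repetitions[i], repetitions[j])
             for i in range(len(repetitions))
             for j in range(i + 1, len(repetitions))]
    return sum(dists) / len(dists)

# A pattern that is stable across repetitions vs. a more variable one
stable = [[1, 2, 3, 4], [1.1, 2.0, 3.1, 3.9], [0.9, 2.1, 2.9, 4.1]]
variable = [[1, 2, 3, 4], [3, 1, 4, 2], [2, 4, 1, 3]]
print(repetition_variability(stable) < repetition_variability(variable))
```

On this index, the paper's finding would appear as larger repetition distances for recognisable than for unrecognisable objects.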

  6. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  7. Automatic guidance of attention during real-world visual search.

    Science.gov (United States)

    Seidl-Rathkopf, Katharina N; Turk-Browne, Nicholas B; Kastner, Sabine

    2015-08-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, because the features, locations, and times of appearance of relevant objects often are not known in advance. Thus, a mechanism by which attention is automatically biased toward information that is potentially relevant may be helpful. We tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of nonmatching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty.

  8. Automatic guidance of attention during real-world visual search

    Science.gov (United States)

    Seidl-Rathkopf, Katharina N.; Turk-Browne, Nicholas B.; Kastner, Sabine

    2015-01-01

    Looking for objects in cluttered natural environments is a frequent task in everyday life. This process can be difficult, as the features, locations, and times of appearance of relevant objects are often not known in advance. A mechanism by which attention is automatically biased toward information that is potentially relevant may thus be helpful. Here we tested for such a mechanism across five experiments by engaging participants in real-world visual search and then assessing attentional capture for information that was related to the search set but was otherwise irrelevant. Isolated objects captured attention while preparing to search for objects from the same category embedded in a scene, as revealed by lower detection performance (Experiment 1A). This capture effect was driven by a central processing bottleneck rather than the withdrawal of spatial attention (Experiment 1B), occurred automatically even in a secondary task (Experiment 2A), and reflected enhancement of matching information rather than suppression of non-matching information (Experiment 2B). Finally, attentional capture extended to objects that were semantically associated with the target category (Experiment 3). We conclude that attention is efficiently drawn towards a wide range of information that may be relevant for an upcoming real-world visual search. This mechanism may be adaptive, allowing us to find information useful for our behavioral goals in the face of uncertainty. PMID:25898897

  9. Chromatic Perceptual Learning but No Category Effects without Linguistic Input.

    Science.gov (United States)

    Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L

    2016-01-01

    Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is stimulus specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.

  10. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments

    Science.gov (United States)

    Vuksanovic, Jasmina; Bjekic, Jovana; Radivojevic, Natalija

    2015-01-01

    A body of research shows that grammatical gender, although an arbitrary category, is viewed as the system with its own meaning. However, the question remains to what extent does grammatical gender influence shaping our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results obtained…

  11. Object integration requires attention: visual search for Kanizsa figures in parietal extinction

    OpenAIRE

    Gögler, N.; Finke, K.; Keller, I.; Muller, Hermann J.; Conci, M.

    2016-01-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective att...

  12. Studying the added value of visual attention in objective image quality metrics based on eye movement data

    NARCIS (Netherlands)

    Liu, H.; Heynderickx, I.E.J.

    2009-01-01

    Current research on image quality assessment tends to include visual attention in objective metrics to further enhance their performance. A variety of computational models of visual attention are implemented in different metrics, but their accuracy in representing human visual attention is not fully

  13. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning), making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals and scenes) for which colour can be used to discriminate old/new status.

  14. Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness.

    Science.gov (United States)

    Brady, Timothy F; Konkle, Talia; Oliva, Aude; Alvarez, George A

    2009-01-01

    A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear [1,2]. These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low [1]. However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item [3]. In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).

  15. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information, have limited effect on this.

  16. The Characteristics and Limits of Rapid Visual Categorization

    Science.gov (United States)

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity.

  17. Grounding grammatical categories: attention bias in hand space influences grammatical congruency judgment of Chinese nominal classifiers.

    Science.gov (United States)

    Lobben, Marit; D'Ascenzo, Stefania

    2015-01-01

    Embodied cognitive theories predict that linguistic conceptual representations are grounded and continually represented in real world, sensorimotor experiences. However, there is an on-going debate on whether this also holds for abstract concepts. Grammar is the archetype of abstract knowledge, and therefore constitutes a test case against embodied theories of language representation. Former studies have largely focussed on lexical-level embodied representations. In the present study we take the grounding-by-modality idea a step further by using reaction time (RT) data from the linguistic processing of nominal classifiers in Chinese. We take advantage of an independent body of research, which shows that attention in hand space is biased. Specifically, objects near the hand consistently yield shorter RTs as a function of readiness for action on graspable objects within reaching space, and the same biased attention inhibits attentional disengagement. We predicted that this attention bias would equally apply to the graspable object classifier but not to the big object classifier. Chinese speakers (N = 22) judged grammatical congruency of classifier-noun combinations in two conditions: graspable object classifier and big object classifier. We found that RTs for the graspable object classifier were significantly faster in congruent combinations, and significantly slower in incongruent combinations, than those for the big object classifier. There was no main effect on grammatical violations, but rather an interaction effect of classifier type. Thus, we demonstrate here grammatical category-specific effects pertaining to the semantic content and by extension the visual and tactile modality of acquisition underlying the acquisition of these categories. We conclude that abstract grammatical categories are subjected to the same mechanisms as general cognitive and neurophysiological processes and may therefore be grounded.

  18. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives--a field perspective in which individuals experience the memory through their own eyes, or an observer perspective in which individuals experience the memory from the viewpoint of an observer in which they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  19. Encoding of faces and objects into visual working memory: an event-related brain potential study.

    Science.gov (United States)

    Meinhardt-Injac, Bozana; Persike, Malte; Berti, Stefan

    2013-09-11

    Visual working memory (VWM) is an important prerequisite for cognitive functions, but little is known on whether the general perceptual processing advantage for faces also applies to VWM processes. The aim of the present study was (a) to test whether there is a general advantage for face stimuli in VWM and (b) to unravel whether this advantage is related to early sensory processing stages. To address these questions, we compared encoding of faces and complex nonfacial objects into VWM within a combined behavioral and event-related brain potential (ERP) study. In detail, we tested whether the N170 ERP component - which is associated with face-specific holistic processing - is affected by memory load for faces or whether it might be involved in WM encoding of any complex object. Participants performed a same-different task with either face or watch stimuli and with two different levels of memory load. Behavioral measures show an advantage for faces on the level of VWM, mirrored in higher estimated VWM capacity (i.e. Cowan's K) for faces compared with watches. In the ERP, the N170 amplitude was enhanced for faces compared with watches. However, the N170 was not modulated by working memory load either for faces or for watches. In contrast, the P3b component was affected by memory load irrespective of the stimulus category. Taken together, the results suggest that the VWM advantage for faces is not reflected at the sensory stages of stimulus processing, but rather at later higher-level processes as reflected by the P3b component.
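
    Cowan's K, used above as the estimate of VSTM capacity, is commonly computed for single-probe change detection as K = set size × (hit rate − false-alarm rate). A minimal sketch (the rates below are illustrative, not the study's data):

```python
def cowans_k(set_size: int, hit_rate: float, fa_rate: float) -> float:
    """Cowan's K for single-probe change detection:
    K = set_size * (hit_rate - false_alarm_rate)."""
    return set_size * (hit_rate - fa_rate)

# Hypothetical rates: a higher capacity estimate for faces than watches
print(round(cowans_k(4, hit_rate=0.85, fa_rate=0.10), 2))  # 3.0
print(round(cowans_k(4, hit_rate=0.70, fa_rate=0.15), 2))  # 2.2
```

Because K corrects hits with false alarms, a higher K for faces reflects more items retained rather than a more liberal response criterion.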

  20. The 5-HT2A/1A agonist psilocybin disrupts modal object completion associated with visual hallucinations.

    Science.gov (United States)

    Kometer, Michael; Cahn, B Rael; Andel, David; Carter, Olivia L; Vollenweider, Franz X

    2011-03-01

    Recent findings suggest that the serotonergic system and particularly the 5-HT2A/1A receptors are implicated in visual processing and possibly the pathophysiology of visual disturbances including hallucinations in schizophrenia and Parkinson's disease. To investigate the role of 5-HT2A/1A receptors in visual processing the effect of the hallucinogenic 5-HT2A/1A agonist psilocybin (125 and 250 μg/kg vs. placebo) on the spatiotemporal dynamics of modal object completion was assessed in normal volunteers (n = 17) using visual evoked potential recordings in conjunction with topographic-mapping and source analysis. These effects were then considered in relation to the subjective intensity of psilocybin-induced visual hallucinations quantified by psychometric measurement. Psilocybin dose-dependently decreased the N170 and, in contrast, slightly enhanced the P1 component selectively over occipital electrode sites. The decrease of the N170 was most apparent during the processing of incomplete object figures. Moreover, during the time period of the N170, the overall reduction of the activation in the right extrastriate and posterior parietal areas correlated positively with the intensity of visual hallucinations. These results suggest a central role of the 5-HT2A/1A-receptors in the modulation of visual processing. Specifically, a reduced N170 component was identified as potentially reflecting a key process of 5-HT2A/1A receptor-mediated visual hallucinations and aberrant modal object completion potential. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. [Recognition of visual objects under forward masking. Effects of categorical similarity of test and masking stimuli].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images--animals and nonliving objects--under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response times were slowest, with high response-time variance, when the target and masking stimuli belonged to the same category. These effects were clearer in the animal-recognition task than in the recognition of nonliving objects. We suppose that the observed effects reflect interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  2. Social Vision: Visual cues communicate categories to observers

    OpenAIRE

    Johnson, Kerri L

    2009-01-01

    This information ranges from appreciating category membership to evaluating more enduring traits and dispositions. These aspects of social perception appear to be highly automated, some would even call them obligatory, and they are heavily influenced by two sources of information: the face and the body. From minimal information such as brief exposure to the face or degraded images of dynamic body motion, social judgments are made with remarkable efficiency and, at times, surprising accuracy.

  3. Evaluating color descriptors for object and scene recognition.

    Science.gov (United States)

    van de Sande, Koen E A; Gevers, Theo; Snoek, Cees G M

    2010-09-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
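    The recommended OpponentSIFT descriptor computes SIFT over the opponent color space, in which two channels are invariant to light-intensity shifts while the third carries intensity. A sketch of just the per-pixel color transform (the full descriptor pipeline is omitted; function name is hypothetical):

    ```python
    from math import sqrt

    def rgb_to_opponent(r, g, b):
        """Opponent color transform underlying OpponentSIFT.
        O1 and O2 are invariant to light-intensity shifts (adding a
        constant offset to R, G, and B leaves them unchanged);
        O3 carries the intensity information."""
        o1 = (r - g) / sqrt(2)
        o2 = (r + g - 2 * b) / sqrt(6)
        o3 = (r + g + b) / sqrt(3)
        return o1, o2, o3
    ```

    Applying this transform channel-wise before SIFT extraction is what gives the descriptor its partial photometric invariance, matching the paper's taxonomy of invariance properties.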

  4. Figure–ground organization and the emergence of proto-objects in the visual cortex

    OpenAIRE

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition their responses a...

  5. The Role of Shape in Semantic Memory Organization of Objects: An Experimental Study Using PI-Release.

    Science.gov (United States)

    van Weelden, Lisanne; Schilperoord, Joost; Swerts, Marc; Pecher, Diane

    2015-01-01

    Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process is dependent on the contextual relevance of this information. We used the Proactive Interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently be retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved both by (non-perceptual) semantic and (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was found to be strongest for objects from identical semantic categories.

  6. Visual perception and interception of falling objects: a review of evidence for an internal model of gravity.

    Science.gov (United States)

    Zago, Myrka; Lacquaniti, Francesco

    2005-09-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.

  7. Convergence semigroup categories

    Directory of Open Access Journals (Sweden)

    Gary Richardson

    2013-09-01

    Full Text Available Properties of the category consisting of all objects of the form (X, S, λ) are investigated, where X is a convergence space, S is a commutative semigroup, and λ: X × S → X is a continuous action. A “generalized quotient” of each object is defined without making the usual assumption that for each fixed g ∈ S, λ(·, g): X → X is an injection.

  8. Visual Processing of Object Velocity and Acceleration

    Science.gov (United States)

    1994-02-04

    A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1230. Watamaniuk, S.N.J. and McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Sciences (Suppl), 34, 1364.

  9. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows their use for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  10. Reciprocal Engagement Between a Scientist and Visual Displays

    Science.gov (United States)

    Nolasco, Michelle Maria

    In this study the focus of investigation was the reciprocal engagement between a professional scientist and the visual displays with which he interacted. Visual displays are considered inextricable from everyday scientific endeavors, and their interpretation requires a "back-and-forthness" between the viewers and the objects being viewed. The query that drove this study was: How does a scientist engage with visual displays during the explanation of his understanding of extremely small biological objects? The conceptual framework was based in embodiment, where the scientist's talk, gesture, and body position were observed and microanalyzed. The data consisted of open-ended interviews that positioned the scientist to interact with visual displays when he explained the structure and function of different sub-cellular features. Upon microanalyzing the scientist's talk, gesture, and body position during his interactions with two different visual displays, four themes were uncovered: Naming, Layering, Categorizing, and Scaling. Naming occurred when the scientist added markings to a pre-existing, hand-drawn visual display. The markings had meaning as stand-alone labels and iconic symbols. Also, the markings transformed the pre-existing visual display, which resulted in its function as a new visual object. Layering occurred when the scientist gestured over images so that his gestures aligned with one or more of the image's features, but did not touch the actual visual display. Categorizing occurred when the scientist used contrasting categories, e.g. straight vs. not straight, to explain his understanding of different characteristics that the small biological objects held. Scaling occurred when the scientist used gesture to resize an image's features so that they fit his bodily scale. Three main points were drawn from this study. First, the scientist employed a variety of embodied strategies—coordinated talk, gesture, and body position—when he explained the structure

  11. How semantic category modulates preschool children's visual memory.

    Science.gov (United States)

    Giganti, Fiorenza; Viggiano, Maria Pia

    2015-01-01

    The dynamic interplay between perception and memory has been explored in preschool children by presenting filtered stimuli regarding animals and artifacts. The identification of filtered images was markedly influenced by both prior exposure and the semantic nature of the stimuli. The identification of animals required less physical information than artifacts did. Our results corroborate the notion that the human attention system evolves to reliably develop definite category-specific selection criteria by which living entities are monitored in different ways.

  12. The cost of selective attention in category learning: Developmental differences between adults and infants

    Science.gov (United States)

    Best, Catherine A.; Yim, Hyungwook; Sloutsky, Vladimir M.

    2013-01-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6–8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. PMID:23773914

  13. Do object refixations during scene viewing indicate rehearsal in visual working memory?

    Science.gov (United States)

    Zelinsky, Gregory J; Loschky, Lester C; Dickinson, Christopher A

    2011-05-01

    Do refixations serve a rehearsal function in visual working memory (VWM)? We analyzed refixations from observers freely viewing multiobject scenes. An eyetracker was used to limit the viewing of a scene to a specified number of objects fixated after the target (intervening objects), followed by a four-alternative forced choice recognition test. Results showed that the probability of target refixation increased with the number of fixated intervening objects, and these refixations produced a 16% accuracy benefit over the first five intervening-object conditions. Additionally, refixations most frequently occurred after fixations on only one to two other objects, regardless of the intervening-object condition. These behaviors could not be explained by random or minimally constrained computational models; a VWM component was required to completely describe these data. We explain these findings in terms of a monitor-refixate rehearsal system: The activations of object representations in VWM are monitored, with refixations occurring when these activations decrease suddenly.

  14. A note on thick subcategories of stable derived categories

    OpenAIRE

    Krause, Henning; Stevenson, Greg

    2013-01-01

    For an exact category having enough projective objects, we establish a bijection between thick subcategories containing the projective objects and thick subcategories of the stable derived category. Using this bijection, we classify thick subcategories of finitely generated modules over strict local complete intersections and produce generators for the category of coherent sheaves on a separated Noetherian scheme with an ample family of line bundles.

  15. Visual hallucinatory syndromes and the anatomy of the visual brain.

    Science.gov (United States)

    Santhouse, A M; Howard, R J; ffytche, D H

    2000-10-01

    We have set out to identify phenomenological correlates of cerebral functional architecture within Charles Bonnet syndrome (CBS) hallucinations by looking for associations between specific hallucination categories. Thirty-four CBS patients were examined with a structured interview/questionnaire to establish the presence of 28 different pathological visual experiences. Associations between categories of pathological experience were investigated by an exploratory factor analysis. Twelve of the pathological experiences partitioned into three segregated syndromic clusters. The first cluster consisted of hallucinations of extended landscape scenes and small figures in costumes with hats; the second, hallucinations of grotesque, disembodied and distorted faces with prominent eyes and teeth; and the third, visual perseveration and delayed palinopsia. The three visual psycho-syndromes mirror the segregation of hierarchical visual pathways into streams and suggest a novel theoretical framework for future research into the pathophysiology of neuropsychiatric syndromes.

  16. Spatial constancy of attention across eye movements is mediated by the presence of visual objects.

    Science.gov (United States)

    Lisi, Matteo; Cavanagh, Patrick; Zorzi, Marco

    2015-05-01

    Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results.

  17. What Is the Unit of Visual Attention? Object for Selection, but Boolean Map for Access

    Science.gov (United States)

    Huang, Liqiang

    2010-01-01

    In the past 20 years, numerous theories and findings have suggested that the unit of visual attention is the object. In this study, I first clarify 2 different meanings of unit of visual attention, namely the unit of access in the sense of measurement and the unit of selection in the sense of division. In accordance with this distinction, I argue…

  18. Visualization of the tire-soil interaction area by means of ObjectARX programming interface

    Science.gov (United States)

    Mueller, W.; Gruszczyński, M.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.

    2014-04-01

    Data visualization, which is important for data analysis, becomes problematic when large data sets generated via computer simulations are available. This problem concerns, among others, models that describe the geometry of tire-soil interaction. For the purpose of a graphical representation of this area and the implementation of various geometric calculations, the authors have developed a plug-in application for AutoCAD based on the latest technologies, including ObjectARX, LINQ and the Visual Studio platform. The selected programming tools offer a wide variety of IT structures that enable data visualization and analysis and are important, e.g., in model verification.

  19. Effects of object shape on the visual guidance of action.

    Science.gov (United States)

    Eloka, Owino; Franz, Volker H

    2011-04-22

    Little is known of how visual coding of the shape of an object affects grasping movements. We addressed this issue by investigating the influence of shape perturbations on grasping. Twenty-six participants grasped a disc or a bar that were chosen such that they could in principle be grasped with identical movements (i.e., relevant sizes were identical such that the final grips consisted of identical separations of the fingers and no parts of the objects constituted obstacles for the movement). Nevertheless, participants took object shape into account and grasped the bar with a larger maximum grip aperture and a different hand angle than the disc. In 20% of the trials, the object changed its shape from bar to disc or vice versa early or late during the movement. If there was enough time (early perturbations), grasps were often adapted in flight to the new shape. These results show that the motor system takes into account even small and seemingly irrelevant changes of object shape and adapts the movement in a fine-grained manner. Although this adaptation might seem computationally expensive, we presume that its benefits (e.g., a more comfortable and more accurate movement) outweigh the costs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  1. The cost of selective attention in category learning: developmental differences between adults and infants.

    Science.gov (United States)

    Best, Catherine A; Yim, Hyungwook; Sloutsky, Vladimir M

    2013-10-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6-8months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Visual variability affects early verb learning.

    Science.gov (United States)

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  3. Object-Based Benefits without Object-Based Representations

    OpenAIRE

    Alvarez, George Angelo; Fougnie, Daryl; Cormiea, Sarah M

    2012-01-01

    The organization of visual information into objects strongly influences visual memory: Displays with objects defined by two features (e.g. color, orientation) are easier to remember than displays with twice as many objects defined by one feature (Olson & Jiang, 2002). Existing theories suggest that this ‘object-benefit’ is based on object-based limitations in working memory: because a limited number of objects can be stored, packaging features together so that fewer objects have to be remembe...

  4. Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.

    Science.gov (United States)

    Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L

    2015-08-01

    Recent research has shown that visual short-term memory (VSTM) can substantially be improved when the to-be-remembered objects are split in 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    OpenAIRE

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this parad...

  6. Hippocampal activation during episodic and semantic memory retrieval: comparing category production and category cued recall.

    Science.gov (United States)

    Ryan, Lee; Cox, Christine; Hayes, Scott M; Nadel, Lynn

    2008-01-01

    Whether or not the hippocampus participates in semantic memory retrieval has been the focus of much debate in the literature. However, few neuroimaging studies have directly compared hippocampal activation during semantic and episodic retrieval tasks that are well matched in all respects other than the source of the retrieved information. In Experiment 1, we compared hippocampal fMRI activation during a classic semantic memory task, category production, and an episodic version of the same task, category cued recall. Left hippocampal activation was observed in both episodic and semantic conditions, although other regions of the brain clearly distinguished the two tasks. Interestingly, participants reported using retrieval strategies during the semantic retrieval task that relied on autobiographical and spatial information; for example, visualizing themselves in their kitchen while producing items for the category kitchen utensils. In Experiment 2, we considered whether the use of these spatial and autobiographical retrieval strategies could have accounted for the hippocampal activation observed in Experiment 1. Categories were presented that elicited one of three retrieval strategy types, autobiographical and spatial, autobiographical and nonspatial, and neither autobiographical nor spatial. Once again, similar hippocampal activation was observed for all three category types, regardless of the inclusion of spatial or autobiographical content. We conclude that the distinction between semantic and episodic memory is more complex than classic memory models suggest.

  7. Motivational Objects in Natural Scenes (MONS: A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

    Full Text Available In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object”) being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  8. Development of assistive technology for the visually impaired: use of the male condom

    OpenAIRE

    Barbosa, Giselly Oseni Laurentino; Wanderley, Luana Duarte; Reboucas, Cristiana Brasil de Almeida; Oliveira, Paula Marciana Pinheiro de; Pagliuca, Lorita Marlena Freitag

    2013-01-01

    The objectives were to develop and evaluate an assistive technology for the use of the male condom by visually impaired men. It was a technology development study with the participation of seven subjects. Three workshops were performed between April and May of 2010; they were all filmed and the statements of the participants were transcribed and analyzed by content. Three categories were established: Sexuality of the visually impaired; Utilization of the text, For avoiding STDs, condoms we wi...

  9. Coding of visual object features and feature conjunctions in the human brain.

    Science.gov (United States)

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

    Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process: while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  10. Typicality effects in artificial categories: is there a hemisphere difference?

    Science.gov (United States)

    Richards, L G; Chiarello, C

    1990-07-01

    In category classification tasks, typicality effects are usually found: accuracy and reaction time depend upon distance from a prototype. In this study, subjects learned either verbal or nonverbal dot pattern categories, followed by a lateralized classification task. Comparable typicality effects were found in both reaction time and accuracy across visual fields for both verbal and nonverbal categories. Both hemispheres appeared to use a similarity-to-prototype matching strategy in classification. This indicates that merely having a verbal label does not differentiate classification in the two hemispheres.

  11. Discrete capacity limits and neuroanatomical correlates of visual short-term memory for objects and spatial locations.

    Science.gov (United States)

    Konstantinou, Nikos; Constantinidou, Fofi; Kanai, Ryota

    2017-02-01

    Working memory is responsible for keeping information in mind when it is no longer in view, linking perception with higher cognitive functions. Despite this crucial role, short-term maintenance of visual information is severely limited. Research suggests that capacity limits in visual short-term memory (VSTM) are correlated with sustained activity in distinct brain areas. Here, we investigated whether variability in the structure of the brain is reflected in individual differences in behavioral capacity estimates for spatial and object VSTM. Behavioral capacity estimates were calculated separately for spatial and object information using a novel adaptive staircase procedure and were found to be unrelated, supporting domain-specific VSTM capacity limits. Voxel-based morphometry (VBM) analyses revealed dissociable neuroanatomical correlates of spatial versus object VSTM. Interindividual variability in spatial VSTM was reflected in the gray matter density of the inferior parietal lobule. In contrast, object VSTM was reflected in the gray matter density of the left insula. These dissociable findings highlight the importance of considering domain-specific estimates of VSTM capacity and point to the crucial brain regions that limit VSTM capacity for different types of visual information. Hum Brain Mapp 38:767-778, 2017. © 2016 Wiley Periodicals, Inc.
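    The abstract mentions a novel adaptive staircase procedure without detailing it. As a hedged illustration only, a generic 1-up/1-down staircase (an assumption, not the authors' exact rule) shows how such procedures hover around a capacity limit:

```python
# Hypothetical 1-up/1-down staircase: the memory set size grows after a
# correct trial and shrinks after an error, so it oscillates near the
# participant's capacity limit.  The rule and bounds are illustrative.
def staircase(responses, start=2, lo=1, hi=8):
    size, track = start, []
    for correct in responses:
        track.append(size)
        size = min(hi, size + 1) if correct else max(lo, size - 1)
    return track

# simulated run: three correct trials, then alternating errors and hits
trace = staircase([True, True, True, False, True, False])
```

    A capacity estimate could then be read off as, for example, the mean set size over the final reversals.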

  12. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    Science.gov (United States)

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain-general ability v as the available computational resources (number of hidden units) in the mapping from input to label, and experience as the frequency of individual exemplars of an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  13. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    Science.gov (United States)

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  14. Modular categories and 3-manifold invariants

    International Nuclear Information System (INIS)

    Turaev, V.G.

    1992-01-01

    The aim of this paper is to give a concise introduction to the theory of knot invariants and 3-manifold invariants which generalize the Jones polynomial and which may be considered as a mathematical version of the Witten invariants. Such a theory was introduced by N. Reshetikhin and the author on the basis of the theory of quantum groups. Here we use more general algebraic objects, specifically, ribbon and modular categories. Such categories in particular arise as the categories of representations of quantum groups. The notion of a modular category, interesting in itself, is closely related to the notion of a modular tensor category in the sense of G. Moore and N. Seiberg. For simplicity we restrict ourselves in this paper to the case of closed 3-manifolds.

  15. Humans use visual and remembered information about object location to plan pointing movements

    NARCIS (Netherlands)

    Brouwer, A.-M.; Knill, D.C.

    2009-01-01

    We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to

  16. Category Theory as a Formal Mathematical Foundation for Model-Based Systems Engineering

    KAUST Repository

    Mabrok, Mohamed

    2017-01-09

    In this paper, we introduce Category Theory as a formal foundation for model-based systems engineering. A generalised view of the system based on category theory is presented, in which any system can be considered as a category: the objects of the category represent all the elements and components of the system, and the arrows (morphisms) represent the relations between these components. The olog is introduced as a formal language for describing a given real-world situation and writing requirements. A simple example is provided.
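    The core idea above (system components as objects, relations as composable morphisms) can be sketched minimally; the component names and relation labels below are hypothetical, not taken from the paper.

```python
# Minimal sketch of a system-as-category: objects are components,
# morphisms are relations, and two morphisms compose only when the
# target of one matches the source of the other.
class Morphism:
    def __init__(self, name, source, target):
        self.name, self.source, self.target = name, source, target

    def compose(self, other):
        # self after other: apply `other` first, then `self`
        if other.target != self.source:
            raise ValueError("morphisms are not composable")
        return Morphism(f"{self.name}.{other.name}", other.source, self.target)

# hypothetical system components (objects) and relations (arrows)
f = Morphism("sends_data_to", "Sensor", "Controller")
g = Morphism("commands", "Controller", "Actuator")
h = g.compose(f)  # composite relation: Sensor -> Actuator
```

    Associativity of composition and an identity morphism per object are the remaining category axioms; they hold trivially for this representation.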

  17. Scale-adaptive Local Patches for Robust Visual Object Tracking

    Directory of Open Access Journals (Sweden)

    Kang Sun

    2014-04-01

    Full Text Available This paper discusses the problem of robustly tracking objects which undergo rapid and dramatic scale changes. To overcome the weakness of global appearance models, we present a novel scheme that combines the object's global and local appearance features. The local feature is a set of local patches that geometrically constrain the changes in the target's appearance. In order to adapt to the object's geometric deformation, local patches can be removed and added online. The addition of these patches is constrained by global features such as color, texture and motion. The global visual features are updated via the stable local patches during tracking. To deal with scale changes, we adapt the scale of the patches in addition to adapting the object bounding box. We evaluate our method by comparing it to several state-of-the-art trackers on publicly available datasets. The experimental results on challenging sequences confirm that, by using these scale-adaptive local patches and global properties, our tracker outperforms related trackers in many cases, with a smaller failure rate as well as better accuracy.
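    As a rough sketch of how global and local cues might be fused when scoring a candidate location, consider the toy scorer below; the weighting scheme and the patch match scores are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hist_similarity(h1, h2):
    # Bhattacharyya-style similarity between normalised colour histograms
    return float(np.sum(np.sqrt(h1 * h2)))

def candidate_score(cand_hist, template_hist, patch_scores, alpha=0.5):
    g = hist_similarity(cand_hist, template_hist)  # global appearance cue
    l = float(np.mean(patch_scores))               # stable local patches vote
    return alpha * g + (1 - alpha) * l             # assumed linear fusion

template = np.ones(16) / 16  # uniform 16-bin template histogram
score = candidate_score(template, template, [0.9, 0.8, 0.95])
```

    In a full tracker, patches whose score stays low would be dropped and new ones added online, as the abstract describes.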

  18. The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects.

    Science.gov (United States)

    Sigurdardottir, Heida M; Sheinberg, David L

    2015-07-01

    The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.

  19. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    Full Text Available With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC, an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  20. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
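    The enrich-on-ingest step (attaching sentiment and census/admin boundary codes as geo-tweets enter the system) can be caricatured in a few lines; the toy lexicon and lat/lon bucketing below are hypothetical stand-ins for the real classifier and boundary lookup, which the paper does not detail.

```python
# Schematic enrichment of a streaming geo-tweet (illustrative only).
SENTIMENT = {"great": 1, "good": 1, "bad": -1, "awful": -1}

def enrich(tweet):
    words = tweet["text"].lower().split()
    tweet["sentiment"] = sum(SENTIMENT.get(w, 0) for w in words)
    lat, lon = tweet["coords"]
    tweet["admin_code"] = f"{int(lat)}_{int(lon)}"  # placeholder boundary code
    return tweet

stream = [{"text": "Great coffee in Boston", "coords": (42.36, -71.06)}]
enriched = [enrich(t) for t in stream]
```

    In the BOP itself this step feeds Kafka, with Solr/Lucene indexing the enriched records for spatio-temporal query.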

  1. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    Full Text Available We present a simplified algorithm for localizing an object using multiple visual images obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane that creates a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate, which may come from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity. In addition, nonlinear distortion of the digital imaging devices is compensated during the iterative process. Finally, the performance in several scenarios is evaluated and analyzed in both indoor and outdoor environments.
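    The iterative idea (start from a rough reference estimate and refine it until localization error is minimized) can be illustrated with a toy 2D version using bearing lines and averaged projections; this is an assumed stand-in, not the paper's parallel-projection model.

```python
import math

def project_onto_line(p, origin, direction):
    # orthogonal projection of point p onto the line origin + t*direction
    t = (p[0] - origin[0]) * direction[0] + (p[1] - origin[1]) * direction[1]
    return (origin[0] + t * direction[0], origin[1] + t * direction[1])

def localize(cameras, bearings, x0, iters=50):
    # iteratively move the estimate toward the intersection of bearing lines
    x = x0
    for _ in range(iters):
        proj = [project_onto_line(x, c, u) for c, u in zip(cameras, bearings)]
        x = (sum(p[0] for p in proj) / len(proj),
             sum(p[1] for p in proj) / len(proj))
    return x

# two hypothetical cameras whose unit bearing lines cross at (1, 1)
cams = [(0.0, 0.0), (2.0, 0.0)]
dirs = [(math.sqrt(0.5), math.sqrt(0.5)), (-math.sqrt(0.5), math.sqrt(0.5))]
x = localize(cams, dirs, x0=(0.5, 0.2))  # rough reference estimate
```

    Each iteration cuts the residual error, mirroring how the paper's method reduces localization error while also compensating device nonlinearity.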

  2. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors.

  3. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been numerously demonstrated by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information-such as historical overview, detailed description, and location-are missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects, which are located in Bulgaria-a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable for any type of spherical or cylindrical images and can be easily followed and applied in various domains. 
After a visual and metric assessment of the panoramas and the evaluation of

  4. Colour categories are reflected in sensory stages of colour perception when stimulus issues are resolved

    Science.gov (United States)

    He, Xun; Franklin, Anna

    2017-01-01

    Debate exists about the time course of the effect of colour categories on visual processing. We investigated the effect of colour categories for two groups who differed in whether they categorised a blue-green boundary colour as the same- or different-category to a reliably-named blue colour and a reliably-named green colour. Colour differences were equated in just-noticeable differences to be equally discriminable. We analysed event-related potentials for these colours elicited on a passive visual oddball task and investigated the time course of categorical effects on colour processing. Support for category effects was found 100 ms after stimulus onset, and over frontal sites around 250 ms, suggesting that colour naming affects both early sensory and later stages of chromatic processing. PMID:28542426

  5. Visualization: A Tool for Enhancing Students' Concept Images of Basic Object-Oriented Concepts

    Science.gov (United States)

    Cetin, Ibrahim

    2013-01-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey…

  6. Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?

    Science.gov (United States)

    Uttal, David; Franconeri, Steven

    2016-01-01

    Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects—the shift account of relation processing—which states that relations such as ‘above’ or ‘below’ are extracted by shifting visual attention upward or downward in space. If so, then shifts of attention should improve the representation of spatial relations, compared to a control condition of identity memory. Participants viewed a pair of briefly flashed objects and were then tested on either the relative spatial relation or identity of one of those objects. Using eye tracking to reveal participants’ voluntary shifts of attention over time, we found that when initial fixation was on neither object, relational memory showed an absolute advantage for the object following an attention shift, while identity memory showed no advantage for either object. This result is consistent with the shift account of relation processing. When initial fixation began on one of the objects, identity memory strongly benefited this fixated object, while relational memory only showed a relative benefit for objects following an attention shift. This result is also consistent, although not as uniquely, with the shift account of relation processing. Taken together, we suggest that the attention shift account provides a mechanistic explanation for the overall results. This account can potentially serve as the common mechanism underlying both linguistic and perceptual representations of spatial relations. PMID:27695104

  7. A configural effect in visual short-term memory for features from different parts of an object.

    Science.gov (United States)

    Delvenne, Jean-François; Bruyer, Raymond

    2006-09-01

    Previous studies have shown that change detection performance is improved when the visual display holds features (e.g., a colour and an orientation) that are grouped into different parts of the same object compared to when they are all spatially separated (Xu, 2002a, 2002b). These findings indicate that visual short-term memory (VSTM) encoding can be "object based". Recently, however, it has been demonstrated that changing the orientation of an item could affect the spatial configuration of the display (Jiang, Chun, & Olson, 2004), which may have an important influence on change detection. The perceptual grouping of features into an object obviously reduces the amount of distinct spatial relations in a display and hence the complexity of the spatial configuration. In the present study, we ask whether the object-based encoding benefit observed in previous studies may reflect the use of configural coding rather than the outcome of a true object-based effect. The results show that when configural cues are removed, the object-based encoding benefit remains for features (i.e., colour and orientation) from different parts of an object, but is significantly reduced. These findings support the view that memory for features from different parts of an object can benefit from object-based encoding, but the use of configural coding significantly helps enlarge this effect.

  8. Working memory capacity accounts for the ability to switch between object-based and location-based allocation of visual attention.

    Science.gov (United States)

    Bleckley, M Kathryn; Foster, Jeffrey L; Engle, Randall W

    2015-04-01

    Bleckley, Durso, Crutchfield, Engle, and Khanna (Psychonomic Bulletin & Review, 10, 884-889, 2003) found that visual attention allocation differed between groups high or low in working memory capacity (WMC). High-span, but not low-span, subjects showed an invalid-cue cost during a letter localization task when the letter appeared closer to fixation than the cue, but not when the letter appeared farther from fixation than the cue. This suggests that low-spans allocated attention as a spotlight, whereas high-spans allocated their attention to objects. In this study, we tested whether utilizing object-based visual attention is a resource-limited process that is difficult for low-span individuals. In the first experiment, we tested the use of object- versus location-based attention in high- and low-span subjects, with half of the subjects completing a demanding secondary load task. Under load, high-spans were no longer able to use object-based visual attention. A second experiment supported the hypothesis that these differences in allocation were due to high-spans using object-based allocation, whereas low-spans used location-based allocation.

  9. Right fusiform response patterns reflect visual object identity rather than semantic similarity.

    Science.gov (United States)

    Bruffaerts, Rose; Dupont, Patrick; De Grauwe, Sophie; Peeters, Ronald; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik

    2013-12-01

    We previously reported the neuropsychological consequences of a lesion confined to the middle and posterior part of the right fusiform gyrus (case JA) causing a partial loss of knowledge of visual attributes of concrete entities in the absence of category-selectivity (animate versus inanimate). We interpreted this in the context of a two-step model that distinguishes structural description knowledge from associative-semantic processing and implicated the lesioned area in the former process. To test this hypothesis in the intact brain, multi-voxel pattern analysis was used in a series of event-related fMRI studies in a total of 46 healthy subjects. We predicted that activity patterns in this region would be determined by the identity of rather than the conceptual similarity between concrete entities. In a prior behavioral experiment features were generated for each entity by more than 1000 subjects. Based on a hierarchical clustering analysis the entities were organised into 3 semantic clusters (musical instruments, vehicles, tools). Entities were presented as words or pictures. With foveal presentation of pictures, cosine similarity between fMRI response patterns in right fusiform cortex appeared to reflect both the identity of and the semantic similarity between the entities. No such effects were found for words in this region. The effect of object identity was invariant for location, scaling, orientation axis and color (grayscale versus color). It also persisted for different exemplars referring to a same concrete entity. The apparent semantic similarity effect however was not invariant. This study provides further support for a neurobiological distinction between structural description knowledge and processing of semantic relationships and confirms the role of right mid-posterior fusiform cortex in the former process, in accordance with previous lesion evidence. © 2013.
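    The pattern analysis in this study compares multi-voxel fMRI response patterns by cosine similarity; a minimal numeric illustration (the "voxel" values below are made up) shows the intended contrast between object identity and a merely similar entity:

```python
import math

def cosine(a, b):
    # cosine similarity between two response-pattern vectors
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# hypothetical multi-voxel patterns: same entity on two trials vs another entity
piano_trial_1 = [0.9, 0.1, 0.4]
piano_trial_2 = [0.8, 0.2, 0.5]
truck         = [0.1, 0.9, 0.3]

same = cosine(piano_trial_1, piano_trial_2)
diff = cosine(piano_trial_1, truck)
```

    An identity effect, as reported for right fusiform cortex, corresponds to `same` reliably exceeding `diff` across stimulus transformations.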

  10. CHURCH, Category, and Speciation

    Directory of Open Access Journals (Sweden)

    Rinderknecht Jakob Karl

    2018-01-01

    Full Text Available The Roman Catholic definition of “church”, especially as applied to groups of Protestant Christians, creates a number of well-known difficulties. The similarly complex category, “species,” provides a model for applying this term so as to neither lose the centrality of certain examples nor draw a hard boundary to rule out border cases. In this way, it can help us to more adequately apply the complex ecclesiology of the Second Vatican Council. This article draws parallels between the understanding of speciation and categorization and the definition of Church since the council. In doing so, it applies the work of cognitive linguists, including George Lakoff, Zoltan Kovecses, Gilles Fauconnier and Mark Turner, on categorization. We tend to think of categories as containers into which we sort objects according to essential criteria. However, categories are actually built inductively by making associations between objects. This means that natural categories, including species, are more porous than we assume, but nevertheless bear real meaning about the natural world. Taxonomists dispute the border between “zebras” and “wild asses,” but this distinction arises out of genetic and evolutionary reality; it is not merely arbitrary. Genetic descriptions of species have also recently led to the conviction that there are four species of giraffe, not one. This engagement grounds a vantage point from which the Council's complex ecclesiology can be more easily described so as to authentically integrate its noncompetitive vision vis-a-vis other Christians with its sense of the unique place held by the Catholic Church.

  11. Retrospective Cues Based on Object Features Improve Visual Working Memory Performance in Older Adults

    OpenAIRE

    Gilchrist, Amanda L.; Duarte, Audrey; Verhaeghen, Paul

    2015-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were either presented with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an u...

  12. The Improved SVM Multi Objects' Identification for the Uncalibrated Visual Servoing

    Directory of Open Access Journals (Sweden)

    Min Wang

    2009-03-01

    Full Text Available For the assembly of multiple micro objects in micromanipulation, the first task is to identify the micro parts. We present an improved support vector machine algorithm, which employs invariant-moments-based edge extraction to obtain feature attributes and a heuristic attribute-reduction algorithm based on the rough set's discernibility matrix to reduce those attributes, and then uses a support vector machine to identify and classify the targets. The visual servoing is the second task. To avoid the complicated calibration of the camera's intrinsic parameters, we apply an improved Broyden's method to estimate the image Jacobian matrix online, which employs Chebyshev polynomials to construct a cost function approximating the optimal value, yielding fast convergence of the online estimation. Finally, a two-DOF visual controller for micromanipulation based on a fuzzy adaptive PD control law is presented. Experiments on micro-assembly of micro parts under microscopes confirm that the proposed methods are effective and feasible.
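    The online image-Jacobian estimation builds on Broyden's rank-one secant update. The sketch below shows only the standard Broyden step (the Chebyshev-polynomial cost function the authors add for faster convergence is omitted), driving an estimate toward a known toy Jacobian:

```python
import numpy as np

def broyden_update(J, dx, dy):
    # Rank-one secant update: after the step, J satisfies J @ dx == dy,
    # where dx is the robot motion and dy the observed image-feature motion.
    dx = dx.reshape(-1, 1)
    dy = dy.reshape(-1, 1)
    return J + (dy - J @ dx) @ dx.T / float(dx.T @ dx)

J_true = np.array([[2.0, 0.0], [0.0, 3.0]])  # unknown "true" mapping (toy)
J = np.eye(2)                                # uncalibrated initial guess
rng = np.random.default_rng(0)
for _ in range(100):
    dx = rng.standard_normal(2)              # small exploratory motion
    dy = J_true @ dx                         # feature motion seen in the image
    J = broyden_update(J, dx, dy)
```

    Each update projects the estimation error out of the latest motion direction, so with varied motions the estimate converges without any camera calibration.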

  13. The Improved SVM Multi Objects' Identification for the Uncalibrated Visual Servoing

    Directory of Open Access Journals (Sweden)

    Xiangjin Zeng

    2009-03-01

    For the assembly of multiple micro-objects in micromanipulation, the first task is to identify the micro-parts. We present an improved support vector machine algorithm, which employs invariant-moment-based edge extraction to obtain feature attributes, applies a heuristic attribute-reduction algorithm based on the rough set's discernibility matrix, and then uses a support vector machine to identify and classify the targets. Visual servoing is the second task. To avoid the complicated calibration of the camera's intrinsic parameters, we apply an improved Broyden's method to estimate the image Jacobian matrix online, employing a Chebyshev polynomial to construct a cost function that approximates the optimal value and yields fast convergence for the online estimation. Last, a two-DOF visual controller for micromanipulation based on a fuzzy adaptive PD control law is presented. Experiments on the micro-assembly of micro-parts under microscopes confirm that the proposed methods are effective and feasible.

  14. Difference in Subjective Accessibility of On Demand Recall of Visual, Taste, and Olfactory Memories

    OpenAIRE

    Zach, Petr; Zimmelová, Petra; Mrzílková, Jana; Kutová, Martina

    2018-01-01

    We present here a significant difference in evocation capability between sensory memories (visual, taste, and olfactory) across certain categories of the population. As the object for this memory recall we selected French fries, which are simple and generally known. From daily life we may intuitively feel that recall of visual and auditory memories is much better than recall of taste and olfactory ones. Our results in young (age 12–21 years) mostly females and some males show low cap...

  15. Smart-system of distance learning of visually impaired people based on approaches of artificial intelligence

    Science.gov (United States)

    Samigulina, Galina A.; Shayakhmetova, Assem S.

    2016-11-01

    The research objective is the creation of intellectual innovative technology and an information Smart-system of distance learning for visually impaired people. The organization of an accessible environment in which visually impaired people can receive a quality education, and their social adaptation in society, are important and topical issues of modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of people. The scientific novelty of the proposed Smart-system lies in using intelligent and statistical methods for processing multi-dimensional data, while taking into account the psycho-physiological characteristics of how visually impaired people perceive and absorb learning information.

  16. Visual object recognition and tracking

    Science.gov (United States)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing said object by an object-tracking system through processing said two-dimensional data using at least one tracker-identifier belonging to the object-tracking system for providing an output signal containing: a) a type of the object, and/or b) a position or an orientation of the object in three-dimensions, and/or c) an articulation or a shape change of said object in said three dimensions.

  17. Retrospective cues based on object features improve visual working memory performance in older adults.

    Science.gov (United States)

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  18. Object-centered representations support flexible exogenous visual attention across translation and reflection.

    Science.gov (United States)

    Lin, Zhicheng

    2013-11-01

    Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving-frame configuration, we presented two frames in succession, forming either apparent translational motion or a mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue rather than at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Categorization for Faces and Tools-Two Classes of Objects Shaped by Different Experience-Differs in Processing Timing, Brain Areas Involved, and Repetition Effects.

    Science.gov (United States)

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A

    2017-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se , or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140-170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210-220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions

  20. Categorization for Faces and Tools—Two Classes of Objects Shaped by Different Experience—Differs in Processing Timing, Brain Areas Involved, and Repetition Effects

    Science.gov (United States)

    Kozunov, Vladimir; Nikolaeva, Anastasia; Stroganova, Tatiana A.

    2018-01-01

    The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they see whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We have extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was located to the intraparietal sulcus of the left hemisphere. Brain activity common for both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions

  1. Color impact in visual attention deployment considering emotional images

    Science.gov (United States)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may affect the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of the database's pictures in full color and the other half in greyscale. The eye fixations for color and black-and-white images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations are an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grey eye fixations for the passive and positive emotions. The particular aspect associated with this category induces a specific behavior, based rather on high frequencies, where the color components influence the deployment of visual attention.

  2. Value is in the eye of the beholder: early visual cortex codes monetary value of objects during a diverted attention task.

    Science.gov (United States)

    Persichetti, Andrew S; Aguirre, Geoffrey K; Thompson-Schill, Sharon L

    2015-05-01

    A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.
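The adaptation logic of this study lends itself to a representational-similarity sketch. The following toy example is our own construction, not the authors' analysis pipeline: it shows how one can test whether simulated response patterns track value similarity rather than visual-feature similarity, with all names and data synthetic.

```python
import numpy as np

def rdm(features):
    """Pairwise Euclidean representational dissimilarity matrix."""
    d = features[:, None, :] - features[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

def upper(m):
    """Vectorize the upper triangle (excluding the diagonal)."""
    return m[np.triu_indices_from(m, k=1)]

rng = np.random.default_rng(1)
n_objects = 8
values = rng.uniform(-10, 10, n_objects)      # learned monetary values
visual = rng.normal(size=(n_objects, 5))      # unrelated visual features

# Simulated voxel patterns that encode value plus noise.
patterns = values[:, None] * np.ones((n_objects, 20)) \
    + rng.normal(scale=0.5, size=(n_objects, 20))

value_rdm = np.abs(values[:, None] - values[None, :])
r_value = np.corrcoef(upper(rdm(patterns)), upper(value_rdm))[0, 1]
r_visual = np.corrcoef(upper(rdm(patterns)), upper(rdm(visual)))[0, 1]
print(r_value, r_visual)   # value correlation should dominate
```

Because the toy design decorrelates visual and value similarity, as the study did, a high value correlation cannot be explained by shared visual features.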

  3. Memorable objects are more susceptible to forgetting: Evidence for the inhibitory account of retrieval-induced forgetting.

    Science.gov (United States)

    Reppa, I; Williams, K E; Worth, E R; Greville, W J; Saunders, J

    2017-11-01

    Retrieval of target information can cause forgetting of related, but non-retrieved, information - retrieval-induced forgetting (RIF). The aim of the current studies was to examine a key prediction of the inhibitory account of RIF - interference dependence - whereby 'strong' non-retrieved items are more likely to interfere during retrieval and are therefore more susceptible to RIF. Using visual objects allowed us to examine one index of item strength - object typicality, that is, how typical of its category an object is. Experiment 1 provided proof of concept for our variant of the recognition practice paradigm. Experiment 2 tested the prediction of the inhibitory account that the magnitude of RIF for natural visual objects would depend on item strength. Non-typical objects were more memorable overall than typical objects. We found that object memorability (as determined by typicality) influenced RIF, with significant forgetting occurring for the memorable (non-typical), but not the non-memorable (typical), objects. The current findings strongly support an inhibitory account of retrieval-induced forgetting. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Object-based warping: an illusory distortion of space within objects.

    Science.gov (United States)

    Vickery, Timothy J; Chun, Marvin M

    2010-12-01

    Visual objects are high-level primitives that are fundamental to numerous perceptual functions, such as guidance of attention. We report that objects warp visual perception of space in such a way that spatial distances within objects appear to be larger than spatial distances in ground regions. When two dots were placed inside a rectangular object, they appeared farther apart from one another than two dots with identical spacing outside of the object. To investigate whether this effect was object based, we measured the distortion while manipulating the structure surrounding the dots. Object displays were constructed with a single object, multiple objects, a partially occluded object, and an illusory object. Nonobject displays were constructed to be comparable to object displays in low-level visual attributes. In all cases, the object displays resulted in a more powerful distortion of spatial perception than comparable non-object-based displays. These results suggest that perception of space within objects is warped.

  5. Optimization of Visual Information Presentation for Visual Prosthesis

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2018-01-01

    Visual prostheses that apply electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the evoked visual perception, a huge loss of information occurs when presenting daily scenes. The ability to recognize objects in real-life scenarios is severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient-object detection technique. The two processing strategies enable prosthetic implants to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal, and foreground edge detection with background reduction, have positive impacts on the task of object recognition in simulated prosthetic vision. By using the edge detection and zooming techniques, the two processing strategies significantly improve object recognition accuracy. We conclude that a visual prosthesis using our proposed strategies can assist the blind in improving their ability to recognize objects. The results will provide effective solutions for the further development of visual prostheses.
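The two strategies can be sketched with plain array operations. The snippet below is a hedged illustration, not the authors' implementation: a Sobel gradient map stands in for foreground edge detection, a nearest-neighbour bounding-box zoom stands in for foreground zooming, and a simple intensity threshold stands in for a real salient-object detector.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel filters (2D float array)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def zoom_foreground(img, mask):
    """Nearest-neighbour zoom of the mask's bounding box to the full frame."""
    ys, xs = np.where(mask)
    box = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = img.shape
    rows = np.arange(h) * box.shape[0] // h
    cols = np.arange(w) * box.shape[1] // w
    return box[rows][:, cols]

# Toy 32x32 frame: bright 8x8 "object" on a dim noisy background.
rng = np.random.default_rng(0)
frame = rng.uniform(0, 0.2, (32, 32))
frame[12:20, 12:20] = 1.0
mask = frame > 0.5                    # stand-in for salient-object detection

zoomed = zoom_foreground(frame, mask)   # strategy 1: zoom, clutter removed
edges = sobel_edges(frame) * mask       # strategy 2: foreground edges only
print(zoomed.shape, edges.max() > 0)
```

Both outputs devote the prosthesis' limited phosphene budget to the object of interest rather than to background clutter, which is the point of the proposed strategies.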

  6. Optimization of Visual Information Presentation for Visual Prosthesis

    Science.gov (United States)

    Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769

  7. What the voice reveals : Within- and between-category stereotyping on the basis of voice

    NARCIS (Netherlands)

    Ko, S. J.; Judd, C. M.; Blair, I. V.

    The authors report research that attempts to shift the traditional focus of visual cues to auditory cues as a basis for stereotyping. Moreover, their approach examines whether gender-signaling vocal cues lead not only to between-category but also to within-category gender stereotyping. Study 1

  8. Deep Convolutional Neural Networks Outperform Feature-Based But Not Categorical Models in Explaining Object Similarity Judgments

    Science.gov (United States)

    Jozwik, Kamila M.; Kriegeskorte, Nikolaus; Storrs, Katherine R.; Mur, Marieke

    2017-01-01

    Recent advances in Deep convolutional Neural Networks (DNNs) have enabled unprecedentedly accurate computational models of brain representations, and present an exciting opportunity to model diverse cognitive functions. State-of-the-art DNNs achieve human-level performance on object categorisation, but it is unclear how well they capture human behavior on complex cognitive tasks. Recent reports suggest that DNNs can explain significant variance in one such task, judging object similarity. Here, we extend these findings by replicating them for a rich set of object images, comparing performance across layers within two DNNs of different depths, and examining how the DNNs’ performance compares to that of non-computational “conceptual” models. Human observers performed similarity judgments for a set of 92 images of real-world objects. Representations of the same images were obtained in each of the layers of two DNNs of different depths (8-layer AlexNet and 16-layer VGG-16). To create conceptual models, other human observers generated visual-feature labels (e.g., “eye”) and category labels (e.g., “animal”) for the same image set. Feature labels were divided into parts, colors, textures and contours, while category labels were divided into subordinate, basic, and superordinate categories. We fitted models derived from the features, categories, and from each layer of each DNN to the similarity judgments, using representational similarity analysis to evaluate model performance. In both DNNs, similarity within the last layer explains most of the explainable variance in human similarity judgments. The last layer outperforms almost all feature-based models. Late and mid-level layers outperform some but not all feature-based models. Importantly, categorical models predict similarity judgments significantly better than any DNN layer. Our results provide further evidence for commonalities between DNNs and brain representations. Models derived from visual features
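The model-evaluation step, correlating each candidate model's RDM with the judged-similarity RDM, can be sketched as follows. This is a simplified toy version with synthetic dissimilarities (the study used cross-validated RSA on real judgments of 92 images); the variable names and numbers are ours.

```python
import numpy as np

def ranks(x):
    """Integer ranks of a 1D array (no tie handling; continuous data)."""
    order = np.argsort(x)
    r = np.empty_like(order, dtype=float)
    r[order] = np.arange(len(x))
    return r

def spearman(a, b):
    """Spearman rank correlation via Pearson correlation of ranks."""
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

rng = np.random.default_rng(0)
n_pairs = 92 * 91 // 2            # upper triangle of a 92-image RDM

# Toy "human" dissimilarities driven mostly by a categorical structure,
# with a weaker visual-feature contribution plus noise.
category = rng.normal(size=n_pairs)
feature = rng.normal(size=n_pairs)
human = category + 0.3 * feature + 0.3 * rng.normal(size=n_pairs)

r_category = spearman(human, category)
r_feature = spearman(human, feature)
print(r_category, r_feature)      # categorical model should fit better
```

Comparing such correlations across candidate models (feature sets, category labels, DNN layers) is the core of the representational similarity analysis the abstract describes.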

  9. Recognition-induced forgetting of faces in visual long-term memory.

    Science.gov (United States)

    Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M

    2017-10-01

    Despite more than a century of evidence that long-term memory for pictures and words are different, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise. As a result, faces may be immune to recognition-induced forgetting. However, despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting as well as represent objects of expertise and consequences for eyewitness testimony and the justice system.

  10. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  11. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (the virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities combined with the visualization capabilities mentioned above, such as animation effects on selected areas of the terrain texture (e.g. sea waves) and motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models at user-specified locations, paths, and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.

  12. How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex?

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey

    2011-12-01

    All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is

  13. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide an available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. The paper gives a basic structure for creating a dynamical simulation model in C++, built on the Win32 platform with an interactive multi-window interface and using the lightweight Visual C++ Express as a free integrated development environment. The resulting simulation framework can be a more acceptable alternative to solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention domain-specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems justified in the case of complex, research-oriented, object-oriented dynamical models with nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of the simulation of moving charged particles in an electrostatic field. The simulation model possesses the necessary visualization and control features, such as interactive input; real-time graphical and text output; and start, stop, and rate control.
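The case study can be miniaturized as follows (in Python here, rather than the paper's C++ framework) as a hedged sketch of the underlying dynamics: a charged particle advanced by semi-implicit Euler steps in the Coulomb field of a fixed point charge. The constants and step sizes are arbitrary illustration values, not the paper's.

```python
import numpy as np

K = 1.0    # Coulomb constant (arbitrary units, assumed for illustration)
Q = 1.0    # fixed source charge at the origin

def step(pos, vel, q, m, dt):
    """One semi-implicit Euler step under the Coulomb force of the source."""
    r = np.linalg.norm(pos)
    force = K * Q * q * pos / r**3   # inverse-square law, radial direction
    vel = vel + (force / m) * dt     # update velocity first (symplectic)
    pos = pos + vel * dt             # then position with the new velocity
    return pos, vel

pos = np.array([1.0, 0.0])
vel = np.array([0.0, 0.8])
energies = []
for _ in range(2000):
    pos, vel = step(pos, vel, q=-1.0, m=1.0, dt=1e-3)
    # total energy = kinetic + Coulomb potential; nearly conserved here
    energies.append(0.5 * vel @ vel + K * Q * (-1.0) / np.linalg.norm(pos))

print(pos, energies[-1])   # bound orbit: total energy stays negative
```

The semi-implicit update (velocity first, then position with the new velocity) keeps the energy error bounded over long runs, which matters for exactly the kind of real-time interactive simulation the paper targets.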

  14. Discovery learning model with geogebra assisted for improvement mathematical visual thinking ability

    Science.gov (United States)

    Juandi, D.; Priatna, N.

    2018-05-01

    The main goal of this study is to improve the mathematical visual thinking ability of high school students through implementation of the Discovery Learning Model with GeoGebra assistance. The study used a quasi-experimental method with a non-random pretest-posttest control design. The sample consisted of 62 grade XI senior high school students in one school in the Bandung district. Data were collected through documentation, observation, written tests, interviews, daily journals, and student worksheets. The results of this study are: 1) The improvement in mathematical visual thinking ability of students who learned with the Discovery Learning Model with GeoGebra assistance is significantly higher than that of students who received conventional learning; 2) There is a difference in the improvement of students' mathematical visual thinking ability between groups based on prior mathematical knowledge (high, medium, and low); 3) The improvement in mathematical visual thinking ability of the high group is significantly higher than in the medium and low groups; 4) The quality of improvement for students with high and low prior knowledge is in the moderate category, while improvement in the high category was achieved by students with medium prior knowledge.

  15. The Role of Age and Executive Function in Auditory Category Learning

    Science.gov (United States)

    Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath

    2015-01-01

    Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning. PMID:26491987
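    The task-optimal "conjunctive rule-based strategy" identified by the computational modeling can be pictured as a decision rule that requires both stimulus dimensions (here, the spectral and temporal modulation of the ripple sounds) to exceed a criterion. This is an illustrative sketch, not the authors' model; the dimension names and criterion values are assumptions:

    ```python
    def conjunctive_rule(stim, c_spec=0.5, c_temp=0.5):
        """Respond 'A' only if BOTH dimensions exceed their criteria; else 'B'."""
        spec_mod, temp_mod = stim
        return "A" if (spec_mod > c_spec and temp_mod > c_temp) else "B"

    def accuracy(stimuli, labels, c_spec=0.5, c_temp=0.5):
        """Proportion of trials the rule classifies correctly."""
        hits = sum(conjunctive_rule(s, c_spec, c_temp) == y
                   for s, y in zip(stimuli, labels))
        return hits / len(stimuli)
    ```

    A unidimensional rule (attending to one dimension only) misclassifies stimuli that are high on one dimension but low on the other, which is why executive flexibility, i.e. testing and switching hypotheses, matters for discovering the conjunction.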

  16. Category Specific Spatial Dissociations of Parallel Processes Underlying Visual Naming

    OpenAIRE

    Conner, Christopher R.; Chen, Gang; Pieters, Thomas A.; Tandon, Nitin

    2013-01-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during t...

  17. Can you see what you feel? Color and folding properties affect visual-tactile material discrimination of fabrics.

    Science.gov (United States)

    Xiao, Bei; Bi, Wenyan; Jia, Xiaodan; Wei, Hanhan; Adelson, Edward H

    2016-01-01

    Humans can often estimate tactile properties of objects from vision alone. For example, during online shopping, we can often infer material properties of clothing from images and judge how the material would feel against our skin. What visual information is important for tactile perception? Previous studies in material perception have focused on measuring surface appearance, such as gloss and roughness, and using verbal reports of material attributes and categories. However, in real life, predicting tactile properties of an object might not require accurate verbal descriptions of its surface attributes or categories. In this paper, we use tactile perception as ground truth to measure visual material perception. Using fabrics as our stimuli, we measure how observers match what they see (photographs of fabric samples) with what they feel (physical fabric samples). The data shows that color has a significant main effect in that removing color significantly reduces accuracy, especially when the images contain 3-D folds. We also find that images of draped fabrics, which revealed 3-D shape information, achieved better matching accuracy than images with flattened fabrics. The data shows a strong interaction between color and folding conditions on matching accuracy, suggesting that, in 3-D folding conditions, the visual system takes advantage of chromatic gradients to infer tactile properties but not in flattened conditions. Together, using a visual-tactile matching task, we show that humans use folding and color information in matching the visual and tactile properties of fabrics.

  18. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2006-01-01

    in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...

  19. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture

    OpenAIRE

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-01-01

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an "irrelevant-change distracting effect", where changes of irrelevant features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, lea...

  20. The causal role of category-specific neuronal representations in the left ventral premotor cortex (PMv) in semantic processing.

    Science.gov (United States)

    Cattaneo, Zaira; Devlin, Joseph T; Salvini, Francesca; Vecchi, Tomaso; Silvanto, Juha

    2010-02-01

    The left ventral premotor cortex (PMv) is preferentially activated by exemplars of tools, suggestive of category specificity in this region. Here we used state-dependent transcranial magnetic stimulation (TMS) to investigate the causal role of such category-specific neuronal representations in the encoding of tool words. Priming to a category name (either "Tool" or "Animal") was used with the objective of modulating the initial activation state of this region prior to application of TMS and the presentation of the target stimulus. When the target word was an exemplar of the "Tool" category, the effects of TMS applied over PMv (but not PMd) interacted with priming history by facilitating reaction times on incongruent trials while not affecting congruent trials. This congruency/TMS interaction implies that the "Tool" and "Animal" primes had a differential effect on the initial activation state of the left PMv and implies that this region is one neural locus of category-specific behavioral priming for the "Tool" category. TMS applied over PMv had no behavioral effect when the target stimulus was an exemplar of the "Animal" category, regardless of whether the target word was congruent or incongruent with the prime. That TMS applied over the left PMv interacted with a priming effect that extended from the category name ("Tool") to exemplars of that category suggests that this region contains neuronal representation associated with a specific semantic category. Our results also demonstrate that the state-dependent effects obtained in the combination of visual priming and TMS are useful in the study of higher-level cognitive functions. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  1. Domain-specificity of creativity: a study on the relationship between Visual Creativity and Visual Mental Imagery.

    Directory of Open Access Journals (Sweden)

    Massimiliano Palmiero

    2015-12-01

    Full Text Available Creativity refers to the capability to produce original and valuable ideas and solutions. It involves different processes. In this study the extent to which visual creativity is related to cognitive processes underlying visual mental imagery was investigated. Fifty college students (25 women) carried out: the Creative Synthesis Task, which measures the ability to produce creative objects belonging to a given category (originality, synthesis, and transformation scores of pre-inventive forms, and originality and practicality scores of inventions were computed); an adaptation of Clark's Drawing Ability Test, which measures the ability to produce actual creative artworks (graphic ability, aesthetic, and creativity scores of drawings were assessed); and three mental imagery tasks that investigate the three main cognitive processes involved in visual mental imagery: generation, inspection, and transformation. Vividness of imagery and verbalizer-visualizer cognitive style were also measured using questionnaires. Correlation analysis revealed that all measures of the creativity tasks positively correlated with the image-transformation imagery ability; practicality of inventions negatively correlated with vividness of imagery; originality of inventions positively correlated with the visualization cognitive style. However, regression analysis confirmed the predictive role of the transformation imagery ability only for the originality score of inventions and for the graphic ability and aesthetic scores of artistic drawings; on the other hand, the visualization cognitive style predicted the originality of inventions, whereas the vividness of imagery predicted the practicality of inventions. These results are consistent with the notion that visual creativity is domain- and task-specific.

  2. An interplay of fusiform gyrus and hippocampus enables prototype- and exemplar-based category learning.

    Science.gov (United States)

    Lech, Robert K; Güntürkün, Onur; Suchan, Boris

    2016-09-15

    The aim of the present study was to examine the contributions of different brain structures to prototype- and exemplar-based category learning using functional magnetic resonance imaging (fMRI). Twenty-eight subjects performed a categorization task in which they had to assign prototypes and exceptions to two different families. This test procedure usually produces different learning curves for prototype and exception stimuli. Our behavioral data replicated these previous findings by showing an initially superior performance for prototypes and typical stimuli and a switch from a prototype-based to an exemplar-based categorization for exceptions in the later learning phases. Since performance varied, we divided participants into learners and non-learners. Analysis of the functional imaging data revealed that the interaction of group (learners vs. non-learners) and block (Block 5 vs. Block 1) yielded an activation of the left fusiform gyrus for the processing of prototypes, and an activation of the right hippocampus for exceptions after learning the categories. Thus, successful prototype- and exemplar-based category learning is associated with activations of complementary neural substrates that constitute object-based processes of the ventral visual stream and their interaction with unique-cue representations, possibly based on sparse coding within the hippocampus. Copyright © 2016 Elsevier B.V. All rights reserved.
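    The behavioral contrast in this study, with prototypes handled by abstraction and exceptions by memorized instances, mirrors the standard computational distinction between prototype and exemplar classifiers. A minimal sketch follows (the Euclidean distance, the similarity kernel, and the two-family structure are assumptions of this illustration, not the authors' exact task):

    ```python
    import math

    def dist(a, b):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def prototype_classify(stim, prototypes):
        """Assign to the family whose prototype (mean member) is nearest."""
        return min(prototypes, key=lambda fam: dist(stim, prototypes[fam]))

    def exemplar_classify(stim, exemplars, c=2.0):
        """GCM-style rule: assign to the family with the greatest summed
        similarity exp(-c * d) over its stored exemplars."""
        score = {fam: sum(math.exp(-c * dist(stim, e)) for e in members)
                 for fam, members in exemplars.items()}
        return max(score, key=score.get)
    ```

    An exception that sits near family B's prototype but was learned as family A is misclassified by the prototype rule, while the exemplar rule recovers it once the instance itself is stored, matching the late-learning switch to exemplar-based categorization for exceptions.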

  3. Category Specific Knowledge Modulate Capacity Limitations of Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Watanabe, Katsumi; Sørensen, Thomas Alrik

    2016-01-01

    We explore whether expertise can modulate the capacity of visual short-term memory, as some seem to argue that training affects the capacity of short-term memory [13] while others are not able to find this modulation [12]. We extend a previous study [3] demonstrating expertise effects by investigating … and expert observers (Japanese university students). For both the picture and the letter condition we find no performance difference in memory capacity; however, in the critical hiragana condition we demonstrate a systematic difference relating to expertise differences between the groups. These results are in line with the theoretical interpretation that visual short-term memory reflects the sum of the reverberating feedback loops to representations in long-term memory.

  4. Age effects on visual-perceptual processing and confrontation naming.

    Science.gov (United States)

    Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon

    2010-03-01

    The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency manipulations revealed that, in general, young subjects are faster and more accurate to name than older subjects. However, this age-related difference is dependent on the spatial frequency of the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older adults are more accurate to name than young adults. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older adults exhibit a stronger nonliving-object advantage than young adults for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications for the aging and naming (and recognition) literature are discussed.
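    The two stimulus manipulations in this study, spatial-frequency filtering and contrast scaling, are straightforward image operations. A sketch of how such stimuli might be produced (NumPy; a radial FFT band-pass in cycles/image rather than the study's cycles/degree, with all cut-offs assumed for illustration):

    ```python
    import numpy as np

    def bandpass(img, lo, hi):
        """Keep only spatial frequencies with radius in [lo, hi) cycles/image."""
        F = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.mgrid[-(h // 2):(h - h // 2), -(w // 2):(w - w // 2)]
        radius = np.hypot(yy, xx)
        mask = (radius >= lo) & (radius < hi)
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    def set_contrast(img, c):
        """Scale contrast about the mean luminance; c = 0.0125 approximates
        the 1.25% level mentioned in the abstract."""
        return img.mean() + c * (img - img.mean())
    ```

    Both operations leave mean luminance unchanged (the band-pass keeps the DC term when lo is 0), which is the usual requirement when only one visual parameter is to vary per condition.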

  5. A Prospective Curriculum Using Visual Literacy.

    Science.gov (United States)

    Hortin, John A.

    This report describes the uses of visual literacy programs in the schools and outlines four categories for incorporating training in visual thinking into school curriculums as part of the back to basics movement in education. The report recommends that curriculum writers include materials pertaining to: (1) reading visual language and…

  6. An electrophysiological study of the object-based correspondence effect: is the effect triggered by an intended grasping action?

    Science.gov (United States)

    Lien, Mei-Ching; Jardin, Elliott; Proctor, Robert W

    2013-11-01

    We examined Goslin, Dixon, Fischer, Cangelosi, and Ellis's (Psychological Science 23:152-157, 2012) claim that the object-based correspondence effect (i.e., faster keypress responses when the orientation of an object's graspable part corresponds with the response location than when it does not) is the result of object-based attention (vision-action binding). In Experiment 1, participants determined the category of a centrally located object (kitchen utensil vs. tool), as in Goslin et al.'s study. The handle orientation (left vs. right) did or did not correspond with the response location (left vs. right). We found no correspondence effect on the response times (RTs) for either category. The effect was also not evident in the P1 and N1 components of the event-related potentials, which are thought to reflect the allocation of early visual attention. This finding was replicated in Experiment 2 for centrally located objects, even when the object was presented 45 times (33 more times than in Exp. 1). Critically, the correspondence effects on RTs, P1s, and N1s emerged only when the object was presented peripherally, so that the object handle was clearly located to the left or right of fixation. Experiment 3 provided further evidence that the effect was observed only for the base-centered objects, in which the handle was clearly positioned to the left or right of center. These findings contradict those of Goslin et al. and provide no evidence that an intended grasping action modulates visual attention. Instead, the findings support the spatial-coding account of the object-based correspondence effect.

  7. Feedforward Object-Vision Models Only Tolerate Small Image Variations Compared to Human

    Directory of Open Access Journals (Sweden)

    Masoud Ghodrati

    2014-07-01

    Full Text Available Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modelling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well when images with more complex variations of the same object are applied to them. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only for low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modelling. We show that this approach is not of significant help in solving the computational crux of object recognition (that is, invariant object recognition) when the identity-preserving image variations become more complex.

  8. Can Semi-Supervised Learning Explain Incorrect Beliefs about Categories?

    Science.gov (United States)

    Kalish, Charles W.; Rogers, Timothy T.; Lang, Jonathan; Zhu, Xiaojin

    2011-01-01

    Three experiments with 88 college-aged participants explored how unlabeled experiences--learning episodes in which people encounter objects without information about their category membership--influence beliefs about category structure. Participants performed a simple one-dimensional categorization task in a brief supervised learning phase, then…

  9. An introduction to the language of category theory

    CERN Document Server

    Roman, Steven

    2017-01-01

    This textbook provides an introduction to elementary category theory, with the aim of making what can be a confusing and sometimes overwhelming subject more accessible. In writing about this challenging subject, the author has brought to bear all of the experience he has gained in authoring over 30 books in university-level mathematics. The goal of this book is to present the five major ideas of category theory: categories, functors, natural transformations, universality, and adjoints in as friendly and relaxed a manner as possible while at the same time not sacrificing rigor. These topics are developed in a straightforward, step-by-step manner and are accompanied by numerous examples and exercises, most of which are drawn from abstract algebra. The first chapter of the book introduces the definitions of category and functor and discusses diagrams, duality, initial and terminal objects, special types of morphisms, and some special types of categories, particularly comma categories and hom-set categories. Chap...
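    The five ideas the book names (categories, functors, natural transformations, universality, adjoints) rest on two laws: composition is associative and identities are neutral. A small illustration, not taken from the book, using Python functions as morphisms and the list functor's `fmap`:

    ```python
    def compose(g, f):
        """Morphism composition: (g . f)(x) = g(f(x))."""
        return lambda x: g(f(x))

    def identity(x):
        """Identity morphism."""
        return x

    def fmap(f):
        """The list functor's action on a morphism f: lifts it to lists."""
        return lambda xs: [f(x) for x in xs]
    ```

    The functor laws, fmap(identity) acting as identity and fmap(compose(g, f)) agreeing with compose(fmap(g), fmap(f)), can be spot-checked pointwise on sample inputs.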

  10. Do We Know others' Visual Liking?

    Directory of Open Access Journals (Sweden)

    Ryosuke Niimi

    2014-12-01

    Full Text Available Although personal liking varies considerably, there is a general trend of liking shared by many people (public favour). Visual liking in particular may be largely shared by people, as it is strongly influenced by relatively low-level perceptual factors. If so, it is likely that people have correct knowledge of public favour. We examined the human ability to predict public favour. In three experiments, participants rated the subjective likability of various visual objects (e.g. car, chair) and predicted the mean liking rating by other participants. Irrespective of the object's category, the correlation between individual prediction and the actual mean liking of others (prediction validity) was not higher than the correlation between the predictor's own liking and the mean liking of others. Further, individual prediction correlated more with the predictor's own liking than with others' liking. Namely, predictions were biased towards the predictor's subjective liking (a variation of the false consensus effect). The results suggest that humans do not have (or cannot access) correct knowledge of public favour. It was suggested that increasing the number of predictors is the appropriate strategy for making a good prediction of public favour.
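    The key quantity in this study, "prediction validity", is a correlation between one person's predictions and the group's actual mean ratings. A sketch of how it could be computed (a plain Pearson correlation; the function and variable names are assumptions of this illustration):

    ```python
    def pearson_r(xs, ys):
        """Pearson correlation coefficient between two equal-length sequences."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    def prediction_validity(predictions, others_mean_liking):
        """Correlation between one predictor's guesses and the group's actual means."""
        return pearson_r(predictions, others_mean_liking)
    ```

    In these terms, the false-consensus pattern reported above corresponds to pearson_r(predictions, own_liking) exceeding prediction_validity.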

  11. Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene

    Directory of Open Access Journals (Sweden)

    Stephen Grossberg

    2014-06-01

    Full Text Available The Where's Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex, amygdala, basal ganglia, and superior colliculus.

  12. Creativity, visualization abilities, and visual cognitive style.

    Science.gov (United States)

    Kozhevnikov, Maria; Kozhevnikov, Michael; Yu, Chen Jiao; Blazhenkova, Olesya

    2013-06-01

    Despite the recent evidence for a multi-component nature of both visual imagery and creativity, there have been no systematic studies on how the different dimensions of creativity and imagery might interrelate. The main goal of this study was to investigate the relationship between different dimensions of creativity (artistic and scientific) and dimensions of visualization abilities and styles (object and spatial). In addition, we compared the contributions of object and spatial visualization abilities versus corresponding styles to scientific and artistic dimensions of creativity. Twenty-four undergraduate students (12 females) were recruited for the first study, and 75 additional participants (36 females) were recruited for an additional experiment. Participants were administered a number of object and spatial visualization abilities and style assessments as well as a number of artistic and scientific creativity tests. The results show that object visualization relates to artistic creativity and spatial visualization relates to scientific creativity, while both are distinct from verbal creativity. Furthermore, our findings demonstrate that style predicts corresponding dimension of creativity even after removing shared variance between style and visualization ability. The results suggest that styles might be a more ecologically valid construct in predicting real-life creative behaviour, such as performance in different professional domains. © 2013 The British Psychological Society.

  13. Visualization of object-oriented (Java) programs

    NARCIS (Netherlands)

    Huizing, C.; Kuiper, R.; Luijten, C.A.A.M.; Vandalon, V.; Helfert, M.; Martins, M.J.; Cordeiro, J.

    2012-01-01

    We provide an explicit, consistent execution model for OO programs, specifically Java, together with a tool that visualizes the model. This equips the student with a model to think and communicate about OO programs. Especially for an e-learning situation this is significant. Firstly, such a model

  14. Visual Attention to Competing Social and Object Images by Preschool Children with Autism Spectrum Disorder

    Science.gov (United States)

    Sasson, Noah J.; Touchstone, Emily W.

    2014-01-01

    Eye tracking studies of young children with autism spectrum disorder (ASD) report a reduction in social attention and an increase in visual attention to non-social stimuli, including objects related to circumscribed interests (CI) (e.g., trains). In the current study, fifteen preschoolers with ASD and 15 typically developing controls matched on…

  15. Words can slow down category learning.

    Science.gov (United States)

    Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana

    2011-08-01

    Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading to selective attention to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

  16. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Science.gov (United States)

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  18. Cohomological descent theory for a morphism of stacks and for equivariant derived categories

    International Nuclear Information System (INIS)

    Elagin, Alexei D

    2011-01-01

    In the paper, we find necessary and sufficient conditions under which, if X→S is a morphism of algebraic varieties (or, in a more general case, of stacks), the derived category of S can be recovered by using the tools of descent theory from the derived category of X. We show that for an action of a linearly reductive algebraic group G on a scheme X this result implies the equivalence of the derived category of G-equivariant sheaves on X and the category of objects in the derived category of sheaves on X with a given action of G on each object. Bibliography: 18 titles.

  19. Category Theory as a Formal Mathematical Foundation for Model-Based Systems Engineering

    KAUST Repository

    Mabrok, Mohamed; Ryan, Michael J.

    2017-01-01

In this paper, we introduce Category Theory as a formal foundation for model-based systems engineering. A generalised view of the system based on category theory is presented, where any system can be considered as a category. The objects…

  20. Effect of Colour of Object on Simple Visual Reaction Time in Normal Subjects

    Directory of Open Access Journals (Sweden)

    Sunita B. Kalyanshetti

    2014-01-01

Full Text Available The measure of simple reaction time has been used to evaluate the processing speed of the CNS and the coordination between the sensory and motor systems. As reaction time is influenced by different factors, the impact of the colour of objects in modulating reaction time was investigated in this study. 200 healthy volunteers (100 female and 100 male) in the age group of 18-25 years were included as subjects. The subjects were presented with two visual stimuli, red and green light, using an electronic response analyzer. A paired t-test comparing visual reaction times for red and green colours gave p < 0.05 in males and p < 0.001 in females. It was observed that the response latency for red was shorter than that for green, which can be explained on the basis of the trichromatic theory.
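The paired t statistic behind such a within-subject colour comparison can be sketched in a few lines; the reaction-time values below are hypothetical illustrations, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t statistic for two matched samples (e.g. per-subject reaction times)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical reaction times in ms (NOT the study's data): red vs. green
red = [201, 195, 210, 188, 205, 199]
green = [215, 204, 221, 197, 214, 210]
t = paired_t(red, green)  # negative here: red responses are faster
```

The sign of `t` indicates direction (red faster when negative); the p-value would then come from the t distribution with n-1 degrees of freedom.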

  1. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  2. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  3. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    Science.gov (United States)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  4. Visual Representations of Microcosm in Textbooks of Chemistry: Constructing a Systemic Network for Their Main Conceptual Framework

    Science.gov (United States)

    Papageorgiou, George; Amariotakis, Vasilios; Spiliotopoulou, Vasiliki

    2017-01-01

    The main objective of this work is to analyse the visual representations (VRs) of the microcosm depicted in nine Greek secondary chemistry school textbooks of the last three decades in order to construct a systemic network for their main conceptual framework and to evaluate the contribution of each one of the resulting categories to the network.…

  5. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    Science.gov (United States)

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  6. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills with Executive Function and Social Behavior

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M.; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-01-01

    Purpose: The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom during the preschool year. Method: Ninety-two children aged 3 to 5 years old (M[subscript age] = 4.31 years) were…

  7. Object-Spatial Visualization and Verbal Cognitive Styles, and Their Relation to Cognitive Abilities and Mathematical Performance

    Science.gov (United States)

    Haciomeroglu, Erhan Selcuk

    2016-01-01

    The present study investigated the object-spatial visualization and verbal cognitive styles among high school students and related differences in spatial ability, verbal-logical reasoning ability, and mathematical performance of those students. Data were collected from 348 students enrolled in Advanced Placement calculus courses at six high…

  8. Prevalence of oral health status in visually impaired children

    Directory of Open Access Journals (Sweden)

    KVKK Reddy

    2011-01-01

Full Text Available Introduction: The epidemiological investigation was carried out among 228 children selected from two schools of similar socioeconomic strata in and around Chennai city. Materials and Methods: The study population consisted of 128 visually impaired and 100 normal school-going children in the age group of 6-15 years. The examination procedure and criteria were those recommended by the W.H.O. in 1997. Results: The mean DMFT/deft was 1.1 and 0.17 in visually impaired children and 0.87 and 0.47 in normal children, respectively. Oral hygiene levels in both groups were: mean value in the good category 0.19 and 0.67, in the fair category 0.22 and 0.1, and in the poor category 0.40 and 0.23 in visually impaired and normal children, respectively. The proportions of children who had experienced trauma were 0.29 and 0.13 in visually impaired and normal children, respectively. Conclusion: The conclusions drawn from this study were that there was a greater prevalence of dental caries, poorer oral hygiene, and a higher incidence of trauma in visually impaired children.

  9. Social negative bootstrapping for visual categorization

    NARCIS (Netherlands)

    Li, X.; Snoek, C.G.M.; Worring, M.; Smeulders, A.W.M.

    2011-01-01

To learn classifiers for many visual categories, obtaining labeled training examples in an efficient way is crucial. Since a classifier tends to misclassify negative examples which are visually similar to positive examples, inclusion of such informative negatives should be stressed in the learning…

  10. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of the brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, whereas the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and the between-class discriminability of the brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
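A within-class reproducibility index of the kind described can be sketched as the mean pairwise Pearson correlation among same-category patterns; the vectors below are toy illustrations, not fMRI data, and the exact formula in the paper may differ.

```python
import math

def pearson(a, b):
    """Pearson correlation between two activation patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def reproducibility_index(patterns):
    """Mean pairwise correlation among patterns of the same semantic category."""
    pairs = [(i, j) for i in range(len(patterns))
             for j in range(i + 1, len(patterns))]
    return sum(pearson(patterns[i], patterns[j]) for i, j in pairs) / len(pairs)

# Three illustrative same-category "patterns" (toy vectors, not fMRI data)
patterns = [[1.0, 2.0, 3.0, 4.0], [1.1, 2.2, 2.9, 4.2], [0.9, 1.8, 3.1, 3.9]]
ri = reproducibility_index(patterns)
```

A higher index means the category elicits more consistent patterns across presentations, which is what the congruent audiovisual condition enhanced.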

  11. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
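The region covariance idea can be sketched as follows: each pixel in a region contributes a feature vector (e.g. intensity and gradient values), and the region is summarized by the covariance matrix of those vectors. The feature values here are made up for illustration; the paper's adaptive variant additionally selects which features enter the pool.

```python
def covariance_descriptor(features):
    """Region covariance descriptor: the d x d sample covariance of the
    per-pixel feature vectors extracted from an image region."""
    n = len(features)
    d = len(features[0])
    mean = [sum(f[k] for f in features) / n for k in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in features:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j])
    return [[c / (n - 1) for c in row] for row in cov]

# Made-up per-pixel features (e.g. intensity, gradient magnitude)
region = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
C = covariance_descriptor(region)
```

The resulting matrix is symmetric and compact (size depends only on the number of features, not the region size), which is what makes it attractive for tracking.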

  12. Visual agnosia for line drawings and silhouettes without apparent impairment of real-object recognition: a case report.

    Science.gov (United States)

    Hiraoka, Kotaro; Suzuki, Kyoko; Hirayama, Kazumi; Mori, Etsuro

    2009-01-01

    We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  13. Visual Agnosia for Line Drawings and Silhouettes without Apparent Impairment of Real-Object Recognition: A Case Report

    Directory of Open Access Journals (Sweden)

    Kotaro Hiraoka

    2009-01-01

    Full Text Available We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  14. Refining Visually Detected Object poses

    DEFF Research Database (Denmark)

    Holm, Preben; Petersen, Henrik Gordon

    2010-01-01

    to the particular object and in order to handle the demand for flexibility, there is an increasing demand for avoiding such dedicated mechanical alignment systems. Rather, it would be desirable to automatically locate and grasp randomly placed objects from tables, conveyor belts or even bins with a high accuracy...

  15. Brain dynamics of upstream perceptual processes leading to visual object recognition: a high density ERP topographic mapping study.

    Science.gov (United States)

    Schettino, Antonio; Loeys, Tom; Delplanque, Sylvain; Pourtois, Gilles

    2011-04-01

    Recent studies suggest that visual object recognition is a proactive process through which perceptual evidence accumulates over time before a decision can be made about the object. However, the exact electrophysiological correlates and time-course of this complex process remain unclear. In addition, the potential influence of emotion on this process has not been investigated yet. We recorded high density EEG in healthy adult participants performing a novel perceptual recognition task. For each trial, an initial blurred visual scene was first shown, before the actual content of the stimulus was gradually revealed by progressively adding diagnostic high spatial frequency information. Participants were asked to stop this stimulus sequence as soon as they could correctly perform an animacy judgment task. Behavioral results showed that participants reliably gathered perceptual evidence before recognition. Furthermore, prolonged exploration times were observed for pleasant, relative to either neutral or unpleasant scenes. ERP results showed distinct effects starting at 280 ms post-stimulus onset in distant brain regions during stimulus processing, mainly characterized by: (i) a monotonic accumulation of evidence, involving regions of the posterior cingulate cortex/parahippocampal gyrus, and (ii) true categorical recognition effects in medial frontal regions, including the dorsal anterior cingulate cortex. These findings provide evidence for the early involvement, following stimulus onset, of non-overlapping brain networks during proactive processes eventually leading to visual object recognition. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. STDP-based spiking deep convolutional neural networks for object recognition.

    Science.gov (United States)

    Kheradpisheh, Saeed Reza; Ganjtabesh, Mohammad; Thorpe, Simon J; Masquelier, Timothée

    2018-03-01

    Previous studies have shown that spike-timing-dependent plasticity (STDP) can be used in spiking neural networks (SNN) to extract visual features of low or intermediate complexity in an unsupervised manner. These studies, however, used relatively shallow architectures, and only one layer was trainable. Another line of research has demonstrated - using rate-based neural networks trained with back-propagation - that having many layers increases the recognition robustness, an approach known as deep learning. We thus designed a deep SNN, comprising several convolutional (trainable with STDP) and pooling layers. We used a temporal coding scheme where the most strongly activated neurons fire first, and less activated neurons fire later or not at all. The network was exposed to natural images. Thanks to STDP, neurons progressively learned features corresponding to prototypical patterns that were both salient and frequent. Only a few tens of examples per category were required and no label was needed. After learning, the complexity of the extracted features increased along the hierarchy, from edge detectors in the first layer to object prototypes in the last layer. Coding was very sparse, with only a few thousands spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron. More generally, the activity of a few hundreds of such neurons contained robust category information, as demonstrated using a classifier on Caltech 101, ETH-80, and MNIST databases. We also demonstrate the superiority of STDP over other unsupervised techniques such as random crops (HMAX) or auto-encoders. Taken together, our results suggest that the combination of STDP with latency coding may be a key to understanding the way that the primate visual system learns, its remarkable processing speed and its low energy consumption. These mechanisms are also interesting for artificial vision systems, particularly for hardware
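The simplified STDP rule used in this family of models can be sketched as below. This is a sketch of the rule family, not necessarily the paper's exact parameterization; the learning rates are illustrative, and the multiplicative w * (1 - w) factor is what keeps each weight bounded in [0, 1].

```python
def stdp_update(w, t_pre, t_post, a_plus=0.004, a_minus=0.003):
    """Simplified STDP (sketch): potentiate when the presynaptic spike
    arrives before the postsynaptic spike, depress otherwise.
    The w * (1 - w) factor keeps each weight inside [0, 1]."""
    if t_pre <= t_post:
        return w + a_plus * w * (1.0 - w)
    return w - a_minus * w * (1.0 - w)

up = stdp_update(0.5, t_pre=1.0, t_post=2.0)    # causal pair -> potentiation
down = stdp_update(0.5, t_pre=2.0, t_post=1.0)  # anti-causal -> depression
```

Combined with first-spike latency coding (strongly activated neurons fire first), repeated updates of this form let neurons converge on frequent, salient input patterns without labels.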

  17. Categories children find easy and difficult to process in figural analogies

    Directory of Open Access Journals (Sweden)

    Claire E Stevenson

    2014-08-01

Full Text Available Analogical reasoning, the ability to learn about novel phenomena by relating them to structurally similar knowledge, develops with great variability in children. Furthermore, the development of analogical reasoning coincides with greater working memory efficiency and increasing knowledge of the objects and rules present in analogy problems. In figural matrices, a classical form of analogical reasoning assessment, some categories, such as color, appear easier for children to encode and infer than others, such as orientation. Yet few studies have structurally examined differences in the difficulty of rule types across different age groups. This cross-sectional study of figural analogical reasoning examined which underlying rules in figural analogies were easier or more difficult for children to process correctly. School children (N=1422, M=7.0 years, SD=21 months, range 4.5-12.5 years) were assessed in analogical reasoning using classical figural matrices and memory measures. The transformations the children had to induce and apply concerned the categories animal, color, orientation, position, quantity, and size. The roles of age and memory span in the children's ability to correctly process each type of transformation were examined using explanatory item response theory models. The results showed that with increasing age and/or greater memory span all transformations were processed more accurately. The "what" transformations (animal, color, quantity, and size) were easiest, whereas the "where" transformations (orientation and position) were most difficult. However, animal, orientation, and position became relatively easier with age and increased memory efficiency. The implications are discussed in terms of the development of visual processing in object recognition versus position and motion encoding, i.e. the ventral ("what") and dorsal ("where") pathways, respectively.
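Explanatory item response models of the kind fitted here build on logistic item response functions. A minimal Rasch-style sketch follows; theta (ability) and b (item difficulty) are generic symbols, not the study's estimated parameters.

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) item response function: probability that a child with
    ability theta correctly processes an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

An explanatory model then decomposes b (and/or theta) into effects of the transformation category, age, and memory span, which is how category difficulty can be compared across age groups.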

  18. A Visualization of Evolving Clinical Sentiment Using Vector Representations of Clinical Notes.

    Science.gov (United States)

    Ghassemi, Mohammad M; Mark, Roger G; Nemati, Shamim

    2015-09-01

Our objective in this paper was to visualize the evolution of clinical language and sentiment with respect to several common population-level categories including: time in the hospital, age, mortality, gender and race. Our analysis utilized seven years of unstructured free text notes from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) database. The text data was partitioned by category and used to generate several high dimensional vector space representations. We generated visualizations of the vector spaces using t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA). We also investigated representative words from clusters in the vector space. Lastly, we inferred the general sentiment of the clinical notes toward each parameter by gauging the average distance between positive and negative keywords and all other terms in the space. We found intriguing differences in the sentiment of clinical notes over time, outcome, and demographic features. We noted a decrease in the homogeneity and complexity of clusters over time for patients with poor outcomes. We also found greater positive sentiment for females, unmarried patients, and patients of African ethnicity.
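The distance-based sentiment measure described can be sketched with cosine similarity; the two-dimensional vectors and single-element keyword sets below are toy stand-ins for the learned note embeddings, not the paper's actual representation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def sentiment_score(term, pos_keywords, neg_keywords):
    """Positive when the term lies closer (higher cosine) to the positive
    keyword vectors than to the negative ones."""
    pos = sum(cosine(term, p) for p in pos_keywords) / len(pos_keywords)
    neg = sum(cosine(term, n) for n in neg_keywords) / len(neg_keywords)
    return pos - neg

# Toy 2-D "embeddings" standing in for learned note vectors
score = sentiment_score([1.0, 0.1], [[1.0, 0.0]], [[0.0, 1.0]])
```

Averaging such scores over all terms in a partition gives a single sentiment value per category, which is the quantity compared across time, outcome, and demographics.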

  19. The Impact of Colour, Spatial Resolution, and Presentation Speed on Category Naming

    Science.gov (United States)

    Laws, Keith R.; Hunter, Maria Z.

    2006-01-01

    Studies of neurological patients with category-specific agnosia have provided important contributions to our understanding of object recognition, although the meaning of such disorders is still hotly debated. One crucial line of research for our understanding of category effects, is through the examination of category biases in healthy normal…

  20. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
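The BIP/BIL/BSQ conversions mentioned amount to reordering sample values in a flat array. A sketch of the BIP-to-BSQ case (array contents are illustrative; NCWin itself is C++, this is just the reordering logic):

```python
def bip_to_bsq(data, rows, cols, bands):
    """Reorder band-interleaved-by-pixel (BIP) samples into band-sequential
    (BSQ): BIP stores every band value of pixel 0, then pixel 1, ...;
    BSQ stores all of band 0, then all of band 1, and so on."""
    npix = rows * cols
    bsq = [0] * (npix * bands)
    for p in range(npix):
        for b in range(bands):
            bsq[b * npix + p] = data[p * bands + b]
    return bsq
```

BIL (band-interleaved-by-line) sits between the two, grouping samples by row; the same index arithmetic applies with a per-row stride.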

  1. Staging Visual Methods

    DEFF Research Database (Denmark)

    Flensborg, Ingelise

    2009-01-01

A visual methodological approach to exploring postures and movements in young children's communication with art. How do we translate bodily postures and movements into methodological categories to access data of the interactive processes? These issues will be discussed through video materials…

  2. Find Services for People Who Are Blind or Visually Impaired

    Science.gov (United States)


  3. Sparsity-regularized HMAX for visual recognition.

    Directory of Open Access Journals (Sweden)

    Xiaolin Hu

Full Text Available About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address global structure of images in every layer, sparse HMAX addresses local to global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
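Max pooling, the step the abstract credits with introducing the higher-order regularities that make sparse coding applicable at higher levels, can be sketched as below (the 4x4 feature map is an arbitrary example):

```python
def max_pool(feature_map, size):
    """Non-overlapping max pooling over a 2-D feature map: each size x size
    block is replaced by its maximum response."""
    rows, cols = len(feature_map), len(feature_map[0])
    pooled = []
    for r in range(0, rows - rows % size, size):
        out_row = []
        for c in range(0, cols - cols % size, size):
            out_row.append(max(feature_map[i][j]
                               for i in range(r, r + size)
                               for j in range(c, c + size)))
        pooled.append(out_row)
    return pooled

fmap = [[1, 2, 5, 0],
        [3, 4, 1, 1],
        [0, 0, 2, 2],
        [0, 1, 3, 9]]
pooled = max_pool(fmap, 2)
```

Pooling makes the retained responses locally translation invariant, which is the property the subsequent sparse-coding layer exploits.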

  4. The role of hemifield sector analysis in multifocal visual evoked potential objective perimetry in the early detection of glaucomatous visual field defects

    Directory of Open Access Journals (Sweden)

    Mousa MF

    2013-05-01

Full Text Available Mohammad F Mousa,1 Robert P Cubbidge,2 Fatima Al-Mansouri,1 Abdulbari Bener3,4 1Department of Ophthalmology, Hamad Medical Corporation, Doha, Qatar; 2School of Life and Health Sciences, Aston University, Birmingham, UK; 3Department of Medical Statistics and Epidemiology, Hamad Medical Corporation, Department of Public Health, Weill Cornell Medical College, Doha, Qatar; 4Department Evidence for Population Health Unit, School of Epidemiology and Health Sciences, University of Manchester, Manchester, UK. Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests, one with the Humphrey Field Analyzer, and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal to noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P < 0.001), with 95% confidence intervals of 2.82-2.89 for the normal group, 2.25-2.29 for the glaucoma suspect group, and 1.67-1.73 for the glaucoma group. The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patient group (t-test, P < 0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P < 0.01), and only 1/11 pair was statistically significant (t-test, P < 0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma was 97% and 86…

  5. Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not.

    Science.gov (United States)

    MacDorman, Karl F; Chattopadhyay, Debaleena

    2016-01-01

Human replicas may elicit unintended cold, eerie feelings in viewers, an effect known as the uncanny valley. Masahiro Mori, who proposed the effect in 1970, attributed it to inconsistencies in the replica's realism, with some of its features perceived as human and others as nonhuman. This study aims to determine whether reducing realism consistency in visual features increases the uncanny valley effect. In three rounds of experiments, 548 participants categorized and rated humans, animals, and objects that varied from computer animated to real. Two sets of features were manipulated to reduce realism consistency. (For humans, the sets were eyes-eyelashes-mouth and skin-nose-eyebrows.) Reducing realism consistency caused humans and animals, but not objects, to appear eerier and colder. However, the predictions of a competing theory, proposed by Ernst Jentsch in 1906, were not supported: the most ambiguous representations, those eliciting the greatest category uncertainty, were neither the eeriest nor the coldest. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  6. A new 2-dimensional method for constructing visualized treatment objectives for distraction osteogenesis of the short mandible

    NARCIS (Netherlands)

    van Beek, H.

    2010-01-01

    Open bite development during distraction of the mandible is common and partly due to inaccurate planning of the treatment. Conflicting guidelines exist in the literature. A method for Visualized Treatment Objective (VTO) construction is presented as an aid for determining the correct orientation of

  7. Problem solving of student with visual impairment related to mathematical literacy problem

    Science.gov (United States)

    Pratama, A. R.; Saputro, D. R. S.; Riyadi

    2018-04-01

    A student with visual impairment of the total blind category depends on the senses of touch and hearing to obtain information. In fact, these two senses can receive less than 20% of the available information. Thus, students with visual impairment of the total blind category must face difficulty in the learning process, including in learning mathematics. This study aims to describe the problem-solving process of a student with visual impairment, total blind category, on mathematical literacy problems, based on the Polya phases. The research used a test with problems similar to the mathematical literacy problems in PISA, together with in-depth interviews. The subject of this study was a student with visual impairment, total blind category. Based on the results of the research, the student's problem-solving related to mathematical literacy across the Polya phases was quite good. In the understanding-the-problem phase, the student read the problem about twice by brushing the text and was assisted with information through hearing three times. In the devising-a-plan phase, the student summoned knowledge and experience gained previously. In the carrying-out-the-plan phase, the student implemented the plan as devised. In the looking-back phase, the student needed to check the answers three times but could not find another way.

  8. Incremental Learning of Perceptual Categories for Open-Domain Sketch Recognition

    National Research Council Canada - National Science Library

    Lovett, Andrew; Dehghani, Morteza; Forbus, Kenneth

    2007-01-01

    .... This paper describes an incremental learning technique for opendomain recognition. Our system builds generalizations for categories of objects based upon previous sketches of those objects and uses those generalizations to classify new sketches...

  9. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    OpenAIRE

    Takahiro Soshi; Norio Fujimaki; Atsushi Matsumoto; Aya S. Ihara

    2017-01-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories, because of all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal m...

  10. A Visual Description Based on Concurrent Objects (一种基于并行对象的可视化描述)

    Institute of Scientific and Technical Information of China (English)

    黄永忠; 李国巨; 郭金庚

    2001-01-01

    This paper puts forward a visual concurrent programming model based on concurrent objects, which absorbs the basic ideas of UML. Class diagrams are used to describe concurrent classes, shared classes, and general classes in SPC++, as well as the relationships among these classes. Through the visual description, the system can generate the code framework automatically.

  11. Visual Stability of Objects and Environments Viewed through Head-Mounted Displays

    Science.gov (United States)

    Ellis, Stephen R.; Adelstein, Bernard D.

    2015-01-01

    Virtual Environments (aka Virtual Reality) are again catching the public imagination, and a number of startups (e.g., Oculus) and even not-so-startup companies (e.g., Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the required dynamic fidelity, visual quality, and interesting content for the concept of VR to take off and change the world in ways it failed to do in past incarnations. Some of the surprisingly long historical background of the form of direct simulation that underlies virtual environment and augmented reality displays will be briefly reviewed. An example of a mid-1990s augmented reality display system with good dynamic performance from our lab will be used to illustrate some of the underlying phenomena and technology concerning visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.

  12. Visual search of Mooney faces

    Directory of Open Access Journals (Sweden)

    Jessica Emeline Goold

    2016-02-01

    Full Text Available Faces spontaneously capture attention. However, it is unclear which special attributes of a face underlie this effect. To address this question, we investigate how gist information, specific visual properties, and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted investigating how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images. They were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention towards a face; (2) several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection; (3) by providing participants with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention. On the other hand, gist information alone cannot account for how faces capture attention either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.

  13. Functional network connectivity underlying food processing: disturbed salience and visual processing in overweight and obese adults.

    Science.gov (United States)

    Kullmann, Stephanie; Pape, Anna-Antonia; Heni, Martin; Ketterer, Caroline; Schick, Fritz; Häring, Hans-Ulrich; Fritsche, Andreas; Preissl, Hubert; Veit, Ralf

    2013-05-01

    In order to adequately explore the neurobiological basis of eating behavior of humans and their changes with body weight, interactions between brain areas or networks need to be investigated. In the current functional magnetic resonance imaging study, we examined the modulating effects of stimulus category (food vs. nonfood), caloric content of food, and body weight on the time course and functional connectivity of 5 brain networks by means of independent component analysis in healthy lean and overweight/obese adults. These functional networks included motor sensory, default-mode, extrastriate visual, temporal visual association, and salience networks. We found an extensive modulation elicited by food stimuli in the 2 visual and salience networks, with a dissociable pattern in the time course and functional connectivity between lean and overweight/obese subjects. Specifically, only in lean subjects, the temporal visual association network was modulated by the stimulus category and the salience network by caloric content, whereas overweight and obese subjects showed a generalized augmented response in the salience network. Furthermore, overweight/obese subjects showed changes in functional connectivity in networks important for object recognition, motivational salience, and executive control. These alterations could potentially lead to top-down deficiencies driving the overconsumption of food in the obese population.

  14. Feature-Based versus Category-Based Induction with Uncertain Categories

    Science.gov (United States)

    Griffiths, Oren; Hayes, Brett K.; Newell, Ben R.

    2012-01-01

    Previous research has suggested that when feature inferences have to be made about an instance whose category membership is uncertain, feature-based inductive reasoning is used to the exclusion of category-based induction. These results contrast with the observation that people can and do use category-based induction when category membership is…

  15. Binocular Fusion and Invariant Category Learning due to Predictive Remapping during Scanning of a Depthful Scene with Eye Movements

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2015-01-01

    Full Text Available How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  16. Typical load shapes for six categories of Swedish commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Noren, C.

    1997-01-01

    In co-operation with several Swedish electricity suppliers, typical load shapes have been developed for six categories of commercial buildings located in the south of Sweden. The categories included in the study are: hotels, warehouses/grocery stores, schools with no kitchen, schools with kitchen, office buildings, and health buildings. Load shapes are developed for different mean daily outdoor temperatures and for different day types, normally standard weekdays and standard weekends. The load shapes are presented as non-dimensional normalized 1-hour loads: all measured loads for an object are divided by the object's mean load during the measuring period, and typical load shapes are developed for each category of buildings. This kept errors lower compared to the use of W/m²-based figures. Typical daytime (9 a.m. - 5 p.m.) standard deviations are 7-10% of the mean values for standard weekdays, but during very cold or warm weather conditions single objects can deviate from the typical load shape. On weekends errors are higher and, owing to very different activity levels in the buildings, it is difficult to develop weekend load shapes with good accuracy. The method presented is very easy to use for similar studies, and no building simulation programs are needed. If more load data is available, a good way to lower the errors is to make sure that every category consists only of objects with the same activity level, both on weekdays and weekends. To make it easier to use the load shapes, Excel load shape workbooks have been developed, where it is even possible to compare typical load shapes with measured data. 23 refs, 53 figs, 20 tabs
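The normalization the report describes, dividing each hourly load by the building's mean load over the measuring period, can be sketched as follows. The 24 hourly values are invented for illustration, not measured data:

```python
# Non-dimensional load shape: each hourly load divided by the building's
# mean load over the period, as described in the abstract. By construction
# the resulting shape averages to 1.0.

def normalized_load_shape(hourly_loads_kw):
    mean_load = sum(hourly_loads_kw) / len(hourly_loads_kw)
    return [load / mean_load for load in hourly_loads_kw]

# Invented 24-hour office-building profile (kW), one value per hour.
loads = [40, 38, 37, 36, 36, 40, 55, 70, 85, 90, 92, 93,
         92, 90, 88, 85, 80, 70, 60, 55, 50, 46, 44, 42]
shape = normalized_load_shape(loads)
print(round(sum(shape) / len(shape), 6))  # 1.0
```

Because the shape is dimensionless, profiles from buildings of very different sizes within a category can be averaged directly, which is what keeps the category-level errors lower than W/m²-based comparisons.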

  17. Difference in Subjective Accessibility of On Demand Recall of Visual, Taste, and Olfactory Memories

    Directory of Open Access Journals (Sweden)

    Petr Zach

    2018-01-01

    Full Text Available We present here a significant difference in evocation capability between sensory memories (visual, taste, and olfactory) across certain categories of the population. As the object for this memory recall we selected French fries, which are simple and generally known. From daily life we may intuitively feel that recall of visual and auditory memories is much better than recall of taste and olfactory ones. Our results in young participants (age 12–21 years, mostly females and some males) show low capacity for smell and taste memory recall compared to far greater visual memory recall. This situation raises the question of whether we could train smell and taste memory recall so that it becomes similar to visual or auditory recall. In our article we design a training technique for volunteers that could potentially lead to an increase in the capacity of their taste and olfactory memory recollection.

  18. Do infant Japanese macaques ( Macaca fuscata) categorize objects without specific training?

    Science.gov (United States)

    Murai, Chizuko; Tomonaga, Masaki; Kamegai, Kimi; Terazawa, Naoko; Yamaguchi, Masami K

    2004-01-01

    In the present study, we examined whether infant Japanese macaques categorize objects without any training, using a similar technique also used with human infants (the paired-preference method). During the familiarization phase, subjects were presented twice with two pairs of different objects from one global-level category. During the test phase, they were presented twice with a pair consisting of a novel familiar-category object and a novel global-level category object. The subjects were tested with three global-level categories (animal, furniture, and vehicle). It was found that they showed significant novelty preferences as a whole, indicating that they processed similarities between familiarization objects and novel familiar-category objects. These results suggest that subjects responded distinctively to objects without training, indicating the possibility that infant macaques possess the capacity for categorization.

  19. Beyond scene gist: Objects guide search more than scene background.

    Science.gov (United States)

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Automated volumetric breast density estimation: A comparison with visual assessment

    International Nuclear Information System (INIS)

    Seo, J.M.; Ko, E.S.; Han, B.-K.; Ko, E.Y.; Shin, J.H.; Hahn, S.Y.

    2013-01-01

    Aim: To compare automated volumetric breast density (VBD) measurement with visual assessment according to the Breast Imaging Reporting and Data System (BI-RADS), and to determine the factors influencing the agreement between them. Materials and methods: One hundred and ninety-three consecutive screening mammograms reported as negative were included in the study. Three radiologists assigned qualitative BI-RADS density categories to the mammograms. An automated volumetric breast-density method was used to measure VBD (% breast density) and density grade (VDG). Each case was classified into an agreement or disagreement group according to the comparison between visual assessment and VDG. The correlation between visual assessment and VDG was obtained. Various physical factors were compared between the two groups. Results: Agreement between visual assessment by the radiologists and VDG was good (ICC value = 0.757). VBD showed a highly significant positive correlation with visual assessment (Spearman's ρ = 0.754, p < 0.001). VBD and the x-ray tube target were significantly different between the agreement and disagreement groups (p = 0.02 and 0.04, respectively). Conclusion: Automated VBD is a reliable objective method to measure breast density. The agreement between VDG and visual assessment by radiologists might be influenced by physical factors.
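The Spearman correlation reported above is simply a Pearson correlation computed on ranks. A stdlib-only sketch, with tie handling by average ranks (the data in the usage example are made up, not the study's measurements):

```python
# Spearman's rank correlation: Pearson correlation of the ranked data.
# Ties receive average ranks. Example data are invented.

def _ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Any strictly increasing relation gives rho = 1, even a nonlinear one.
print(round(spearman_rho([10, 20, 30, 40], [1, 4, 9, 16]), 6))  # 1.0
```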

  1. Channels as Objects in Concurrent Object-Oriented Programming

    Directory of Open Access Journals (Sweden)

    Joana Campos

    2011-10-01

    Full Text Available There is often a sort of protocol associated with each class, stating when and how certain methods should be called. Given that this protocol is, if at all, described in the documentation accompanying the class, current mainstream object-oriented languages cannot provide for the verification of client code adherence against the sought class behaviour. We have defined a class-based concurrent object-oriented language that formalises such protocols in the form of usage types. Usage types are attached to class definitions, allowing for the specification of (1) the available methods, (2) the tests clients must perform on the result of methods, and (3) the object status - linear or shared - all of which depend on the object's state. Our work extends the recent approach on modular session types by eliminating channel operations, and defining the method call as the single communication primitive in both sequential and concurrent settings. In contrast to previous works, we define a single category for objects, instead of distinct categories for linear and for shared objects, and let linear objects evolve into shared ones. We introduce a standard sync qualifier to prevent thread interference in certain operations on shared objects. We formalise the language syntax, the operational semantics, and a type system that enforces by static typing that methods are called only when available, and by a single client if so specified in the usage type. We illustrate the language via a complete example.
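The usage types described in the abstract are enforced statically by the authors' type system. As a loose runtime analogy only (not their system), one can attach a state machine to an object and reject any method call the current state does not make available. The class, states, and protocol below are invented for illustration:

```python
# Runtime sketch of a usage protocol: a file-like object whose methods are
# available only in certain states. This mimics at run time what usage
# types enforce statically. Protocol and names are hypothetical.

class ProtocolError(Exception):
    pass

class ProtocolFile:
    # state -> methods available in that state (invented protocol)
    _allowed = {
        "CLOSED": {"open"},
        "OPEN": {"read", "close"},
    }

    def __init__(self):
        self._state = "CLOSED"

    def _require(self, method):
        if method not in self._allowed[self._state]:
            raise ProtocolError(f"{method}() not available in state {self._state}")

    def open(self):
        self._require("open")
        self._state = "OPEN"

    def read(self):
        self._require("read")
        return "data"

    def close(self):
        self._require("close")
        self._state = "CLOSED"

f = ProtocolFile()
try:
    f.read()  # protocol violation: read() before open()
except ProtocolError as e:
    print("rejected:", e)
f.open()
print(f.read())  # allowed once the object is in state OPEN
f.close()
```

The key difference from the paper's approach is that here the violation surfaces as a runtime exception, whereas a usage type rejects the offending client code at compile time.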

  2. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    Science.gov (United States)

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model on visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well circumscribed brain lesion due to stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral for the normal flow of shape and of contour information into the ventral stream system allowing to recognize objects.

  3. Timing the impact of literacy on visual processing

    Science.gov (United States)

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  4. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making the results of objective experiments close to those of subjective assessment. We believe that image regions with different degrees of visual salience should not have the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general, and weak saliency. In addition, local feature information such as blockiness, zero-crossing, and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
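The weighting idea in the abstract, counting strongly salient regions more than weakly salient ones, reduces to a weighted average of per-region quality features. The weights and feature values below are invented for illustration and are not the paper's model:

```python
# Saliency-weighted pooling of per-region quality scores: strongly salient
# regions dominate the overall score. All numbers are hypothetical.

def weighted_quality(region_scores, weights):
    """region_scores and weights are dicts keyed by saliency level."""
    total_w = sum(weights[level] for level in region_scores)
    return sum(region_scores[level] * weights[level]
               for level in region_scores) / total_w

scores = {"strong": 0.62, "general": 0.80, "weak": 0.95}  # per-region quality (invented)
weights = {"strong": 0.6, "general": 0.3, "weak": 0.1}    # salient regions count more
print(round(weighted_quality(scores, weights), 3))  # 0.707
```

With uniform weights the same data would score about 0.79, so the weighting deliberately pulls the overall score toward the quality of the regions viewers actually attend to.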

  5. Object-based spatial attention when objects have sufficient depth cues.

    Science.gov (United States)

    Takeya, Ryuji; Kasai, Tetsuko

    2015-01-01

    Attention directed to a part of an object tends to obligatorily spread over all of the spatial regions that belong to the object, which may be critical for rapid object-recognition in cluttered visual scenes. Previous studies have generally used simple rectangles as objects and have shown that attention spreading is reflected by amplitude modulation in the posterior N1 component (150-200 ms poststimulus) of event-related potentials, while other interpretations (i.e., rectangular holes) may arise implicitly in early visual processing stages. By using modified Kanizsa-type stimuli that provided less ambiguity of depth ordering, the present study examined early event-related potential spatial-attention effects for connected and separated objects, both of which were perceived in front of (Experiment 1) and in back of (Experiment 2) the surroundings. Typical P1 (100-140 ms) and N1 (150-220 ms) attention effects of ERP in response to unilateral probes were observed in both experiments. Importantly, the P1 attention effect was decreased for connected objects compared to separated objects only in Experiment 1, and the typical object-based modulations of N1 were not observed in either experiment. These results suggest that spatial attention spreads over a figural object at earlier stages of processing than previously indicated, in three-dimensional visual scenes with multiple depth cues.

  6. Contralateral delay activity tracks object identity information in visual short term memory.

    Science.gov (United States)

    Gao, Zaifeng; Xu, Xiaotian; Chen, Zhibo; Yin, Jun; Shen, Mowei; Shui, Rende

    2011-08-11

    Previous studies suggested that the ERP component contralateral delay activity (CDA) tracks the number of objects containing identity information stored in visual short term memory (VSTM). Later MEG and fMRI studies implied that its neural source lies in the superior IPS. However, since the memorized stimuli in previous studies were displayed in distinct spatial locations, CDA could possibly track object-location information instead. Moreover, a recent study implied that the activation in the superior IPS reflects the location load. The current research thus explored whether CDA tracks the object-location load or the object-identity load, and its neural sources. Participants were asked to remember one color, four identical colors, or four distinct colors. The four-identical-color condition was the critical one, because it contains the same amount of identity information as one color but the same amount of location information as four distinct colors. To ensure that the participants indeed selected four colors in the four-identical-color condition, we also split the participants into two groups (low- vs. high-capacity), analyzed the late positive component (LPC) in the prefrontal area, and collected participants' subjective reports. Our results revealed that most of the participants selected four identical colors. Moreover, regardless of capacity group, there was no difference in CDA between one color and four identical colors, yet both were lower than for four distinct colors. In addition, the source of CDA was located in the superior parietal lobule, which is very close to the superior IPS. These results support the claim that CDA tracks object identity information in VSTM. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Sleep Benefits Memory for Semantic Category Structure While Preserving Exemplar-Specific Information.

    Science.gov (United States)

    Schapiro, Anna C; McDevitt, Elizabeth A; Chen, Lang; Norman, Kenneth A; Mednick, Sara C; Rogers, Timothy T

    2017-11-01

    Semantic memory encompasses knowledge about both the properties that typify concepts (e.g. robins, like all birds, have wings) as well as the properties that individuate conceptually related items (e.g. robins, in particular, have red breasts). We investigate the impact of sleep on new semantic learning using a property inference task in which both kinds of information are initially acquired equally well. Participants learned about three categories of novel objects possessing some properties that were shared among category exemplars and others that were unique to an exemplar, with exposure frequency varying across categories. In Experiment 1, memory for shared properties improved and memory for unique properties was preserved across a night of sleep, while memory for both feature types declined over a day awake. In Experiment 2, memory for shared properties improved across a nap, but only for the lower-frequency category, suggesting a prioritization of weakly learned information early in a sleep period. The increase was significantly correlated with amount of REM, but was also observed in participants who did not enter REM, suggesting involvement of both REM and NREM sleep. The results provide the first evidence that sleep improves memory for the shared structure of object categories, while simultaneously preserving object-unique information.

  8. The impact of visual gaze direction on auditory object tracking

    OpenAIRE

    Pomper, U.; Chait, M.

    2017-01-01

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention wh...

  9. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    Science.gov (United States)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network which can be used to locate, track, identify, and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, research is done into visual feature extraction and the establishment of visual tags of the human face based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each classified face. We conducted an experiment focused on a group of face images, and the results show that the proposed algorithm has good performance and can conveniently display the visual tags of objects.
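The pipeline described above, dimensionality reduction with PCA followed by a classifier, can be sketched in miniature. Here the dominant principal component is found by power iteration, a nearest-centroid rule stands in for the SVM, and the data are invented 2-D points rather than ORL face images:

```python
# Miniature PCA-then-classify pipeline. Power iteration finds the dominant
# principal component of toy 2-D data; a nearest-centroid rule (standing in
# for the SVM in the abstract) classifies points by their 1-D projection.

def dominant_component(data, iters=200):
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    # 2x2 covariance matrix of the centered data
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(2)]
           for i in range(2)]
    v = [1.0, 0.0]
    for _ in range(iters):  # power iteration toward the top eigenvector
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    return means, v

def project(point, means, v):
    return sum((x - m) * c for x, m, c in zip(point, means, v))

# Two invented classes spread along the x = y diagonal.
class_a = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2]]
class_b = [[2.0, 2.1], [2.2, 1.9], [1.9, 2.0]]
means, v = dominant_component(class_a + class_b)

def centroid(points):
    return sum(project(p, means, v) for p in points) / len(points)

ca, cb = centroid(class_a), centroid(class_b)

def classify(point):
    z = project(point, means, v)
    return "A" if abs(z - ca) < abs(z - cb) else "B"

print(classify([0.1, 0.0]), classify([2.0, 2.0]))  # A B
```

Real eigenface-style systems keep several components and use a trained classifier such as the SVM the paper names; the single-component, nearest-centroid version here only illustrates the project-then-classify structure.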

  10. Gender differences in emotion recognition: Impact of sensory modality and emotional category.

    Science.gov (United States)

    Lambrecht, Lena; Kreifelts, Benjamin; Wildgruber, Dirk

    2014-04-01

    Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition in healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli displayed by both genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in the recognition of emotional prosody. This effect was partially mediated by hearing loss at the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman than by a man.

  11. Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.

    Science.gov (United States)

    Bannert, Michael M; Bartels, Andreas

    2018-04-11

    Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. Color codes in hV4 also predicted behavioral performance in an
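
    The cross-decoding logic of the study design (train on responses to physical colour, test on imagery) can be illustrated with synthetic "voxel patterns". The shared templates, noise levels, and correlation-based classifier below are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical voxel patterns: one template per colour, shared (by assumption)
# between perception and imagery, plus independent measurement noise.
templates = {c: rng.normal(0, 1, 30) for c in ("red", "green", "yellow")}
perceived = {c: templates[c] + rng.normal(0, 0.5, 30) for c in templates}
imagined = {c: templates[c] + rng.normal(0, 0.5, 30) for c in templates}

def decode(pattern, training):
    """Label a test pattern with the training condition it correlates with most."""
    return max(training, key=lambda c: np.corrcoef(pattern, training[c])[0, 1])

# Train on responses to physical colour, test on imagery (the study's design).
hits = sum(decode(imagined[c], perceived) == c for c in templates)
```

    Successful cross-decoding (here, all three imagined colours recovered) is what licenses the inference that the two conditions share a neural code.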

  12. On the time required for identification of visual objects

    DEFF Research Database (Denmark)

    Petersen, Anders

    The starting point for this thesis is a review of Bundesen’s theory of visual attention. This theory has been widely accepted as an appropriate model for describing data from an important class of psychological experiments known as whole and partial report. Analysing data from this class of exper......The starting point for this thesis is a review of Bundesen’s theory of visual attention. This theory has been widely accepted as an appropriate model for describing data from an important class of psychological experiments known as whole and partial report. Analysing data from this class...... of experiments with the help of the theory of visual attention – have proven to be an effective approach to examine cognitive parameters that are essential for a broad range of different patient groups. The theory of visual attention relies on a psychometric function that describes the ability to identify......, with the dataset that we collected, to directly analyse how confusability develops as a certain letter is exposed for increasingly longer time. An important scientific question is what shapes the psychometric function. It is conceivable that the function reflects both limitations and structure of the physical...

  13. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Sun, Zhenan; Zhang, Hui; Tan, Tieniu; Wang, Jianyu

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing bag-of-words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
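
    The coarse-to-fine bag-of-words idea the HVC builds on can be sketched as a two-level hard-assignment codebook: route each descriptor to its nearest coarse node, then to a fine word under that node, and accumulate a word histogram. The codebook values and descriptor distribution below are toy assumptions; the actual HVC combines a vocabulary tree with LLC's soft, locality-constrained coding rather than hard assignment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-level vocabulary tree: 2 coarse nodes, 3 fine words each.
coarse = np.array([[0.0, 0.0], [10.0, 10.0]])
fine = {0: np.array([[-1, 0], [0, 1], [1, 0]], float),
        1: np.array([[9, 10], [10, 11], [11, 10]], float)}

def encode(descriptors):
    """Coarse-to-fine hard assignment: nearest coarse node, then nearest fine
    word under it; return a normalised word histogram (a toy HVC analogue)."""
    hist = np.zeros(sum(len(f) for f in fine.values()))
    for d in descriptors:
        c = int(np.argmin(np.linalg.norm(coarse - d, axis=1)))
        w = int(np.argmin(np.linalg.norm(fine[c] - d, axis=1)))
        hist[c * 3 + w] += 1
    return hist / hist.sum()

descriptors = rng.normal(0, 0.3, (50, 2))  # texture patches near coarse node 0
h = encode(descriptors)
```

    The coarse step prunes the search: only the fine words under one tree branch are ever compared against a descriptor, which is what makes large codebooks tractable.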

  14. Evaluation of Post-Operative Visual Outcomes of Cataract Surgery in ...

    African Journals Online (AJOL)

    Data was compiled on demographic characteristics, pre- and postoperative visual acuities and surgical complications. The preoperative and postoperative visual status was classified using the World Health Organization (WHO) category of Visual Impairment and Blindness. The standard parameters of assessing outcome of ...

  15. Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.

    Science.gov (United States)

    Saiki, Jun; Miyatsuji, Hirofumi

    2009-03-23

    Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.

  16. Mobile visual object identification: from SIFT-BoF-RANSAC to Sketchprint

    Science.gov (United States)

    Voloshynovskiy, Sviatoslav; Diephuis, Maurits; Holotyak, Taras

    2015-03-01

    Mobile object identification based on visual features finds many applications in interaction with physical objects and in security. Discriminative and robust content representation plays a central role in object and content identification. Complex post-processing methods are used to compress descriptors and their geometrical information, aggregate them into more compact and discriminative representations, and finally re-rank the results based on the similarity geometries of descriptors. Unfortunately, most existing descriptors are not very robust or discriminative once applied to varied content such as real images, text, or noise-like microstructures, in addition to requiring at least 500-1,000 descriptors per image for reliable identification. At the same time, geometric re-ranking procedures are still too complex to be applied to the numerous candidates obtained from feature-similarity-based search alone. This restricts the list of candidates to fewer than 1,000, which causes a higher probability of a miss. In addition, the security and privacy of content representation have become a hot research topic in the multimedia and security communities. In this paper, we introduce a new framework for non-local content representation based on SketchPrint descriptors. It extends the properties of local descriptors to a more informative and discriminative, yet geometrically invariant, content representation. In particular, it allows images to be compactly represented by 100 SketchPrint descriptors without being fully dependent on re-ranking methods. We consider several use cases, applying SketchPrint descriptors to natural images, text documents, packages and micro-structures, and compare them with traditional local descriptors.
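
    The descriptor-similarity search that SketchPrint aims to make less re-ranking-dependent typically starts with nearest-neighbour matching under Lowe's ratio test. A minimal sketch with toy 2-D descriptors (real SIFT descriptors are 128-D):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test: keep a match only
    when the best distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

# Toy descriptors: the first two rows of desc_a are noisy copies of rows in
# desc_b; the third is ambiguous (equidistant from two entries) and rejected.
desc_b = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
desc_a = np.array([[1.01, 0.02], [0.01, 0.99], [0.5, 0.5]])
matches = ratio_match(desc_a, desc_b)
```

    The ratio test is what keeps ambiguous descriptors out of the candidate list; geometric re-ranking (e.g. RANSAC) then verifies the surviving matches.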

  17. Procedural-Based Category Learning in Patients with Parkinson's Disease: Impact of Category Number and Category Continuity

    Directory of Open Access Journals (Sweden)

    J. Vincent eFiloteo

    2014-02-01

    Previously we found that Parkinson's disease (PD) patients are impaired in procedural-based category learning when category membership is defined by a nonlinear relationship between stimulus dimensions, but these same patients are normal when the rule is defined by a linear relationship (Filoteo et al., 2005; Maddox & Filoteo, 2001). We suggested that PD patients' impairment was due to a deficit in recruiting 'striatal units' to represent complex nonlinear rules. In the present study, we further examined the nature of PD patients' procedural-based deficit in two experiments designed to examine the impact of (1) the number of categories, and (2) category discontinuity on learning. Results indicated that PD patients were impaired only under discontinuous category conditions but were normal when the number of categories was increased from two to four. The lack of impairment in the four-category condition suggests normal integrity of the striatal medium spiny cells involved in procedural-based category learning. In contrast, and consistent with our previous observation of a nonlinear deficit, the finding that PD patients were impaired in the discontinuous condition suggests that these patients are impaired when they have to associate perceptually distinct exemplars with the same category. Theoretically, this deficit might be related to dysfunctional communication among medium spiny neurons within the striatum, particularly given that these are cholinergic neurons and a cholinergic deficiency could underlie some of PD patients' cognitive impairment.

  18. Clonal selection versus clonal cooperation: the integrated perception of immune objects [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Serge Nataf

    2016-09-01

    Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne, who introduced the concepts of antigen "recognition" and immune "memory". Since then, however, it appears that only the cognitive immunology paradigm proposed by Irun Cohen has attempted to further theorize immune system function through the prism of the neurosciences. The present paper aims to revisit this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". Thus, in the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step then instruct an integrated perception task performed by other neuronal networks. Such a higher-order perception step is by essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells. Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single-cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses rely essentially on clonal cooperation rather than clonal selection.

  19. Top-level categories of constitutively organized material entities--suggestions for a formal top-level ontology.

    Directory of Open Access Journals (Sweden)

    Lars Vogt

    2011-04-01

    Application-oriented ontologies are important for reliably communicating and managing data in databases. Unfortunately, they often differ in the definitions they use and thus do not live up to their potential. This problem can be reduced by using a standardized and ontologically consistent template for the top-level categories from a top-level formal foundational ontology. This would support ontological consistency within application-oriented ontologies and compatibility between them. The Basic Formal Ontology (BFO) is such a foundational ontology for the biomedical domain that has been developed following the single-inheritance policy. It provides the top-level template within the Open Biological and Biomedical Ontologies Foundry. If it is to live up to its expected role, its three top-level categories of material entity (i.e., 'object', 'fiat object part', 'object aggregate') must be exhaustive, i.e. every concrete material entity must instantiate exactly one of them. By systematically evaluating all possible basic configurations of material building blocks, we show that BFO's top-level categories of material entity are not exhaustive. We provide examples from biology and everyday life that demonstrate the necessity of two additional categories: 'fiat object part aggregate' and 'object with fiat object part aggregate'. By distinguishing topological coherence, topological adherence, and metric proximity, we furthermore provide a differentiation of clusters and groups as two distinct subcategories for each of the three categories of material entity aggregates, resulting in six additional subcategories of material entity. We suggest extending BFO to incorporate two additional categories of material entity as well as two subcategories for each of the three categories of material entity aggregates. With these additions, BFO would exhaustively cover all top-level types of material entity that application-oriented ontologies may use as templates.

  20. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  1. Contested Categories

    DEFF Research Database (Denmark)

    Drawing on social science perspectives, Contested Categories presents a series of empirical studies that engage with the often shifting and day-to-day realities of life sciences categories. In doing so, it shows how such categories remain contested and dynamic, and that the boundaries they create...

  2. Rich in vitamin C or just a convenient snack? Multiple-category reasoning with cross-classified foods.

    Science.gov (United States)

    Hayes, Brett K; Kurniawan, Hendy; Newell, Ben R

    2011-01-01

    Two studies examined multiple category reasoning in property induction with cross-classified foods. Pilot tests identified foods that were more typical of a taxonomic category (e.g., "fruit"; termed 'taxonomic primary') or a script based category (e.g., "snack foods"; termed 'script primary'). They also confirmed that taxonomic categories were perceived as more coherent than script categories. In Experiment 1 participants completed an induction task in which information from multiple categories could be searched and combined to generate a property prediction about a target food. Multiple categories were more often consulted and used in prediction for script primary than for taxonomic primary foods. Experiment 2 replicated this finding across a range of property types but found that multiple category reasoning was reduced in the presence of a concurrent cognitive load. Property type affected which categories were consulted first and how information from multiple categories was weighted. The results show that multiple categories are more likely to be used for property predictions about cross-classified objects when an object is primarily associated with a category that has low coherence.

  3. Feature Types and Object Categories: Is Sensorimotoric Knowledge Different for Living and Nonliving Things?

    Science.gov (United States)

    Ankerstein, Carrie A.; Varley, Rosemary A.; Cowell, Patricia E.

    2012-01-01

    Some models of semantic memory claim that items from living and nonliving domains have different feature-type profiles. Data from feature generation and perceptual modality rating tasks were compared to evaluate this claim. Results from two living (animals, fruits/vegetables) and two nonliving (tools, vehicles) categories showed that…

  4. Misremembering emotion: Inductive category effects for complex emotional stimuli.

    Science.gov (United States)

    Corbin, Jonathan C; Crawford, L Elizabeth; Vavra, Dylan T

    2017-07-01

    Memories of objects are biased toward what is typical of the category to which they belong. Prior research on memory for emotional facial expressions has demonstrated a bias towards an emotional expression prototype (e.g., slightly happy faces are remembered as happier). We investigate an alternate source of bias in memory for emotional expressions - the central tendency bias. The central tendency bias skews reconstruction of a memory trace towards the center of the distribution for a particular attribute. This bias has been attributed to a Bayesian combination of an imprecise memory for a particular object with prior information about its category. Until now, studies examining the central tendency bias have focused on simple stimuli. We extend this work to socially relevant, complex, emotional facial expressions. We morphed facial expressions on a continuum from sad to happy. Different ranges of emotion were used in four experiments in which participants viewed individual expressions and, after a variable delay, reproduced each face by adjusting a morph to match it. Estimates were biased toward the center of the presented stimulus range, and the bias increased at longer memory delays, consistent with the Bayesian prediction that as trace memory loses precision, category knowledge is given more weight. The central tendency effect persisted within and across emotion categories (sad, neutral, and happy). This article expands the scope of work on inductive category effects to memory for complex, emotional stimuli.
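
    The Bayesian combination invoked above has a simple closed form: the reconstruction is a precision-weighted average of the noisy memory trace and the category mean, so lower trace precision (a longer delay) means a stronger pull toward the centre. The numbers below are illustrative, not fitted values from the experiments.

```python
# Bayesian account of the central tendency bias: combine a noisy memory trace
# with the category prior; the weight on the trace falls as its variance grows.
def reconstruct(trace, prior_mean, trace_var, prior_var):
    w = prior_var / (prior_var + trace_var)  # weight on the memory trace
    return w * trace + (1 - w) * prior_mean

face = 80.0    # true "happiness" of the studied face (arbitrary units)
center = 50.0  # mean of the presented stimulus range

short_delay = reconstruct(face, center, trace_var=4.0, prior_var=100.0)
long_delay = reconstruct(face, center, trace_var=64.0, prior_var=100.0)
# The long-delay reconstruction lies closer to the range centre, reproducing
# the finding that bias toward the centre increases with memory delay.
```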

  5. A validated set of tool pictures with matched objects and non-objects for laterality research.

    Science.gov (United States)

    Verma, Ark; Brysbaert, Marc

    2015-01-01

    Neuropsychological and neuroimaging research has established that knowledge related to tool use and tool recognition is lateralized to the left cerebral hemisphere. Recently, behavioural studies with the visual half-field technique have confirmed the lateralization. A limitation of this research was that different sets of stimuli had to be used for the comparison of tools to other objects and objects to non-objects. Therefore, we developed a new set of stimuli containing matched triplets of tools, other objects and non-objects. With the new stimulus set, we successfully replicated the findings of no visual field advantage for objects in an object recognition task combined with a significant right visual field advantage for tools in a tool recognition task. The set of stimuli is available as supplemental data to this article.

  6. Luminance gradient at object borders communicates object location to the human oculomotor system.

    Science.gov (United States)

    Kilpeläinen, Markku; Georgeson, Mark A

    2018-01-25

    The locations of objects in our environment constitute arguably the most important piece of information our visual system must convey to facilitate successful visually guided behaviour. However, the relevant objects are usually not point-like and do not have one unique location attribute. Relatively little is known about how the visual system represents the location of such large objects, as visual processing is, at both the neural and the perceptual level, highly edge-dominated. In this study, human observers made saccades to the centres of luminance-defined squares (width 4 deg), which appeared at random locations (8 deg eccentricity). The phase structure of the square was manipulated such that the points of maximum luminance gradient at the square's edges shifted from trial to trial. The average saccade endpoints of all subjects followed those shifts in remarkable quantitative agreement. Further experiments showed that the shifts were caused by the edge manipulations, not by changes in luminance structure near the centre of the square or outside the square. We conclude that the human visual system programs saccades to large luminance-defined square objects based on edge locations derived from the points of maximum luminance gradient at the square's edges.
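
    The "point of maximum luminance gradient" can be made concrete on a toy 1-D luminance profile: a bright square with blurred (logistic) edges, whose steepest rising and falling points define the edge locations, and whose midpoint gives a candidate saccade target. The profile parameters are illustrative, not the stimuli used in the study.

```python
import numpy as np

# Toy 1-D luminance profile of a bright square (width 4) on a dark background,
# with logistic-blurred edges.
x = np.linspace(-5, 5, 1001)
width, blur = 4.0, 0.5
profile = 1 / (1 + np.exp(-(x + width / 2) / blur)) \
        - 1 / (1 + np.exp(-(x - width / 2) / blur))

gradient = np.gradient(profile, x)
left_edge = x[np.argmax(gradient)]   # steepest rising luminance
right_edge = x[np.argmin(gradient)]  # steepest falling luminance
saccade_target = (left_edge + right_edge) / 2  # midpoint of gradient extrema
```

    Shifting where the gradient peaks (e.g. by manipulating phase structure) shifts `left_edge` and `right_edge`, and hence the computed target, mirroring the saccade-endpoint shifts reported above.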

  7. A bio-inspired method and system for visual object-based attention and segmentation

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
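
    The flooding step that breaks an image into proto-objects can be sketched as a plain connected-component flood fill over a binary feature map; this is a simplified stand-in for the authors' feature-density-based algorithm, with a hypothetical toy map.

```python
from collections import deque

# Toy feature map: 1 = feature present, 0 = background. A flood fill groups
# 4-connected feature locations into "proto-object" regions.
grid = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]

def proto_objects(grid):
    rows, cols, seen, regions = len(grid), len(grid[0]), set(), []
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] and (r0, c0) not in seen:
                region, queue = [], deque([(r0, c0)])
                seen.add((r0, c0))
                while queue:  # breadth-first flood from the seed cell
                    r, c = queue.popleft()
                    region.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and grid[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            queue.append((nr, nc))
                regions.append(region)
    return regions

regions = proto_objects(grid)  # two connected proto-objects in this map
```

    Each returned region plays the role of a proto-object whose cells can then be handed to a recognition module; joining similar adjacent regions, as the paper describes, would be a further merging pass over this output.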

  8. Assessment of visual disability using visual evoked potentials.

    Science.gov (United States)

    Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun

    2012-08-06

    The purpose of this study is to validate the use of the visual evoked potential (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and to determine whether it is possible to predict visual acuity in disability assessment for registering visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes, ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitudes and latencies were analyzed, and correlations with visual acuity (logMAR) were derived from the 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude with visual acuity (logMAR) in the 19 optic neuritis patients confirmed the relationship between visual acuity and amplitude. We calculated the objective visual acuity (logMAR) of 16 eyes from 10 patients to diagnose the presence or absence of visual disability using relations derived from the 20 normal and 18 amblyopic eyes. Linear regression analyses between the amplitude of pattern visual evoked potentials and visual acuity (logMAR) of 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects were significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences for the visual acuity prediction values obtained by substituting the amplitude values of the 19 eyes with optic neuritis into the function. We calculated the objective visual acuity of 16 eyes of 10 patients to diagnose the presence or absence of visual disability using the relation y = -0.072x + 1.22. This resulted in a prediction reference of visual acuity associated with malingering vs. real
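
    Applying the reported regression is a one-line computation; the function below uses the published coefficients, while the example amplitudes (assumed here to be in microvolts, a unit the abstract does not state) are hypothetical.

```python
# Reported regression relating pattern-VEP amplitude to visual acuity:
# logMAR = -0.072 * amplitude + 1.22 (from the abstract above).
def predicted_logmar(amplitude_uv):
    return -0.072 * amplitude_uv + 1.22

# A larger VEP amplitude predicts better (lower) logMAR acuity.
acuity_low_amp = predicted_logmar(5.0)    # approx. 0.86 logMAR
acuity_high_amp = predicted_logmar(15.0)  # approx. 0.14 logMAR
```

    Comparing such a predicted acuity against the acuity a patient reports is the basis for the malingering check the study describes.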

  9. Reader error, object recognition, and visual search

    Science.gov (United States)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  10. More than words: Adults learn probabilities over categories and relationships between them.

    Science.gov (United States)

    Hudson Kam, Carla L

    2009-04-01

    This study examines whether human learners can acquire statistics over abstract categories and their relationships to each other. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g. SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than the category as a whole. However, individual nouns had no impact on the word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. Results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.

  11. Sandwich masking eliminates both visual awareness of faces and face-specific brain activity through a feedforward mechanism.

    Science.gov (United States)

    Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G

    2011-06-07

    It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher-level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, the masking appeared to strongly attenuate earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher-level visual system pathways specific to object-category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.

  12. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    Science.gov (United States)

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  13. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    Science.gov (United States)

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  14. Attentional Bias in Human Category Learning: The Case of Deep Learning.

    Science.gov (United States)

    Hanson, Catherine; Caglar, Leyla Roskan; Hanson, Stephen José

    2018-01-01

    . Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.

  15. Attentional Bias in Human Category Learning: The Case of Deep Learning

    Directory of Open Access Journals (Sweden)

    Catherine Hanson

    2018-04-01

    structures. Third, we show that even BP can exhibit human-like learning differences between integral and separable category structures when high dimensional stimuli (face exemplars) are used. We conclude, after visualizing the hidden unit representations, that DL appears to extend initial learning due to feature development, thereby reducing destructive feature competition by incrementally refining feature detectors throughout later layers until a tipping point (in terms of error) is reached, resulting in rapid asymptotic learning.

  16. COGNITIVE-COMMUNICATIVE PERSONALITY CATEGORY IN THE KAZAKH LANGUAGE

    Directory of Open Access Journals (Sweden)

    Orynay Sagingalievna Zhubaeva

    2018-02-01

    The purpose of the research is to reveal the anthropocentric character of grammatical categories in their meaning and functioning. Materials and methods. In line with the research objectives and goals, the following methods were used: the descriptive method, general scientific methods of analysis and synthesis, cognitive analysis, experiment, contextual analysis, structural-semantic analysis, transformation technique, and comparative analysis. Results. For the first time in Kazakh linguistics, the substantive aspect of grammatical categories is characterized as a result of both conceptualization and categorization processes. A generalization and comparative analysis of the nature and forms in which the human factor is reflected in Kazakh grammatical categories reveals their national-cultural specificity. Practical implications. The research materials can be used in theoretical courses on grammar and linguistics, as well as in the development of special courses on cognitive linguistics, cognitive grammar, etc.

  17. Bootstrapping Relational Affordances of Object Pairs using Transfer

    DEFF Research Database (Denmark)

    Fichtl, Severin; Kraft, Dirk; Krüger, Norbert

    2018-01-01

    leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new...... affordance predictor is augmented with the output of previously learnt affordances. In the second approach (category based bootstrapping), we form categories that capture underlying commonalities of a pair of existing affordances and augment the state-space with this category classifier’s output. In addition......, we introduce a novel heuristic, which suggests how a large set of potential affordance categories can be pruned to leave only those categories which are most promising for bootstrapping future affordances. Our results show that both bootstrapping approaches outperform learning without bootstrapping...
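    The "direct bootstrapping" idea can be sketched as follows (a minimal illustration with hypothetical names, and a stub function standing in for a previously trained Random Forest): the outputs of already-learnt affordance predictors are appended to the raw visual features, so a new predictor trains on an augmented state-space.

```python
from typing import Callable, List, Sequence

# A previously learnt affordance predictor is modelled here as a function
# from a visual feature vector to a predicted affordance score.
PriorPredictor = Callable[[Sequence[float]], float]

def stackable_prior(features: Sequence[float]) -> float:
    """Hypothetical learnt 'stackable' affordance: tall, flat-topped objects."""
    height, flatness = features[0], features[1]
    return 1.0 if height > 0.5 and flatness > 0.5 else 0.0

def augment_state(features: Sequence[float],
                  priors: List[PriorPredictor]) -> List[float]:
    """Direct bootstrapping: append prior affordance outputs to the raw
    visual features, giving the new predictor a richer state-space."""
    return list(features) + [p(features) for p in priors]

raw = [0.9, 0.8, 0.1]                      # e.g. height, flatness, roundness
augmented = augment_state(raw, [stackable_prior])
print(augmented)   # raw features plus the prior's output
```

    Category-based bootstrapping would differ only in what gets appended: the output of a classifier over affordance categories rather than the individual predictors' outputs.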

  18. Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.

    Science.gov (United States)

    Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro

    2018-03-01

    In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented real (AR) objects can control human interpretation and reasoning about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.
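    The inference types at issue can be checked mechanically. A small sketch (pure truth-table enumeration, not the study's materials) confirms that modus ponens is valid while a classic invalid form, inferring "P" from "if P then Q" together with "Q", is not:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    """An argument is valid iff every truth assignment that makes all
    premises true also makes the conclusion true."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: from 'if P then Q' and 'P', infer 'Q'  -> valid
mp = valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q)

# Affirming the consequent: from 'if P then Q' and 'Q', infer 'P' -> invalid
ac = valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p)

print(mp, ac)   # True False
```

    The invalid form fails because the premises are also satisfied when P is false and Q is true; the study's claim is that real objects over-specify one such state, whereas AR objects keep the multiple possibilities in view.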

  19. Gender differences in category-specificity do not reflect innate dispositions

    DEFF Research Database (Denmark)

    Gerlach, Christian; Gainotti, Guido

    2016-01-01

    It is well established that certain categories of objects are processed more efficiently than others in specific tasks; a phenomenon known as category-specificity in perceptual and conceptual processing. In the last two decades there have also been several reports of gender differences in categor...... of this discrepancy is that previous reports of gender differences may have reflected differences in familiarity originating from socially-based gender roles....

  20. Subjective and objective measurements of visual fatigue induced by excessive disparities in stereoscopic images

    Science.gov (United States)

    Jung, Yong Ju; Kim, Dongchan; Sohn, Hosik; Lee, Seong-il; Park, Hyun Wook; Ro, Yong Man

    2013-03-01

    As stereoscopic displays have spread, it is important to know what causes visual fatigue and discomfort, and what happens in the visual system beyond the retina, while viewing stereoscopic 3D images on such displays. In this study, functional magnetic resonance imaging (fMRI) was used as an objective measure to identify the human brain regions involved in processing stereoscopic stimuli with excessive disparities. Based on the subjective measurement results, we selected two subsets of comfort videos and discomfort videos from our dataset. An fMRI experiment was then conducted with these subsets in order to identify which brain regions were activated while viewing the discomfort videos on a stereoscopic display. We found that, when viewing a stereoscopic display, the right middle frontal gyrus, the right inferior frontal gyrus, the right intraparietal lobule, the right middle temporal gyrus, and the bilateral cuneus were significantly activated during the processing of excessive disparities, compared to small disparities (< 1 degree).
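    The "1 degree" figure refers to angular disparity. For reference, the angle subtended by an on-screen disparity follows from the screen separation and viewing distance via standard geometry (the specific numbers below are illustrative, not from the study):

```python
import math

def angular_disparity_deg(screen_disparity_m, viewing_distance_m):
    """Visual angle (in degrees) subtended by an on-screen disparity,
    using theta = 2 * atan(s / (2 * d))."""
    return math.degrees(2 * math.atan(screen_disparity_m / (2 * viewing_distance_m)))

# Illustrative setup: 1 cm of screen disparity viewed from 2 m
theta = angular_disparity_deg(0.01, 2.0)
print(f"{theta:.3f} degrees")
```

    On this geometry, reaching disparities near or beyond the 1-degree bound requires proportionally larger screen separations or closer viewing distances, which is where the "excessive disparity" condition lives.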