WorldWideScience

Sample records for visual object categories

  1. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual object recognition. RACE assumes two operations: shape configuration and selection. Shape configuration refers to the binding of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts (operation 1). The output of the shape configuration operation is a description that can be matched with structural representations of whole objects or object parts stored in visual long-term memory. The process of finding a match between the configured description and stored object representations is thought of as a race among stored object representations that compete ...

  2. Category selectivity in human visual cortex: Beyond visual object recognition

    NARCIS (Netherlands)

    Peelen, M.V.; Downing, P.E.

    2017-01-01

    Human ventral temporal cortex shows a categorical organization, with regions responding selectively to faces, bodies, tools, scenes, words, and other categories. Why is this? Traditional accounts explain category selectivity as arising within a hierarchical system dedicated to visual object

  3. Category-specificity in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2009-01-01

    binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies...

  4. Normal and abnormal category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    These limitations have led to the development of a new model of category-effects at pre-semantic stages in visual object processing, which can be considered a further development of the Cascade model: the Pre-semantic Account of Category-Effects (PACE). Here I give a slightly historical, but primarily integrative ...

  5. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    Science.gov (United States)

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  6. Visual search for object categories is predicted by the representational architecture of high-level visual cortex.

    Science.gov (United States)

    Cohen, Michael A; Alvarez, George A; Nakayama, Ken; Konkle, Talia

    2017-01-01

    Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on the similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, a body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways, when considering both macroscale sectors and smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. Here, we ask which neural regions have response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex.
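
    The brain/behavior analysis described above can be illustrated with a minimal representational-similarity sketch: build a neural representational dissimilarity matrix from region-level response patterns and rank-correlate it with a matrix of pairwise search times. Everything below (array shapes, simulated data, variable names) is an illustrative assumption, not material from the study.

```python
# Sketch of representational similarity analysis (RSA) relating neural
# category representations to visual search times. Assumes hypothetical
# inputs: `responses` (n_categories x n_voxels) from one brain region and
# `search_rt` (n_categories x n_categories) of pairwise target/distractor RTs.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_categories, n_voxels = 8, 200
responses = rng.standard_normal((n_categories, n_voxels))   # placeholder data
search_rt = squareform(rng.uniform(0.4, 1.2, n_categories * (n_categories - 1) // 2))

# Neural representational dissimilarity matrix (1 - Pearson correlation).
neural_rdm = squareform(pdist(responses, metric="correlation"))

# Compare the upper triangles of the two matrices with a rank correlation:
# the question is whether neural pattern dissimilarity tracks search times.
triu = np.triu_indices(n_categories, k=1)
rho, p = spearmanr(neural_rdm[triu], search_rt[triu])
print(f"brain/behavior correlation: rho = {rho:.2f}, p = {p:.3f}")
```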

  7. The role of object categories in hybrid visual and memory search.

    Science.gov (United States)

    Cunningham, Corbin A; Wolfe, Jeremy M

    2014-08-01

    In hybrid search, observers search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, observers searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  8. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

    Science.gov (United States)

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Categorization of images containing visual objects can be successfully recognized using single-trial electroencephalograph (EEG) measured when subjects view images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combining features from multiple ERP components, and showed that combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
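
    A minimal sketch of the kind of analysis described above: extract mean amplitudes in windows roughly corresponding to P1, N1, P2a and P2b, concatenate them, and classify four categories with linear discriminant analysis. The epoch dimensions, window boundaries, and simulated data are assumptions; scikit-learn's LDA stands in for the Fisher-LDA used in the study.

```python
# Sketch of four-category classification from single-trial ERP features.
# Epoch counts, channel counts, and component windows are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples, sfreq = 400, 32, 250, 500  # 500 ms epochs
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 4, n_trials)  # faces, buildings, cats, cars

# Mean amplitude per channel in windows roughly covering P1, N1, P2a, P2b.
windows_ms = {"P1": (80, 120), "N1": (140, 200), "P2a": (200, 260), "P2b": (260, 320)}

def window_features(epochs, start_ms, stop_ms):
    start, stop = int(start_ms * sfreq / 1000), int(stop_ms * sfreq / 1000)
    return epochs[:, :, start:stop].mean(axis=2)   # (n_trials, n_channels)

# Combine features from all components into one feature vector per trial.
X = np.hstack([window_features(epochs, *w) for w in windows_ms.values()])
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```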

  9. Animate and Inanimate Objects in Human Visual Cortex: Evidence for Task-Independent Category Effects

    Science.gov (United States)

    Wiggett, Alison J.; Pritchard, Iwan C.; Downing, Paul E.

    2009-01-01

    Evidence from neuropsychology suggests that the distinction between animate and inanimate kinds is fundamental to human cognition. Previous neuroimaging studies have reported that viewing animate objects activates ventrolateral visual brain regions, whereas inanimate objects activate ventromedial regions. However, these studies have typically…

  10. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    Directory of Open Access Journals (Sweden)

    Britta Graewe

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low-depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  11. Fourier power, subjective distance and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Directory of Open Access Journals (Sweden)

    Mark Daniel Lescroart

    2015-11-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1,386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue.

  12. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
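
    The voxel-wise modeling procedure lends itself to a short sketch: fit a linear encoding model per voxel on training scenes and score it by the variance explained in a withheld set. The feature and voxel counts below are placeholders, and ridge regression is used as a common regularized stand-in for the linear regression reported in the abstract.

```python
# Sketch of voxel-wise modeling (VM): fit an encoding model per voxel on
# training scenes, then evaluate variance explained on withheld scenes.
# Feature matrices stand in for Fourier-power, distance, or category features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_train, n_test, n_features, n_voxels = 1000, 386, 50, 300
X_train = rng.standard_normal((n_train, n_features))     # e.g. object-category features
X_test = rng.standard_normal((n_test, n_features))
true_w = rng.standard_normal((n_features, n_voxels))      # simulated ground truth
Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

model = Ridge(alpha=10.0)      # regularized stand-in for plain linear regression
model.fit(X_train, Y_train)    # one weight vector per voxel, fit jointly
pred = model.predict(X_test)

# Variance explained in the withheld data, computed separately for each voxel.
ss_res = ((Y_test - pred) ** 2).sum(axis=0)
ss_tot = ((Y_test - Y_test.mean(axis=0)) ** 2).sum(axis=0)
r2_per_voxel = 1 - ss_res / ss_tot
print("median held-out R^2 across voxels:", np.median(r2_per_voxel))
```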

  13. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    Science.gov (United States)

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  14. Selecting category specific visual information: Top-down and bottom-up control of object based attention.

    Science.gov (United States)

    Corradi-Dell'Acqua, Corrado; Fink, Gereon R; Weidner, Ralph

    2015-09-01

    The ability to select, within the complexity of sensory input, the information most relevant for our purposes is influenced by both internal settings (i.e., top-down control) and salient features of external stimuli (i.e., bottom-up control). Here we used fMRI to investigate the neural underpinnings of the interaction of top-down and bottom-up processes, as well as their effects on extrastriate areas processing visual stimuli in a category-selective fashion. We presented subjects with photos of bodies or buildings embedded in frequency-matched visual noise. Stimulus saliency changed gradually with the degree to which the photos stood out from the surrounding noise (hence generating stronger bottom-up control signals). Top-down settings were manipulated via instruction: participants were asked to attend to one stimulus category (i.e., "is there a body?" or "is there a building?"). Highly salient stimuli that were inconsistent with participants' attentional top-down template activated the inferior frontal junction and dorsal parietal regions bilaterally. Stimuli consistent with participants' current attentional set additionally activated insular cortex and the parietal operculum. Furthermore, the extrastriate body area (EBA) exhibited increased neural activity when attention was directed to bodies. However, the latter effect was found only when stimuli were presented at intermediate saliency levels, thus suggesting a top-down modulation of this region only in the presence of weak bottom-up signals. Taken together, our results highlight the role of the inferior frontal junction and posterior parietal regions in integrating bottom-up and top-down attentional control signals. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    Directory of Open Access Journals (Sweden)

    Haline E. Schendan

    2015-09-01

    People categorize objects slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response-related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery, which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition.

  16. Now you see it, now you don’t: The context dependent nature of category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Toft, Kristian Olesen

    2011-01-01

    In two experiments, we test predictions regarding processing advantages/disadvantages for natural objects and artefacts in visual object recognition. Varying three important parameters (degree of perceptual differentiation, stimulus format, and stimulus exposure duration) we show how different ... category-effects are products of common operations which are differentially affected by the structural similarity among objects (with natural objects being more structurally similar than artefacts). The potentially most important aspect of the present study is the demonstration that category-effects are very context dependent ...

  17. Category vs. Object Knowledge in Category-Based Induction

    Science.gov (United States)

    Murphy, Gregory L.; Ross, Brian H.

    2010-01-01

    In one form of category-based induction, people make predictions about unknown properties of objects. There is a tension between predictions made based on the object's specific features (e.g., objects above a certain size tend not to fly) and those made by reference to category-level knowledge (e.g., birds fly). Seven experiments with artificial…

  18. Two Types of Visual Objects

    Directory of Open Access Journals (Sweden)

    Skrzypulec Błażej

    2015-06-01

    While it is widely accepted that human vision represents objects, it is less clear which of the various philosophical notions of ‘object’ adequately characterizes visual objects. In this paper, I show that within contemporary cognitive psychology visual objects are characterized in two distinct, incompatible ways. On the one hand, models of visual organization describe visual objects in terms of combinations of features, in accordance with the philosophical bundle theories of objects. However, models of visual persistence apply a notion of visual objects that is more similar to that endorsed in philosophical substratum theories. Here I discuss arguments that might show either that only one of the above notions of visual objects is adequate in the context of human vision, or that the category of visual objects is not uniform and contains entities properly characterized by different philosophical conceptions.

  19. Assessing the Cartographic Visualization of Moving Objects ...

    African Journals Online (AJOL)

    Nowadays, there is a lot of interest in studying dynamic spatial phenomena. There are various dynamic phenomena in the world, among which moving objects are a prime example. Recently, moving objects have been receiving attention in database applications and in visualization. Moving objects are of two categories: individual ...

  20. Category-Specific Visual Recognition and Aging from the PACE Theory Perspective: Evidence for a Presemantic Deficit in Aging Object Recognition

    DEFF Research Database (Denmark)

    Bordaberry, Pierre; Gerlach, Christian; Lenoble, Quentin

    2016-01-01

    in the selection stage of the PACE theory (visual long-term memory matching) could be responsible for these impairments. Indeed, the older group showed a deficit when this stage was most relevant. This article emphasizes the critical need to take into account the structural components of the stimuli and the type ...

  1. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of database back-end and visualizer front-end into one tightly coupled system. The main aim which we achieve is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which takes the visibility of objects into account. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses ...

  2. Bayesian Tracking of Visual Objects

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    Tracking objects in image sequences involves performing motion analysis at the object level, which is becoming an increasingly important technology in a wide range of computer video applications, including video teleconferencing, security and surveillance, video segmentation, and editing. In this chapter, we focus on sequential Bayesian estimation techniques for visual tracking. We first introduce the sequential Bayesian estimation framework, which acts as the theoretic basis for visual tracking. Then, we present approaches to constructing representation models for specific objects.
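
    A minimal particle-filter sketch of the sequential Bayesian estimation framework the chapter introduces: predict particles with a motion model, reweight them by an observation likelihood, and resample. The constant-velocity model, Gaussian likelihood, and all numbers are illustrative assumptions rather than the chapter's specific representation models.

```python
# Minimal particle-filter illustration of sequential Bayesian tracking:
# predict, reweight by the observation likelihood, resample.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 500
state = rng.normal(0.0, 5.0, size=(n_particles, 4))   # [x, y, vx, vy] per particle
weights = np.full(n_particles, 1.0 / n_particles)

def step(state, weights, observation, motion_noise=1.0, obs_noise=3.0):
    # Predict: propagate each particle through a constant-velocity motion model.
    state[:, :2] += state[:, 2:]
    state += rng.normal(0.0, motion_noise, size=state.shape)
    # Update: weight particles by how well they explain the observed position.
    d2 = ((state[:, :2] - observation) ** 2).sum(axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_noise ** 2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy.
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    return state[idx], np.full(n_particles, 1.0 / n_particles)

observation = np.array([10.0, -4.0])        # e.g. output of an object detector
state, weights = step(state, weights, observation)
print("posterior mean position:", state[:, :2].mean(axis=0))
```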

  3. Shape-independent object category responses revealed by MEG and fMRI decoding.

    Science.gov (United States)

    Kaiser, Daniel; Azzalini, Damiano C; Peelen, Marius V

    2016-04-01

    Neuroimaging research has identified category-specific neural response patterns to a limited set of object categories. For example, faces, bodies, and scenes evoke activity patterns in visual cortex that are uniquely traceable in space and time. It is currently debated whether these apparently categorical responses truly reflect selectivity for categories or instead reflect selectivity for category-associated shape properties. In the present study, we used a cross-classification approach on functional MRI (fMRI) and magnetoencephalographic (MEG) data to reveal both category-independent shape responses and shape-independent category responses. Participants viewed human body parts (hands and torsos) and pieces of clothing that were closely shape-matched to the body parts (gloves and shirts). Category-independent shape responses were revealed by training multivariate classifiers on discriminating shape within one category (e.g., hands versus torsos) and testing these classifiers on discriminating shape within the other category (e.g., gloves versus shirts). This analysis revealed significant decoding in large clusters in visual cortex (fMRI) starting from 90 ms after stimulus onset (MEG). Shape-independent category responses were revealed by training classifiers on discriminating object category (bodies and clothes) within one shape (e.g., hands versus gloves) and testing these classifiers on discriminating category within the other shape (e.g., torsos versus shirts). This analysis revealed significant decoding in bilateral occipitotemporal cortex (fMRI) and from 130 to 200 ms after stimulus onset (MEG). Together, these findings provide evidence for concurrent shape and category selectivity in high-level visual cortex, including category-level responses that are not fully explicable by two-dimensional shape properties. Copyright © 2016 the American Physiological Society.
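
    The cross-classification logic is easy to sketch: train a classifier to separate the two categories within one shape pair and test it on the other shape pair, so that above-chance accuracy indicates category information that generalizes across shape. The simulated pattern matrices and condition labels below are placeholders, not the study's data.

```python
# Sketch of cross-classification: train category decoding within one shape
# (hands vs. gloves), test it on the other shape (torsos vs. shirts).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_per_cond, n_features = 50, 120   # e.g. voxels in an occipitotemporal ROI

def patterns(offset):
    return rng.standard_normal((n_per_cond, n_features)) + offset

# Conditions: body parts vs. clothing, at two shape levels (hand-like, torso-like).
hands, gloves = patterns(0.3), patterns(-0.3)     # shape 1: body vs. clothes
torsos, shirts = patterns(0.3), patterns(-0.3)    # shape 2: same category contrast

X_train = np.vstack([hands, gloves])
y_train = np.array([1] * n_per_cond + [0] * n_per_cond)
X_test = np.vstack([torsos, shirts])
y_test = np.array([1] * n_per_cond + [0] * n_per_cond)

clf = LinearSVC(C=1.0).fit(X_train, y_train)
# Above-chance accuracy here indicates category information that generalizes
# across shape, i.e. a shape-independent category response.
print("cross-shape decoding accuracy:", clf.score(X_test, y_test))
```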

  4. Understanding visualization: a formal approach using category theory and semiotics.

    Science.gov (United States)

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  5. Perceptual differentiation and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Gade, A

    1999-01-01

    The purpose of the present PET study was (i) to investigate the neural correlates of object recognition, i.e. the matching of visual forms to memory, and (ii) to test the hypothesis that this process is more difficult for natural objects than for artefacts. This was done by using object decision ...

  6. Object detection in natural scenes: Independent effects of spatial and category-based attention.

    Science.gov (United States)

    Stein, Timo; Peelen, Marius V

    2017-04-01

    Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category; that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

  7. Neuronal integration in visual cortex elevates face category tuning to conscious face perception.

    Science.gov (United States)

    Fahrenfort, Johannes J; Snijders, Tineke M; Heinen, Klaartje; van Gaal, Simon; Scholte, H Steven; Lamme, Victor A F

    2012-12-26

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.

  8. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    Science.gov (United States)

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  9. Object Category Understanding via Eye Fixations on Freehand Sketches

    Science.gov (United States)

    Sarvadevabhatla, Ravi Kiran; Suresh, Sudharshan; Venkatesh Babu, R.

    2017-05-01

    The study of eye gaze fixations on photographic images is an active research area. In contrast, the image subcategory of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In our paper, we show that the multi-level consistency in the fixation data can be exploited to (a) predict a test sketch's category given only its fixation sequence and (b) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images.

  10. Prior auditory information shapes visual category-selectivity in ventral occipito-temporal cortex.

    Science.gov (United States)

    Adam, Ruth; Noppeney, Uta

    2010-10-01

    Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded slower to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection. Copyright 2010 Elsevier Inc. All rights reserved.

  11. Category-specific visual responses: an intracranial study comparing gamma, beta, alpha and ERP response selectivity

    Directory of Open Access Journals (Sweden)

    Juan R Vidal

    2010-11-01

    The specificity of neural responses to visual objects is a major topic in visual neuroscience. In humans, functional magnetic resonance imaging (fMRI) studies have identified several regions of the occipital and temporal lobe that appear specific to faces, letter-strings, scenes, or tools. Direct electrophysiological recordings in the visual cortical areas of epileptic patients have largely confirmed this modular organization, using either single-neuron peri-stimulus time-histograms or intracerebral event-related potentials (iERP). In parallel, a new research stream has emerged using high-frequency gamma-band activity (50-150 Hz; GBR) and low-frequency alpha/beta activity (8-24 Hz; ABR) to map functional networks in humans. An obvious question is now whether the functional organization of the visual cortex revealed by fMRI, ERP, GBR, and ABR coincides. We used direct intracerebral recordings in 18 epileptic patients to directly compare GBR, ABR, and ERP elicited by the presentation of seven major visual object categories (faces, scenes, houses, consonants, pseudowords, tools, and animals), in relation to previous fMRI studies. Remarkably, both GBR and iERP showed strong category-specificity that was in many cases sufficient to infer stimulus object category from the neural response at the single-trial level. However, we also found a strong discrepancy between the selectivity of GBR, ABR, and ERP, with less than 10% of spatial overlap between sites eliciting the same category-specificity. Overall, we found that selective neural responses to visual objects were broadly distributed in the brain with a prominent spatial cluster located in the posterior temporal cortex. Moreover, the different neural markers (GBR, ABR, and iERP) that elicit selectivity towards specific visual object categories present little spatial overlap, suggesting that the information content of each marker can uniquely characterize high-level visual information in the brain.
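
    As a toy illustration of inferring stimulus category from single-trial responses, the sketch below band-passes one simulated channel in the gamma range, takes log power per trial, and cross-validates a classifier. The sampling rate, band limits, and two-category setup are assumptions; the study's analyses were of course carried out on real intracerebral recordings across many sites and markers.

```python
# Sketch of single-trial category decoding from gamma-band (50-150 Hz) power.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
sfreq, n_trials, n_samples = 512, 300, 512           # 1 s epochs, one channel
trials = rng.standard_normal((n_trials, n_samples))  # placeholder recordings
labels = rng.integers(0, 2, n_trials)                # e.g. faces vs. tools

b, a = butter(4, [50, 150], btype="bandpass", fs=sfreq)
gamma = filtfilt(b, a, trials, axis=1)
power = np.log((gamma ** 2).mean(axis=1, keepdims=True))   # one feature per trial

clf = LogisticRegression()
print("single-trial accuracy:", cross_val_score(clf, power, labels, cv=5).mean())
```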

  12. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Refining Visually Detected Object poses

    DEFF Research Database (Denmark)

    Holm, Preben; Petersen, Henrik Gordon

    2010-01-01

    Automated industrial assembly today requires that the 3D position and orientation (hereafter 'pose') of the objects to be assembled are known precisely. Today this precision is mostly established by a dedicated mechanical object alignment system. However, such systems are often dedicated to the particular object, and in order to handle the demand for flexibility, there is an increasing demand for avoiding such dedicated mechanical alignment systems. Rather, it would be desirable to automatically locate and grasp randomly placed objects from tables, conveyor belts or even bins with a high accuracy that enables direct assembly. Conventional vision systems and laser triangulation systems can locate randomly placed known objects (with 3D CAD models available) with some accuracy, but not necessarily a good enough accuracy. In this paper, we present a novel method for refining the pose accuracy of an object ...

  14. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    Science.gov (United States)

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect: namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is a combination of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  15. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg Layher

    2014-12-01

    The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory ...
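
    The recruitment mechanism described above can be caricatured in a few lines: compare each input with the best-matching category's top-down template, adapt that template when the mismatch is small, and recruit a new node when it is large. This is a generic resonance-style toy under assumed thresholds and simulated data, not the paper's compartmental cortical model.

```python
# Toy illustration of mismatch-driven recruitment of category nodes.
import numpy as np

def learn_categories(inputs, mismatch_threshold=0.6, lr=0.1):
    templates = []                       # top-down weight vectors, one per category
    for x in inputs:
        x = x / (np.linalg.norm(x) + 1e-12)
        if templates:
            sims = [t @ x for t in templates]
            best = int(np.argmax(sims))
            if 1.0 - sims[best] < mismatch_threshold:
                # Small mismatch: nudge the matched template toward the input.
                templates[best] += lr * (x - templates[best])
                templates[best] /= np.linalg.norm(templates[best])
                continue
        # Large mismatch: recruit a new representational resource.
        templates.append(x.copy())
    return templates

rng = np.random.default_rng(7)
centers = rng.standard_normal((3, 16))                 # three underlying clusters
data = np.vstack([c + rng.normal(0, 0.2, size=(40, 16)) for c in centers])
rng.shuffle(data)
print("categories recruited:", len(learn_categories(data)))
```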

  16. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alteration in low-level image properties when contrasting distinct object categories. When such contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach, and show that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
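
    The frequency-tagging readout itself is simple to sketch: when object semantics are modulated periodically at a known rate, a category-selective response appears as spectral power at that tagging frequency relative to neighboring bins. The simulated time series, sampling rate, and 1.2 Hz tag rate below are illustrative assumptions.

```python
# Sketch of a frequency-tagging readout: power at the known tagging frequency.
import numpy as np

sfreq, duration, tag_freq = 100.0, 60.0, 1.2          # Hz, s, Hz (assumed values)
t = np.arange(0, duration, 1.0 / sfreq)
rng = np.random.default_rng(6)
# Simulated response of a category-selective region: periodic component plus noise.
signal = 0.5 * np.sin(2 * np.pi * tag_freq * t) + rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / sfreq)
tag_bin = np.argmin(np.abs(freqs - tag_freq))

# Signal-to-noise ratio: power at the tag frequency vs. neighboring bins.
neighbors = np.r_[spectrum[tag_bin - 5:tag_bin - 1], spectrum[tag_bin + 2:tag_bin + 6]]
print("SNR at tagging frequency:", spectrum[tag_bin] / neighbors.mean())
```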

  17. Commentary: Visual object recognition: building invariant ...

    Indian Academy of Sciences (India)

    2008-11-13

    http://www.ias.ac.in/article/fulltext/jbsc/033/05/0639-0642. Keywords: Inferotemporal cortex; object invariance; object recognition; positional tolerance; saccadic eye movements. Author Affiliations: Duje Tadin, Raphael Pinaud, Department of Brain and Cognitive Sciences and Center for Visual Science, ...

  18. Modulation of visual attention by object affordance

    Directory of Open Access Journals (Sweden)

    Patricia Garrido-Vásquez

    2014-02-01

    Some objects in our environment are strongly tied to motor actions, a phenomenon called object affordance. A cup, for example, affords us to reach out to it and grasp it by its handle. Studies indicate that merely viewing an affording object triggers motor activations in the brain. The present study investigated whether object affordance would also result in an attention bias, that is, whether observers would rather attend to graspable objects within reach compared to non-graspable but reachable objects or to graspable objects out of reach. To this end, we conducted a combined reaction time and motion tracking study with a table in a virtual three-dimensional space. Two objects were positioned on the table, one near, the other one far from the observer. In each trial, two graspable objects, two non-graspable objects, or a combination of both was presented. Participants were instructed to detect a probe appearing on one of the objects as quickly as possible. Detection times served as indirect measure of attention allocation. The motor association with the graspable object was additionally enhanced by having participants grasp a real object in some of the trials. We hypothesized that visual attention would be preferentially allocated to the near graspable object, which should be reflected in reduced reaction times in this condition. Our results confirm this assumption: probe detection was fastest at the graspable object at the near position compared to the far position or to a non-graspable object. A follow-up experiment revealed that in addition to object affordance per se, immediate graspability of an affording object may also influence this near-space advantage. Our results suggest that visuospatial attention is preferentially allocated to affording objects which are immediately graspable, and thus establish a strong link between an object’s motor affordance and visual attention.

  19. Categorization and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Gade, A

    2000-01-01

    The object decision tasks were associated with activation of areas involved in structural processing (fusiform gyri, right inferior frontal gyrus). In contrast, the categorization tasks were associated with activation of the left inferior temporal gyrus, a structure believed to be involved in semantic...

  20. Handling categories properly: a novel objective of clinical research

    NARCIS (Netherlands)

    Cleophas, Ton J.; Atiqi, Roya; Zwinderman, Aeilko H.

    2012-01-01

    A major objective of clinical research is to study outcome effects in subgroups. Such effects generally have stepping functions that are not strictly linear. Analyzing stepping functions in linear models thus raises the risk of underestimating the effects. In the past few years, recoding subgroup

  1. Mapping brain activation and information during category-specific visual working memory

    National Research Council Canada - National Science Library

    Linden, David E J; Oosterhof, Nikolaas N; Klein, Christoph; Downing, Paul E

    2012-01-01

    How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance...

  2. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Science.gov (United States)

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males, can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated with females, and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces, and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  3. Culture shapes eye movements for visually homogeneous objects

    Directory of Open Access Journals (Sweden)

    David J Kelly

    2010-04-01

    Culture affects the way people move their eyes to extract information in their visual world. Adults from Eastern societies (e.g., China) display a disposition to process information holistically, whereas individuals from Western societies (e.g., Britain) process information analytically. In terms of face processing, adults from Western cultures typically fixate the eyes and mouth, while adults from Eastern cultures fixate centrally on the nose region, yet face recognition accuracy is comparable across populations. A potential explanation for the observed differences relates to social norms concerning eye gaze avoidance/engagement when interacting with conspecifics. Furthermore, it has been argued that faces represent a ‘special’ stimulus category and are processed holistically, with the whole face processed as a single unit. The extent to which the holistic eye movement strategy deployed by East Asian observers is related to holistic processing for faces is undetermined. To investigate these hypotheses, we recorded eye movements of adults from Western and Eastern cultural backgrounds while learning and recognizing visually homogeneous objects: human faces, sheep faces and greebles. Both groups of observers recognized faces better than any other visual category, as predicted by the specificity of faces. However, East Asian participants deployed central fixations across all the visual categories. This cultural perceptual strategy was not specific to faces, discarding any parallel between the eye movements of Easterners and the holistic processing specific to faces. Cultural diversity in the eye movements used to extract information from visually homogeneous objects is rooted in more general and fundamental mechanisms.

  4. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    Science.gov (United States)

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2010-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars…

  5. Task alters category representations in prefrontal but not high-level visual cortex.

    Science.gov (United States)

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended "what" pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions and prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D; Brady, Kyle J

    2014-01-01

    Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
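
    As a concrete illustration of the kind of output the database reports (coordinate locations and stress values for solutions in one to five dimensions), the sketch below runs metric multidimensional scaling on a made-up 16-item similarity matrix with scikit-learn. The matrix, item count, and use of sklearn are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): metric MDS solutions in 1-5
# dimensions for one hypothetical 16-exemplar category, reporting stress and
# per-stimulus coordinates, in the spirit of the database described above.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_items = 16                                   # e.g., 16 exemplars of one category
sim = rng.uniform(0.0, 1.0, size=(n_items, n_items))
sim = (sim + sim.T) / 2.0                      # symmetrize the similarity ratings
np.fill_diagonal(sim, 1.0)
dissim = 1.0 - sim                             # MDS expects dissimilarities

for n_dims in range(1, 6):                     # solutions in one to five dimensions
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)         # coordinate locations per stimulus
    print(f"{n_dims}-D solution: raw stress = {mds.stress_:.3f}")
```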

  7. The Importance of Visual Features in Generic versus Specialized Object Recognition: A Computational Study

    Directory of Open Access Journals (Sweden)

    Masoud eGhodrati

    2014-08-01

    Full Text Available It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over the activities of many neurons or whether there are restricted islands of neurons responsive to a specific set of objects. There are lines of evidence demonstrating that the fusiform face area (FFA) in humans processes information related to specialized object recognition (here termed within-category object recognition, such as face identification). Physiological studies have also discovered several patches in the monkey ventral temporal lobe that are responsible for facial processing. Neuronal recordings from these patches show that neurons are highly selective for face images, whereas for other objects we do not see such selectivity in IT. However, it is also well supported that objects are encoded through distributed patterns of neural activity that are distinctive for each object category. It seems that visual cortex utilizes different mechanisms for between-category object recognition (e.g., face vs. non-face objects) versus within-category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models (one proposed in our group) and define two experiments which address these issues. The models have a hierarchical structure of several processing layers that simply simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two mechanisms of recognition can lie in the mechanism of visual feature extraction. It is argued that in order to perform generic and specialized object recognition, visual cortex must separate the mechanisms involved in within-category from between-category object recognition. High recognition performance in within-category object recognition can be guaranteed when class-specific features with intermediate size and complexity are extracted. However, generic object

  8. Word, thought, and deed: the role of object categories in children's inductive inferences and exploratory play.

    Science.gov (United States)

    Schulz, Laura E; Standing, Holly R; Bonawitz, Elizabeth B

    2008-09-01

    Previous research (e.g., S. A. Gelman & E. M. Markman, 1986; A. Gopnik & D. M. Sobel, 2000) suggests that children can use category labels to make inductive inferences about nonobvious causal properties of objects. However, such inductive generalizations can fail to predict objects' causal properties when (a) the property being projected varies within the category, (b) the category is arbitrary (e.g., things smaller than a bread box), or (c) the property being projected is due to an exogenous intervention rather than intrinsic to the object kind. In 4 studies, the authors showed that preschoolers (M = 48 months; range = 42-57 months) were sensitive to these constraints on induction and selectively engaged in exploration when evidence about objects' causal properties conflicted with inductive generalizations from the objects' kind to their causal powers. This suggests that the exploratory actions children generate in free play could support causal learning.

  9. Object Localization Does Not Imply Awareness of Object Category at the Break of Continuous Flash Suppression

    Directory of Open Access Journals (Sweden)

    Florian Kobylka

    2017-06-01

    Full Text Available In continuous flash suppression (CFS), a dynamic noise masker, presented to one eye, suppresses conscious perception of a test stimulus, presented to the other eye, until the suppressed stimulus comes to awareness after a few seconds. But what do we see breaking the dominance of the masker in the transition period? We addressed this question with a dual task in which observers indicated (i) whether the test object was left or right of the fixation mark (localization) and (ii) whether it was a face or a house (categorization). As done recently by Stein et al. (2011a), we used two experimental varieties to rule out confounds with decisional strategy. In the terminated mode, stimulus and masker were presented for distinct durations, and the observers were asked to give both judgments at the end of the trial. In the self-paced mode, presentation lasted until the observers responded. In the self-paced mode, b-CFS durations for object categorization were about half a second longer than for object localization. In the terminated mode, correct categorization rates were consistently lower than correct detection rates, measured at five duration intervals ranging up to 2 s. In both experiments we observed an upright-face advantage compared to inverted faces and houses, as concurrently reported in b-CFS studies. Our findings reveal that more time is necessary for observers to judge the nature of the object than to judge that there is “something other” than the noise, which can be localized but not recognized. This suggests gradual transitions in the first break of CFS. Further, the results imply that suppression is such that no cues to object identity are conveyed in potential “leaks” of CFS (Gelbard-Sagiv et al., 2016).

  10. Nouns, verbs, objects, actions, and abstractions: Local fMRI activity indexes semantics, not lexical categories

    Science.gov (United States)

    Moseley, Rachel L.; Pulvermüller, Friedemann

    2014-01-01

    Noun/verb dissociations in the literature defy interpretation due to the confound between lexical category and semantic meaning; nouns and verbs typically describe concrete objects and actions. Abstract words, pertaining to neither, are a critical test case: dissociations along lexical-grammatical lines would support models positing lexical category as the principle governing brain organisation, whilst semantic models predict dissociations for concrete words but not abstract items. During fMRI scanning, participants read orthogonalised word categories of nouns and verbs, with or without concrete, sensorimotor meaning. Analysis of inferior frontal/insula, precentral and central areas revealed an interaction between lexical class and semantic factors, with clear category differences between concrete nouns and verbs but not abstract ones. Though the brain stores the combinatorial and lexical-grammatical properties of words, our data show that topographical differences in brain activation, especially in the motor system and inferior frontal cortex, are driven by semantics and not by lexical class. PMID:24727103

  11. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    Science.gov (United States)

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  12. VOS: A New Method for Visualizing Similarities between Objects

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo)

    2006-01-01

    We present a new method for visualizing similarities between objects. The method is called VOS, which is an abbreviation for visualization of similarities. The aim of VOS is to provide a low-dimensional visualization in which objects are located in such a way that the distance between

  13. Spatial versus object visualizers: A new characterization of visual cognitive style

    National Research Council Canada - National Science Library

    Kozhevnikov, Maria; Kosslyn, Stephen; Shephard, Jennifer

    2005-01-01

    .... Specifically, scores on spatial and object imagery tasks, along with a visualizer-verbalizer cognitive style questionnaire, identified a group of visualizers who scored poorly on spatial imagery...

  14. Mapping brain activation and information during category-specific visual working memory.

    Science.gov (United States)

    Linden, David E J; Oosterhof, Nikolaas N; Klein, Christoph; Downing, Paul E

    2012-01-01

    How is working memory for different visual categories supported in the brain? Do the same principles of cortical specialization that govern the initial processing and encoding of visual stimuli also apply to their short-term maintenance? We investigated these questions with a delayed discrimination paradigm for faces, bodies, flowers, and scenes and applied both univariate and multivariate analyses to functional magnetic resonance imaging (fMRI) data. Activity during encoding followed the well-known specialization in posterior areas. During the delay interval, activity shifted to frontal and parietal regions but was not specialized for category. Conversely, activity in visual areas returned to baseline during that interval but showed some evidence of category specialization on multivariate pattern analysis (MVPA). We conclude that principles of cortical activation differ between encoding and maintenance of visual material. Whereas perceptual processes rely on specialized regions in occipitotemporal cortex, maintenance involves the activation of a frontoparietal network that seems to require little specialization at the category level. We also confirm previous findings that MVPA can extract information from fMRI signals in the absence of suprathreshold activation and that such signals from visual areas can reflect the material stored in memory.

  15. Deep learning based multi-category object detection in aerial images

    Science.gov (United States)

    Sommer, Lars W.; Schuchert, Tobias; Beyerer, Jürgen

    2017-05-01

    Multi-category object detection in aerial images is an important task for many applications such as surveillance, tracking, or search and rescue. In recent years, deep learning approaches using features extracted by convolutional neural networks (CNN) significantly improved detection accuracy on benchmark datasets compared to traditional approaches based on hand-crafted features, as previously used for object detection in aerial images. However, these approaches are not transferable one-to-one to aerial images, because the network architectures used have insufficient feature-map resolution for handling small instances. This consequently results in poor localization accuracy or missed detections, as the network architectures are explored and optimized for datasets that differ considerably from aerial images, in particular in object size and the image fraction occupied by an object. In this work, we propose a deep neural network derived from the Faster R-CNN approach for multi-category object detection in aerial images. We show how the detection accuracy can be improved by replacing the network architecture with one especially designed for handling small object sizes. Furthermore, we investigate the impact of different parameters of the detection framework on the detection accuracy for small objects. Finally, we demonstrate the suitability of our network for object detection in aerial images by comparing our network to traditional baseline approaches and deep learning based approaches on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset, which comprises multiple object classes such as car, van, truck, bus and camper.
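
    The paper does not publish code here, but the general recipe (keep finer feature maps and use smaller anchor boxes so that vehicles spanning only tens of pixels can still be matched) can be sketched with torchvision's generic Faster R-CNN components. The sketch below is only an illustration of that idea, not the authors' architecture; the class count (five vehicle classes plus background) follows the DLR 3K example, and keyword names such as weights differ across torchvision versions.

```python
# Sketch of the general recipe only (not the authors' network): a torchvision
# Faster R-CNN whose RPN anchors are much smaller than the COCO defaults, so
# that vehicles covering only tens of pixels can still be matched.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.rpn import AnchorGenerator

# One anchor size per FPN level; the defaults are 32-512 px, here shrunk to 8-128 px.
small_anchors = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)

# 5 vehicle classes (car, van, truck, bus, camper) + background = 6 classes.
# Note: older torchvision releases use pretrained=False instead of weights=None.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=6,
                                rpn_anchor_generator=small_anchors)

model.eval()
with torch.no_grad():
    # Each output dict holds 'boxes', 'labels', and 'scores' for one image.
    detections = model([torch.rand(3, 512, 512)])
```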

  16. Distinctive neural mechanisms supporting visual object individuation and identification.

    Science.gov (United States)

    Xu, Yaoda

    2009-03-01

    Many everyday activities, such as driving on a busy street, require the encoding of distinctive visual objects from crowded scenes. Given resource limitations of our visual system, one solution to this difficult and challenging task is to first select individual objects from a crowded scene (object individuation) and then encode their details (object identification). Using functional magnetic resonance imaging, two distinctive brain mechanisms were recently identified that support these two stages of visual object processing. While the inferior intraparietal sulcus (IPS) selects a fixed number of about four objects via their spatial locations, the superior IPS and the lateral occipital complex (LOC) encode the features of a subset of the selected objects in great detail (object shapes in this case). Thus, the inferior IPS individuates visual objects from a crowded display and the superior IPS and higher visual areas participate in subsequent object identification. Consistent with the prediction of this theory, even when only object shape identity but not its location is task relevant, this study shows that object individuation in the inferior IPS treats four identical objects similarly as four objects that are all different, whereas object shape identification in the superior IPS and the LOC treat four identical objects as a single unique object. These results provide independent confirmation supporting the dissociation between visual object individuation and identification in the brain.

  17. Orienting attention to objects in visual short-term memory

    NARCIS (Netherlands)

    Dell'Acqua, Roberto; Sessa, Paola; Toffanin, Paolo; Luria, Roy; Joliccoeur, Pierre

    We measured electroencephalographic activity during visual search of a target object among objects available to perception or among objects held in visual short-term memory (VSTM). For perceptual search, a single shape was shown first (pre-cue) followed by a search-array and the task was to decide

  18. A Computational Approach towards Visual Object Recognition at Taxonomic Levels of Concepts

    Directory of Open Access Journals (Sweden)

    Zahra Sadeghi

    2015-01-01

    Full Text Available It has been argued that concepts can be perceived at three main levels of abstraction. Generally, in a recognition system, object categories can be viewed at three levels of a taxonomic hierarchy, known as the superordinate, basic, and subordinate levels. For instance, “horse” is a member of the subordinate level, which belongs to the basic level of “animal” and the superordinate level of “natural objects.” Our purpose in this study is to investigate visual features at each taxonomic level. We first present a recognition tree which is more general in terms of inclusiveness with respect to the visual representation of objects. Then we focus on visual feature definition, that is, how objects from the same conceptual category can be visually represented at each taxonomic level. For the first level we define global features based on frequency patterns to illustrate visual distinctions between artificial and natural objects. In contrast, our approach for the second level is based on shape descriptors defined using moment-based representations. Finally, we show how conceptual knowledge can be utilized for visual feature definition in order to enhance recognition of subordinate categories.
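
    To make the two feature families concrete, the sketch below computes a global frequency-spectrum profile (the kind of cue the abstract associates with the superordinate artificial/natural split) and a moment-based shape descriptor (associated with basic-level distinctions) for a toy silhouette. These are generic stand-ins, not the exact descriptors defined in the paper.

```python
# Generic stand-ins for the two feature families (not the paper's descriptors):
# a global frequency profile and a moment-based shape descriptor for a toy image.
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu

img = np.zeros((64, 64))
img[16:48, 20:44] = 1.0                        # toy object silhouette

# Global frequency pattern: column-averaged magnitude spectrum as a crude
# layout/texture signature (superordinate-level cue).
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
freq_profile = spectrum.mean(axis=0)[:32]

# Shape descriptor: the seven Hu moments of the silhouette, invariant to
# translation, scale, and rotation (basic-level shape cue).
hu = moments_hu(moments_normalized(moments_central(img)))

print(freq_profile.shape, hu.shape)            # (32,) and (7,)
```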

  19. Changes in visual object recognition precede the shape bias in early noun learning

    Directory of Open Access Journals (Sweden)

    Meagan N Yee

    2012-12-01

    Full Text Available Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large-sample cross-sectional study and a smaller-sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic-level categories from sparse structural representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows that in artificial noun learning tasks, during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and that it is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically preceded the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning.

  20. Auditory-visual object recognition time suggests specific processing for animal sounds.

    Directory of Open Access Journals (Sweden)

    Clara Suied

    Full Text Available BACKGROUND: Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. METHODOLOGY/FINDINGS: We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. CONCLUSIONS/SIGNIFICANCE: These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.

  1. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    Science.gov (United States)

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513

  2. A system to program projects to meet visual quality objectives

    Science.gov (United States)

    Fred L. Henley; Frank L. Hunsaker

    1979-01-01

    The U. S. Forest Service has established Visual Quality Objectives for National Forest lands and determined a method to ascertain the Visual Absorption Capability of those lands. Combining the two mapping inventories has allowed the Forest Service to retain the visual quality while managing natural resources.

  3. VISSION : An Object Oriented Dataflow System for Simulation and Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    1999-01-01

    Scientific visualization and simulation specification and monitoring are sometimes addressed by object-oriented environments. Even though object orientation powerfully and elegantly models many application domains, integration of OO libraries in such systems remains a difficult task. The elegance

  4. The visual extent of an object: suppose we know the object locations

    NARCIS (Netherlands)

    Uijlings, J.R.R.; Smeulders, A.W.M.; Scha, R.J.H.

    2012-01-01

    The visual extent of an object reaches beyond the object itself. This is a long standing fact in psychology and is reflected in image retrieval techniques which aggregate statistics from the whole image in order to identify the object within. However, it is unclear to what degree and how the visual

  5. The influence of location and visual features on visual object memory.

    Science.gov (United States)

    Sun, Hsin-Mei; Gordon, Robert D

    2010-12-01

    In five experiments, we examined the influence of contextual objects' location and visual features on visual memory. Participants' visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1A, 1B, and 2) or color (Experiments 3A and 3B) of a target object was the same. Furthermore, contextual objects' locations and visual features were manipulated in the test image. The results showed that change detection performance was better when contextual objects' locations remained the same from study to test, demonstrating that the original spatial configuration is important for subsequent visual memory retrieval. The results further showed that changes to contextual objects' orientation, but not color, reduced orientation change detection performance; and changes to contextual objects' color, but not orientation, impaired color change detection performance. Therefore, contextual objects' visual features are capable of affecting visual memory. However, selective attention plays an influential role in modulating such effects.

  6. Scene Memory Is More Detailed Than You Think: The Role of Categories in Visual Long-Term Memory

    OpenAIRE

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2010-01-01

    Observers can store thousands of object images in visual long-term memory with high fidelity, but the fidelity of scene representations in long-term memory is not known. Here, we probed scene-representation fidelity by varying the number of studied exemplars in different scene categories and testing memory using exemplar-level foils. Observers viewed thousands of scenes over 5.5 hr and then completed a series of forced-choice tests. Memory performance was high, even with up to 64 scenes from ...

  7. Modeling the Visual and Linguistic Importance of Objects

    Directory of Open Access Journals (Sweden)

    Moreno Ignazio Coco

    2012-05-01

    Full Text Available Previous work measuring the visual importance of objects has shown that only spatial information, such as object position and size, is predictive of importance, whilst low-level visual information, such as saliency, is not (Spain and Perona 2010, IJCV 91, 59–76). Objects are not important solely on the basis of their appearance. Rather, they are important because of their contextual information (eg, a pen in an office versus in a bathroom), which is needed in tasks requiring cognitive control (eg, visual search; Henderson 2007, PsySci 16 219–222). Given that most visual objects have a linguistic counterpart, their importance depends also on linguistic information, especially in tasks where language is actively involved—eg, naming. In an eye-tracking naming study, where participants are asked to name 5 objects in a scene, we investigated how visual saliency, contextual features, and linguistic information of the mentioned objects predicted their importance. We measured object importance based on the urn model of Spain and Perona (2010) and estimated the predictive role of visual and linguistic features using different regression frameworks: LARS (Efron et al 2004, Annals of Statistics 32 407–499) and LME (Baayen et al 2008, JML 59, 390–412). Our results confirmed the role of spatial information in predicting object importance, and in addition, we found effects of saliency. Crucially to our hypothesis, we demonstrated that the lexical frequency of objects and their contextual fit in the scene significantly contributed to object importance.
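
    The LME analysis mentioned above can be sketched as a linear mixed-effects regression of an object's importance score on visual and linguistic predictors, with scene identity as a random-effects grouping factor. The column names and simulated data below are hypothetical placeholders, not the study's dataset.

```python
# Sketch: predicting an object-importance score from visual and linguistic
# predictors with a linear mixed-effects model (one of the two regression
# frameworks named above). Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "importance":  rng.uniform(0, 1, n),       # urn-model importance score
    "saliency":    rng.uniform(0, 1, n),       # low-level visual saliency
    "area":        rng.uniform(0, 0.3, n),     # object size (proportion of image)
    "log_freq":    rng.normal(8, 2, n),        # lexical frequency of the object name
    "context_fit": rng.uniform(0, 1, n),       # semantic fit of the object in the scene
    "scene":       rng.integers(0, 24, n),     # random effect: scene identity
})

model = smf.mixedlm("importance ~ saliency + area + log_freq + context_fit",
                    df, groups=df["scene"])
print(model.fit().summary())
```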

  8. [BASETY: Meaning extension and typicality of examples for 21 categories of objects].

    Science.gov (United States)

    Léger, Laure; Boumlak, Hind; Tijus, Charles

    2008-12-01

    Basety is a French semantic database of exemplars of 21 categories of objects, with a typicality index associated with each exemplar. These 21 semantic categories are animals, trees, weapons, buildings, flowers, fruits, insects, instruments of music, games, toys, vegetables, mammals, furniture, birds, tools, fish, occupations, containers, sports, vehicles, and clothes. Basety was compiled with two groups of 18- to 30-year-old French participants: a first group of three subgroups of 100 participants produced exemplars for 7 x 3 categories, while a second group of 80 participants evaluated the membership of these exemplars. Typicality was computed as the number of occurrences of an exemplar within the set of the first five exemplars that participants produced. Cronbach's coefficient of reliability indicates an internally consistent scale, and the number of exemplars is correlated with membership ratings: the more often the participants of the first group produced an exemplar, the more the participants of the second group agreed on its degree of membership. BASETY appears to be a consistent and valid database for French semantic research.
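
    The typicality index described above reduces to a simple count over production lists: how often each exemplar occurs among the first five exemplars produced. A minimal sketch, with made-up production lists standing in for the French norms:

```python
# Sketch of the typicality index described above: for each exemplar, count how
# often it appears among the first five exemplars produced across participants.
# The production lists below are made-up stand-ins for the French norms.
from collections import Counter

productions = [                       # first five exemplars named per participant
    ["dog", "cat", "horse", "cow", "lion"],
    ["cat", "dog", "lion", "tiger", "elephant"],
    ["dog", "horse", "cat", "rabbit", "cow"],
]

typicality = Counter(exemplar for plist in productions for exemplar in plist[:5])
print(typicality.most_common())       # e.g., [('dog', 3), ('cat', 3), ...]
```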

  9. Object formation in visual working memory: Evidence from object-based attention.

    Science.gov (United States)

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects’ attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Feature-saliency and feedback-information interactively impact visual category learning

    Directory of Open Access Journals (Sweden)

    Rubi eHammer

    2015-02-01

    Full Text Available Visual category learning (VCL) involves detecting which features are most relevant for categorization. This requires attentional learning, which allows effectively redirecting attention to an object’s features most relevant for categorization while also filtering out irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks that varied in feature-saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback-information (tasks with mid-information, moderately ambiguous feedback that increased attentional load vs. tasks with high-information, non-ambiguous feedback). Participants were required to learn to categorize novel stimuli by detecting the feature dimension relevant for categorization. We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that the increased attentional load associated with processing moderately ambiguous feedback does not compromise VCL when both the task-relevant feature and irrelevant features are salient. In low-saliency VCL tasks, performance improvement relied on slower perceptual learning, but when the feedback was highly informative participants were ultimately capable of reaching performance matching that observed in high-saliency VCL tasks. However, VCL was much compromised when features had low saliency and the feedback was ambiguous. We suggest that this latter learning scenario is characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously.

  11. Visual Memory for Objects Following Foveal Vision Loss

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan

    2015-01-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…

  12. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    OpenAIRE

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the...

  13. Object detection through search with a foveated visual system.

    Directory of Open Access Journals (Sweden)

    Emre Akbas

    2017-10-01

    Full Text Available Humans and many other species sense visual information with varying spatial resolution across the visual field (foveated vision) and deploy eye movements to actively sample regions of interest in scenes. The advantage of such a varying-resolution architecture is a reduced computational, hence metabolic, cost. But what are the performance costs of such a processing strategy relative to a scheme that processes the visual field at high spatial resolution? Here we first focus on visual search and combine object detectors from computer vision with a recent model of the peripheral pooling regions found at the V1 layer of the human visual system. We develop a foveated object detector that processes the entire scene with varying resolution, uses retino-specific object detection classifiers to guide eye movements, aligns its fovea with regions of interest in the input image, and integrates observations across multiple fixations. We compared the foveated object detector against a non-foveated version of the same object detector which processes the entire image at homogeneous high spatial resolution. We evaluated the accuracy of the foveated and non-foveated object detectors identifying 20 different object classes in scenes from a standard computer vision data set (the PASCAL VOC 2007 dataset). We show that the foveated object detector can approximate the performance of the object detector with homogeneous high spatial resolution processing while bringing significant computational cost savings. Additionally, we assessed the impact of foveation on the computation of bottom-up saliency. An implementation of a simple foveated bottom-up saliency model with eye movements showed agreement in the selection of top salient regions of scenes with those selected by a non-foveated high resolution saliency model. Together, our results might help explain the evolution of foveated visual systems with eye movements as a solution that preserves perceptual performance in visual
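
    A toy version of the core idea, processing the visual field at a resolution that falls off with eccentricity, can be sketched by blending a sharp and a blurred copy of an image as a function of distance from the current fixation. This illustrates foveated sampling only; it is not the V1 pooling model or the retino-specific detector used in the paper.

```python
# Sketch of eccentricity-dependent resolution: blur an image more strongly with
# distance from the current fixation, then shift fixation to a new region of
# interest. Illustration only, not the pooling model or detector from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, fixation, sigma_max=8.0):
    """Blend a sharp and a blurred copy as a function of eccentricity."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - fixation[0], xx - fixation[1])
    weight = np.clip(ecc / ecc.max(), 0.0, 1.0)        # 0 at fovea, 1 in far periphery
    blurred = gaussian_filter(image, sigma=sigma_max)
    return (1.0 - weight) * image + weight * blurred

rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, (240, 320))                  # stand-in grayscale scene
view1 = foveate(scene, fixation=(120, 160))            # central fixation
view2 = foveate(scene, fixation=(40, 280))             # after a saccade to a corner
```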

  14. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive-aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale applications. The classifier is trained with the k-nearest-neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluated the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition applications.
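
    The flavor of an online passive-aggressive update for a global similarity metric can be sketched with a bilinear similarity s_W(x, z) = x^T W z learned from same-class/different-class pairs: the matrix is left unchanged when a margin is satisfied and receives the minimal closed-form correction otherwise. This generic sketch is not the OSKFT or the multiple-kernel algorithm proposed in the paper.

```python
# Sketch of a generic passive-aggressive update for a bilinear similarity
# s_W(x, z) = x^T W z, learned online from same-class (+1) / different-class (-1)
# pairs. This only illustrates the flavor of online similarity learning; it is
# not the specific (kernelized, multiple-kernel) algorithms of the paper.
import numpy as np

def pa_similarity_update(W, x, z, y, margin=1.0):
    """One online step: keep W if the margin is satisfied, otherwise apply the
    minimal (closed-form) correction that removes the hinge-loss violation."""
    loss = max(0.0, margin - y * (x @ W @ z))
    if loss > 0.0:
        tau = loss / (np.dot(x, x) * np.dot(z, z))      # passive-aggressive step size
        W = W + tau * y * np.outer(x, z)
    return W

rng = np.random.default_rng(3)
d = 10
W = np.eye(d)                                           # start from identity similarity
for _ in range(1000):
    x, z = rng.normal(size=d), rng.normal(size=d)
    y = 1.0 if rng.random() < 0.5 else -1.0             # stand-in pair label
    W = pa_similarity_update(W, x, z, y)
```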

  15. A Survey on Hardware Implementations of Visual Object Trackers

    OpenAIRE

    El-Shafie, Al-Hussein A.; Habib, S. E. D.

    2017-01-01

    Visual object tracking is an active topic in the computer vision domain with applications extending over numerous fields. The main sub-tasks required to build an object tracker (e.g. object detection, feature extraction and object tracking) are computation-intensive. In addition, real-time operation of the tracker is indispensable for almost all of its applications. Therefore, complete hardware or hardware/software co-design approaches are pursued for better tracker implementations. This pape...

  16. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    Science.gov (United States)

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  17. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database

    National Research Council Canada - National Science Library

    Hout, Michael C; Goldinger, Stephen D; Brady, Kyle J

    2014-01-01

    Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli...

  18. Semi-automatic measurement of visual verticality perception in humans reveals a new category of visual field dependency

    Directory of Open Access Journals (Sweden)

    C.R. Kaleff

    2011-08-01

    Full Text Available Previous assessments of verticality by means of rod and rod-and-frame tests indicated that human subjects can be more (field dependent) or less (field independent) influenced by a frame placed around a tilted rod. In the present study we propose a new approach to these tests. The judgment of visual verticality (rod test) was evaluated in 50 young subjects (28 males, ranging in age from 20 to 27 years) by randomly projecting a luminous rod tilted between -18 and +18° (negative values indicating left tilts) onto a tangent screen. In the rod-and-frame test the rod was displayed within a luminous fixed frame tilted at +18 or -18°. Subjects were instructed to verbally indicate the rod’s inclination direction (forced choice). Visual dependency was estimated by means of a Visual Index calculated from rod and rod-and-frame test values. Based on this index, volunteers were classified as field dependent, intermediate, and field independent. A fourth category was created within the field-independent subjects for whom the number of correct guesses in the rod-and-frame test exceeded that of the rod test, thus indicating improved performance when a surrounding frame was present. In conclusion, the combined use of the subjective visual vertical and the rod-and-frame test provides a specific and reliable form of evaluation of verticality in healthy subjects and might be of use to probe changes in brain function after central or peripheral lesions.

  19. Visual properties of objects affect manipulative forces and respiration differently.

    Science.gov (United States)

    Lamberg, Eric M; Mateika, Jason H; Gordon, Andrew M

    2005-12-20

    Previously, we demonstrated that the respiratory and motor systems responded differently following consecutive lifts of an object whose weight could be altered (lighter or heavier) without changing the object's visual properties. When the weight of the object was altered in a manner unpredictable to the subject, the motor system response reflected the previous weight of the object (light or heavy) while the respiratory system reflected responses seen when lifting the heavier object regardless of whether a lighter or heavier object was lifted previously. It is possible that the default pattern of the respiratory system was due to a lack of visual size cues, which are known to have robust effects on grasp control. To test this hypothesis, 14 seated subjects performed self-initiated alternating lifts with objects whose size and weight covaried such that the weight of the upcoming lift was known regardless of the weight of the object previously lifted. Following both consecutive and alternating trials, the load force was scaled to the weight of the object (e.g., the heavier the object the larger the force) while the volume was scaled only following the consecutive trials. This suggests that the load forces were developed entirely based on visual information while lung volume was not. In addition, we suggest that following the consecutive trials, the volume increased as the object's weight increased in an effort to assist with trunk stabilization by indirectly increasing intra-abdominal pressure.

  20. Simulation and Visualization in the VISSION Object Oriented Dataflow System

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, J.J. van

    1999-01-01

    Scientific visualization and simulation steering and design are mostly addressed by non object-oriented (OO) frameworks. Even though OO powerfully and elegantly models many application areas, integration of OO libraries in such systems remains complex. The power and conciseness of object orientation

  1. The Influence of Location and Visual Features on Visual Object Memory

    OpenAIRE

    Sun, Hsin-Mei; Gordon, Robert D.

    2010-01-01

    Three experiments examined the influence of contextual objects’ location and visual features on visual memory. Participants’ visual memory was tested with a change detection task in which they had to judge whether the orientation (Experiments 1 and 2) or color (Experiment 3) of a target object was the same. Furthermore, contextual objects’ locations and visual features were manipulated in the test image. The results showed that change detection performance was better when contextual objects’ ...

  2. Foraging through multiple target categories reveals the flexibility of visual working memory.

    Science.gov (United States)

    Kristjánsson, Tómas; Kristjánsson, Árni

    2018-02-01

    A key assumption in the literature on visual attention is that templates, actively maintained in visual working memory (VWM), guide visual attention. An important question therefore involves the nature and capacity of VWM. According to load theories, more than one search template can be active at the same time and capacity is determined by the total load rather than a precise number of templates. By an alternative account, only one search template can be active within visual working memory at any given time, while other templates are in an accessory state but do not affect visual selection. We addressed this question by varying the number of targets and distractors in a visual foraging task for 40 targets among 40 distractors in two ways: 1) Fixed-distractor-number, involving two distractor types while target categories varied from one to four. 2) Fixed-color-number (7), so that if there were two target types, there were five distractor types, while if the number of target types increased to three, the number of distractor types decreased to four (and so on). The two accounts make differing predictions. Under the single-template account, we should expect large switch costs as target types increase to two, but switch costs should not increase much as target types increase beyond two. Load accounts predict an approximately linear increase in switch costs with increased target type number. The results were that switch costs increased roughly linearly in both conditions, in line with load accounts. The results are discussed in light of recent proposals that working memory reflects lingering neural activity at various sites that operate on the stimuli in each case, and findings showing neurally silent working memory representations. Copyright © 2017 Elsevier B.V. All rights reserved.
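
    The switch-cost measure implied by this design can be sketched as the difference in inter-target selection times between taps that switch target type and taps that repeat it. The selection log below is a made-up stand-in for real foraging data.

```python
# Sketch of the switch-cost measure implied above: compare inter-target times
# when consecutive selections repeat a target type vs. switch to another type.
# The selection log below is a made-up stand-in for real foraging data.
import numpy as np

target_types = np.array([1, 1, 2, 2, 2, 3, 1, 1, 3, 3])        # type of each tap
tap_times    = np.array([0.0, 0.4, 1.1, 1.5, 1.9, 2.8, 3.6, 4.0, 4.9, 5.3])

itt = np.diff(tap_times)                                        # inter-target times
is_switch = np.diff(target_types) != 0

switch_cost = itt[is_switch].mean() - itt[~is_switch].mean()
print(f"switch cost: {switch_cost * 1000:.0f} ms")
```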

  3. Objective assessment of the human visual attentional state.

    Science.gov (United States)

    Willeford, Kevin T; Ciuffreda, Kenneth J; Yadav, Naveen K; Ludlam, Diana P

    2013-02-01

    The purpose of this study was to develop an objective way to assess human visual attention using the alpha-band component of the visual-evoked potential (VEP). Six different attentional conditions were tested: eyes-open, eyes-closed, eyes-closed with backwards number counting, and three rapid-serial visual presentation (RSVP) tasks. Eighteen visually normal, young-adult subjects (ages 21-28 years) were tested binocularly at 1 m for each condition on two separate days. The Diopsys™ NOVA-TR system was used to obtain the visual-evoked potential (VEP) and extracted alpha wave and its related power spectrum. Additionally, the Visual Search and Attention Test (VSAT) was administered as a subjective measure of visual attention. Subjects exhibited significant decreases in power in the alpha band when comparing the eyes-closed with the eyes-open conditions, with power in the eyes-closed condition being, on average, twice as large. The response from the other four conditions did not reflect the differential attentional demands. The ratio of the power in the eyes-closed condition to the eyes-open condition in the lower-alpha frequencies (8-10 Hz) was found to be significantly correlated with the group's performance on the VSAT, especially the 10-Hz component. An individual's ability to attenuate their alpha component during visual processing may be a predictor of their visual attentional state. These findings solidify the role of the VEP alpha subcomponent as an objective electrophysiological correlate of visual attention, which may be useful in the diagnosis and treatment of human visual attention disorders in the future.
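
    The alpha-attenuation index described above, the ratio of lower-alpha (8-10 Hz) power with eyes closed to power with eyes open, can be sketched with Welch power-spectrum estimates on simulated signals. The sampling rate and signals below are assumptions for illustration, not recordings from the Diopsys system.

```python
# Sketch of the alpha-attenuation measure described above: the ratio of
# lower-alpha (8-10 Hz) power with eyes closed to power with eyes open,
# estimated here with Welch's method on simulated signals.
import numpy as np
from scipy.signal import welch

fs = 256                                              # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(4)
eyes_open   = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
eyes_closed = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

def band_power(x, lo=8.0, hi=10.0):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].sum()

alpha_ratio = band_power(eyes_closed) / band_power(eyes_open)
print(f"eyes-closed / eyes-open lower-alpha power ratio: {alpha_ratio:.2f}")
```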

  4. Audio-visual object search is changed by bilingual experience.

    Science.gov (United States)

    Chabal, Sarah; Schroeder, Scott R; Marian, Viorica

    2015-11-01

    The current study examined the impact of language experience on the ability to efficiently search for objects in the face of distractions. Monolingual and bilingual participants completed an ecologically-valid, object-finding task that contained conflicting, consistent, or neutral auditory cues. Bilinguals were faster than monolinguals at locating the target item, and eye movements revealed that this speed advantage was driven by bilinguals' ability to overcome interference from visual distractors and focus their attention on the relevant object. Bilinguals fixated the target object more often than did their monolingual peers, who, in contrast, attended more to a distracting image. Moreover, bilinguals', but not monolinguals', object-finding ability was positively associated with their executive control ability. We conclude that bilinguals' executive control advantages extend to real-world visual processing and object finding within a multi-modal environment.

  5. Category Selectivity of Human Visual Cortex in Perception of Rubin Face–Vase Illusion

    Directory of Open Access Journals (Sweden)

    Xiaogang Wang

    2017-09-01

    Full Text Available When viewing the Rubin face–vase illusion, our conscious perception spontaneously alternates between the face and the vase; this illusion has been widely used to explore bistable perception. Previous functional magnetic resonance imaging (fMRI) studies have studied the neural mechanisms underlying bistable perception through univariate and multivariate pattern analyses; however, no studies have investigated the issue of category selectivity. Here, we used fMRI to investigate the neural mechanisms underlying the Rubin face–vase illusion by introducing univariate amplitude and multivariate pattern analyses. The results from the amplitude analysis suggested that the activity in the fusiform face area was likely related to subjective face perception. Furthermore, the pattern analysis results showed that the early visual cortex (EVC) and the face-selective cortex could discriminate the activity patterns of the face and vase perceptions. However, further analysis of the activity patterns showed that only the face-selective cortex contains the face information. These findings indicated that although the EVC and face-selective cortex activities could discriminate the visual information, only the activity and activity pattern in the face-selective areas contained the category information of face perception in the Rubin face–vase illusion.
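
    The multivariate pattern analysis step can be sketched as cross-validated classification of the reported percept (face vs. vase) from voxel patterns within a region of interest. The voxel data below are random stand-ins, so decoding should sit near chance; with real ROI patterns the same pipeline would test for above-chance discrimination.

```python
# Sketch of the multivariate pattern analysis step: cross-validated decoding of
# the reported percept (face vs. vase) from voxel patterns within an ROI.
# The voxel data here are random stand-ins, so accuracy should sit near chance.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_voxels = 80, 150
patterns = rng.normal(size=(n_trials, n_voxels))      # ROI activity per trial
percept  = rng.integers(0, 2, n_trials)               # 0 = vase, 1 = face report

scores = cross_val_score(SVC(kernel="linear"), patterns, percept, cv=8)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```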

  6. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  7. Representing multiple objects as an ensemble enhances visual cognition.

    Science.gov (United States)

    Alvarez, George A

    2011-03-01

    The visual system can only accurately represent a handful of objects at once. How do we cope with this severe capacity limitation? One possibility is to use selective attention to process only the most relevant incoming information. A complementary strategy is to represent sets of objects as a group or ensemble (e.g. represent the average size of items). Recent studies have established that the visual system computes accurate ensemble representations across a variety of feature domains and current research aims to determine how these representations are computed, why they are computed and where they are coded in the brain. Ensemble representations enhance visual cognition in many ways, making ensemble coding a crucial mechanism for coping with the limitations on visual processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Distributed Encoding of Spatial and Object Categories in Primate Hippocampal Microcircuits

    Directory of Open Access Journals (Sweden)

    Ioan eOpris

    2015-10-01

    Full Text Available The primate hippocampus plays critical roles in the encoding, representation, categorization and retrieval of cognitive information. Such cognitive abilities may use the transformational input-output properties of hippocampal laminar microcircuitry to generate spatial representations and to categorize features of objects, images and scenes. Four nonhuman primates were trained in a delayed-match-to-sample (DMS) task while multi-neuron activity was simultaneously recorded from the CA1 and CA3 hippocampal cell fields. The results presented here show differential encoding of spatial location and categorization of images presented as relevant stimuli in the task. Individual hippocampal cells encoded visual stimuli only on specific types of trials in which retention of either the Sample image or the spatial position of the Sample image was required at the beginning of the trial. Consistent with such encoding, it was shown that patterned microstimulation applied during Sample image presentation facilitated selection of either Sample image spatial locations or types of images during the Match phase of the task. These findings support the existence of specific codes for spatial and object representations in primate hippocampus which can be applied to differentially signaled trials. Moreover, the transformational properties of hippocampal microcircuitry, together with the patterned microstimulation, support the practical importance of this approach for the cognitive enhancement and rehabilitation needed for memory neuroprosthetics.

  9. Use of subjective and objective criteria to categorise visual disability.

    Science.gov (United States)

    Kajla, Garima; Rohatgi, Jolly; Dhaliwal, Upreet

    2014-04-01

    Visual disability is categorised using objective criteria; subjective measures are not considered. The aim of this study was to use subjective criteria along with objective ones to categorise visual disability. It was an observational study conducted in the ophthalmology out-patient department of a teaching hospital. Participants were consecutive persons aged >25 years with vision disability, ranging from group zero (normal range of vision) to group X (no perception of light, bilaterally). Snellen's vision; binocular contrast sensitivity (Pelli-Robson chart); automated binocular visual field (Humphrey; Esterman test); and vision-related quality of life (Indian Visual Function Questionnaire-33; IND-VFQ33) were recorded. Analyses used SPSS version 17; the Kruskal-Wallis test was used to compare contrast sensitivity and visual fields across groups, and the Mann-Whitney U test for pair-wise comparisons (with Bonferroni adjustment). Contrast sensitivity and visual fields were comparable across differing disability grades except when disability was severe. Global IND-VFQ33 scores differed across disability grades but were comparable for groups III (78.51 ± 6.86) and IV (82.64 ± 5.80), and for groups IV and V (77.23 ± 3.22); these were merged to generate group 345; similarly, global scores were comparable for adjacent groups V and VI (72.53 ± 6.77), VI and VII (74.46 ± 4.32), and VII and VIII (69.12 ± 5.97); these were merged to generate group 5678; thereafter, contrast sensitivity and global and individual IND-VFQ33 scores could differentiate between grades of disability in the five new groups. Subjective criteria made it possible to objectively reclassify visual disability. Visual disability grades could be redefined to accommodate all from zero-100%.
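
    The group comparisons described above can be sketched with SciPy: a Kruskal-Wallis test across disability groups followed by pairwise Mann-Whitney U tests at a Bonferroni-adjusted threshold. The contrast-sensitivity values and group labels below are simulated placeholders, not the study's data.

```python
# Sketch of the group comparisons described above: a Kruskal-Wallis test across
# disability groups followed by pairwise Mann-Whitney U tests with a Bonferroni
# correction. Contrast-sensitivity values below are random stand-ins.
import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(6)
groups = {g: rng.normal(loc=1.6 - 0.2 * i, scale=0.15, size=20)
          for i, g in enumerate(["zero", "345", "5678", "X"])}

h, p = kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)                             # Bonferroni-adjusted threshold
for a, b in pairs:
    u, p = mannwhitneyu(groups[a], groups[b])
    print(f"{a} vs {b}: U = {u:.1f}, p = {p:.4f}, significant = {p < alpha}")
```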

  10. Visual working memory capacity and stimulus categories: a behavioral and electrophysiological investigation.

    Science.gov (United States)

    Diamantopoulou, Sofia; Poom, Leo; Klaver, Peter; Talsma, Durk

    2011-04-01

    It has recently been suggested that visual working memory capacity may vary depending on the type of material that has to be memorized. Here, we use a delayed match-to-sample paradigm and event-related potentials (ERP) to investigate the neural correlates that are linked to these changes in capacity. A variable number of stimuli (1-4) were presented in each visual hemifield. Participants were required to selectively memorize the stimuli presented in one hemifield. Following memorization, a test stimulus was presented that had to be matched against the memorized item(s). Two types of stimuli were used: one set consisting of discretely different objects (discrete stimuli) and one set consisting of more continuous variations along a single dimension (continuous stimuli). Behavioral results indicate that memory capacity was much larger for the discrete stimuli, when compared with the continuous stimuli. This behavioral effect correlated with an increase in a contralateral negative slow wave ERP component that is known to be involved in memorization. We therefore conclude that the larger working memory capacity for discrete stimuli can be directly related to an increase in activity in visual areas and propose that this increase in visual activity is due to interactions with other, non-visual representations.
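
    The contralateral negative slow wave referred to above is conventionally quantified as the contralateral-minus-ipsilateral mean amplitude over posterior electrodes during the retention interval. The sketch below illustrates that generic computation on mock ERP arrays; it is not the authors' analysis, and the electrode pair (PO7/PO8), time window, and sampling rate are assumptions.

```python
# Hedged sketch: contralateral-minus-ipsilateral slow-wave amplitude from mock ERP data.
# Assumed layout: erp[cued_hemifield][electrode] is an array of shape (n_trials, n_samples).
import numpy as np

fs = 250                                   # assumed sampling rate (Hz)
t0, t1 = int(0.4 * fs), int(0.9 * fs)      # assumed retention-interval window, 400-900 ms

def mean_amp(trials, start, stop):
    """Mean amplitude within a time window, averaged across trials."""
    return trials[:, start:stop].mean()

rng = np.random.default_rng(1)
erp = {side: {el: rng.normal(size=(40, fs)) for el in ("PO7", "PO8")}
       for side in ("left", "right")}

# Contralateral = electrode opposite the cued hemifield; ipsilateral = same side.
contra = 0.5 * (mean_amp(erp["left"]["PO8"], t0, t1) + mean_amp(erp["right"]["PO7"], t0, t1))
ipsi = 0.5 * (mean_amp(erp["left"]["PO7"], t0, t1) + mean_amp(erp["right"]["PO8"], t0, t1))
print("contralateral slow-wave amplitude (contra - ipsi):", contra - ipsi)
```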

  11. Visual object cognition precedes but also temporally overlaps mental rotation.

    Science.gov (United States)

    Schendan, Haline E; Lucia, Lisa C

    2009-10-19

    Two-dimensional, mental rotation of alphanumeric characters and geometric figures is related to linear increases in parietal negativity between 400 and 800 ms as rotation increases, similar to linear increases with rotation in response times. This suggests that the frontoparietal networks implicated in mental rotation are engaged after 400 ms. However, the time course of three-dimensional object mental rotation using the classic Shepard-Metzler task has not been studied, even though this is one of the most commonly used versions in behavioral and neuroimaging work. Using this task, this study replicated a prior neuroimaging version using event-related potentials. Results confirmed linear mental rotation effects on performance and parietal negativity. In addition, a frontocentral N350 complex that indexes visual object cognition processes was more negative with mental rotation and showed linear trends at frontopolar sites from 200 to 700 ms and centrofrontal sites from 400 to 500 ms. The centrofrontal negativity has been implicated in object working memory processes in ventrolateral prefrontal and occipitotemporal areas. The frontopolar N350 has been implicated in processes that compute the spatial relations among parts of objects to resolve visual differences between object representations and enable an accurate cognitive decision involving a network of ventrocaudal intraparietal, ventral premotor, and inferotemporal cortices. Overall, the time course indicates that visual object cognition processes precede (200-500 ms) but also overlap the initial phase of mental rotation (500-700 ms) indexed by parietal negativity.

  12. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    2017-01-01

    In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value, as convex objects. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem…

  13. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective...

  14. Computing with Connections in Visual Recognition of Origami Objects.

    Science.gov (United States)

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  15. Functional dissociation between action and perception of object shape in developmental visual object agnosia.

    Science.gov (United States)

    Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon

    2016-03-01

    According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    Directory of Open Access Journals (Sweden)

    Federica Bianca Rosselli

    2015-03-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  17. The Visual Object Tracking VOT2015 Challenge Results

    KAUST Repository

    Kristan, Matej

    2015-12-07

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT 2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014 with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.

  18. The Visual Object Tracking VOT2016 Challenge Results

    KAUST Repository

    Kristan, Matej

    2016-11-02

    The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).
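
    Tracker accuracy in benchmarks of this kind is typically scored by region overlap between the predicted and ground-truth regions, averaged over frames. The sketch below computes intersection-over-union for axis-aligned (x, y, w, h) boxes on mock data as a simplified illustration only; the VOT datasets use rotated bounding boxes and a reset-based protocol, so this is not the challenge's exact measure.

```python
# Simplified illustration of region-overlap (IoU) scoring for a tracker, using
# axis-aligned boxes and mock data; not the exact VOT evaluation protocol.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # horizontal overlap
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # vertical overlap
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

predictions = [(10, 10, 50, 40), (12, 11, 50, 40), (80, 80, 50, 40)]   # mock tracker output
ground_truth = [(11, 10, 50, 40), (13, 12, 50, 40), (15, 14, 50, 40)]  # mock annotations

overlaps = [iou(p, g) for p, g in zip(predictions, ground_truth)]
print("per-frame overlaps:", [round(o, 2) for o in overlaps])
print("mean overlap (accuracy-style score):", round(sum(overlaps) / len(overlaps), 2))
```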

  19. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Average activity, but not variability, is the dominant factor in the representation of object categories in the brain.

    Science.gov (United States)

    Karimi-Rouzbahani, Hamid; Bagheri, Nasour; Ebrahimpour, Reza

    2017-03-27

    To categorize perceived objects, the brain utilizes a broad set of its resources and encoding strategies. Yet, it remains elusive how category information is encoded in the brain. While many classical studies have sought the category information in the across-trial-averaged activity of neurons/neural populations, several recent studies have observed category information also in the within-trial correlated variability of activities between neural populations (i.e. dependent variability). Moreover, other studies have observed that independent variability of activity, which is the variability of the measured neural activity without any influence from correlated variability with other neurons/populations, could also be modulated for improved categorization. However, it was unknown how important each of the three factors (i.e. average activity, dependent and independent variability of activities) was in category encoding. Therefore, we designed an EEG experiment in which human subjects viewed a set of object exemplars from four categories. Using a computational model, we evaluated the contribution of each factor separately in category encoding. Results showed that the average activity played a significant role while the independent variability, although effective, contributed moderately to the category encoding. The inter-channel dependent variability showed a negligible effect on the encoding. We also investigated the role of those factors in the encoding of stimulus variations, which showed similar effects. These results imply that the brain seems to use the average activity, rather than variability, to convey information about the category of perceived objects. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
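
    The three factors contrasted in this record can be pictured with a crude decomposition of a trials x channels x samples array into the across-trial average activity, the inter-channel (dependent) variability of the single-trial residuals, and the remaining per-channel (independent) variability. The numpy sketch below runs on mock data and is only a rough illustration of those quantities, not the authors' computational model.

```python
# Rough illustration (mock data, not the authors' model) of the three factors:
# across-trial average activity, inter-channel correlated (dependent) variability,
# and residual per-channel (independent) variability.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(100, 32, 200))          # trials x channels x samples (mock EEG)

average_activity = x.mean(axis=0)            # (channels, samples): the classic averaged response
residuals = x - average_activity             # single-trial fluctuations around that average

# Dependent variability: mean absolute correlation between channels of the residuals.
trial_means = residuals.mean(axis=2)                     # trials x channels
corr = np.corrcoef(trial_means, rowvar=False)            # channels x channels
dependent = np.abs(corr[np.triu_indices_from(corr, k=1)]).mean()

# Independent variability: residual variance per channel (left un-partialled here,
# so it still contains some shared variance; a rough proxy only).
independent = residuals.var(axis=0).mean()

print("mean |average activity|:              ", np.abs(average_activity).mean())
print("dependent (inter-channel) variability:", dependent)
print("independent (residual) variability:   ", independent)
```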

  1. Objective functional visual outcomes of cataract surgery in patients with good preoperative visual acuity.

    Science.gov (United States)

    Zhu, X; Ye, H; He, W; Yang, J; Dai, J; Lu, Y

    2017-03-01

    Purpose: To explore the objective functional visual outcomes of cataract surgery in patients with good preoperative visual acuity. Methods: We enrolled 130 cataract patients whose best-corrected visual acuity (BCVA) was 20/40 or better preoperatively. Objective visual functions were evaluated with a KR-1W analyzer before and at 1 month after cataract surgery. Results: The nuclear (N), cortical (C), and N+C groups had very high preoperative ocular and internal total high-order aberrations (HOAs), coma, and abnormal spherical aberrations. At 1 month after cataract surgery, in addition to the remarkable increase of both uncorrected visual acuity and BCVA, both ocular and internal HOAs in the three groups decreased significantly. This finding shows that the arbitrary threshold of BCVA worse than 20/40 in China cannot always be used to determine who will benefit from cataract surgery.

  2. Objective measurements of lower-level visual stress.

    Science.gov (United States)

    Nahar, Niru K; Sheedy, James E; Hayes, John; Tai, Yu-Chi

    2007-07-01

    To determine the sensitivity of the electromyography (EMG) response of the orbicularis oculi muscle to selected lower-level visually stressful conditions, and to establish the extent to which it can be used as a measure of visual discomfort. Thirty-one subjects (18 years or older) with 20/20 vision, without history of ocular pathology, oculomotor limitation, or cognitive deficits participated in the study. Subjects read on a computer display for 27 trials of 5 min duration under different low-level asthenopic conditions. The conditions were graded levels of font size, font type, contrast, refractive error, and glare. Orbicularis oculi activity was recorded using surface EMG. Blink-free epochs of EMG data were analyzed for power for all the conditions. Blink rate for all the trials was also measured. At the end of each trial, subjects rated the severity of visual discomfort experienced while reading. Conditions that benefit from squint (refractive error and glare) showed increased EMG power and a significant decrease in blink rate (p = 0.003 and p = 0.01). All conditions resulted in significant visual discomfort (for font type, p = 0.039). The EMG response is a sensitive objective measure for the squint-beneficial conditions. However, for the non-squint-beneficial conditions, blink rate may be a more sensitive objective measure, although EMG with longer trial durations should be tested.
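
    The record does not specify how power was computed for the blink-free EMG epochs; a common choice is band-limited power from a Welch power spectral density estimate. The sketch below is therefore only illustrative: the sampling rate, the 20-250 Hz band, and the mock signal are assumptions, not details taken from the study.

```python
# Illustrative band-power computation for one blink-free EMG epoch using Welch's PSD.
# Sampling rate, band limits, and the signal itself are assumptions for the sketch.
import numpy as np
from scipy.signal import welch

fs = 1000                                      # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
epoch = rng.normal(size=5 * fs)                # mock 5-second blink-free EMG epoch

freqs, psd = welch(epoch, fs=fs, nperseg=fs)   # 1-second Welch segments
band = (freqs >= 20) & (freqs <= 250)          # assumed surface-EMG band of interest
emg_power = np.trapz(psd[band], freqs[band])   # integrate the PSD over that band
print("EMG band power:", emg_power)
```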

  3. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  4. Visual objects speak louder than words: motor planning and weight in tool use and object transport.

    Science.gov (United States)

    Osiurak, François; Bergot, Morgane; Chainay, Hanna

    2015-11-01

    For theories of embodied cognition, reading a word activates sensorimotor representations in a similar manner to seeing the physical object the word represents. Thus, reading words representing objects of different sizes interfere with motor planning, inducing changes in grip aperture. An outstanding issue is whether word reading can also evoke sensorimotor information about the weight of objects. This issue was addressed in two experiments wherein participants have first to read the name of an object (Experiment 1)/observe the object (Experiment 2) and then to transport versus use bottles of water. The objects presented as primes were either lighter or heavier than the bottles to be grasped. Results indicated that the main parameters of motor planning recorded (initiation times and finger contact points) were not affected by the presentation of words as primes (Experiment 1). By contrast, the presentation of visual objects as primes induced significant changes in these parameters (Experiment 2). Participants changed their way of grasping the bottles, particularly in the use condition. Taken together, these results suggest that the activation of concepts does not automatically evoke sensorimotor representations about the weight of objects, but visual objects do. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects

    Science.gov (United States)

    Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan

    2011-01-01

    How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…

  6. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don’t induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which colour can be used to discriminate old/new status.

  7. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  8. Tracking Location and Features of Objects within Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Michael Patterson

    2012-10-01

    Four studies examined how color or shape features can be accessed to retrieve the memory of an object's location. In each trial, 6 colored dots (Experiments 1 and 2) or 6 black shapes (Experiments 3 and 4) were displayed in randomly selected locations for 1.5 s. An auditory cue for either the shape or the color to-be-remembered was presented either simultaneously, immediately, or 2 s later. Non-informative cues appeared in some trials to serve as a control condition. After a 4 s delay, 5/6 objects were re-presented, and participants indicated the location of the missing object either by moving the mouse (Experiments 1 and 3) or by typing coordinates using a grid (Experiments 2 and 4). Compared to the control condition, cues presented simultaneously or immediately after stimuli improved location accuracy in all experiments. However, cues presented after 2 s only improved accuracy in Experiment 1. These results suggest that location information may not be addressable within visual working memory using shape features. In Experiment 1, but not Experiments 2–4, cues significantly improved accuracy when they indicated the missing object could be any of the three identical objects. In Experiments 2–4, location accuracy was highly impaired when the missing object came from a group of identical rather than uniquely identifiable objects. This indicates that when items with similar features are presented, location accuracy may be reduced. In summary, both feature type and response mode can influence the accuracy and accessibility of visual working memory for object location.

  9. Visual responses to action between unfamiliar object pairs modulate extinction.

    Science.gov (United States)

    Wulff, Melanie; Humphreys, Glyn W

    2013-03-01

    Previous studies show that positioning familiar pairs of objects for action ameliorates visual extinction in neuropsychological patients (Riddoch, Humphreys, Edwards, Baker, & Willson, 2003). This effect is stronger when objects are viewed from a self-perspective and are placed in locations congruent with the patient's premorbid handedness (Humphreys, Wulff, Yoon, & Riddoch, 2010a), consistent with it being modulated by a motor response to the stimuli. There is also some evidence that extinction can be reduced with unfamiliar object pairs positioned for action (Riddoch et al. 2006), but the effects of reference frame and hand-object congruence have not been examined with such items. This was investigated in the present experiment. There was greater recovery from extinction when objects were action-related compared to when they were not, in line with previous studies. In addition, patients benefited more when they saw action-related pairs from a third-person than from a first-person perspective. Interestingly, on trials where extinction occurred, there was a bias towards reporting the 'active' object on the extinguished side, a reversal of the standard pattern of extinction, but only when objects were seen from a self-perspective. The data show that several factors contribute to the effects of action relations on attention, depending upon the familiarity of the object pairs and the reference frame in which the stimuli have been seen. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  11. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI.

    Science.gov (United States)

    Manor, Ran; Geva, Amir B

    2015-01-01

    Brain computer interfaces rely on machine learning (ML) algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for the use of single trial EEG classification in RSVP tasks. Deep neural networks have shown state of the art performance in computer vision and speech recognition and thus have great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five categories RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms.

  12. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI

    Directory of Open Access Journals (Sweden)

    Ran eManor

    2015-12-01

    Brain computer interfaces rely on machine learning algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for the use of single trial EEG classification in RSVP tasks. Deep neural networks have shown state of the art performance in computer vision and speech recognition and thus have great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five categories RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms.
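
    A minimal sketch of the kind of spatio-temporal convolutional network described in these two records is given below in PyTorch. The layer sizes, the five-class output, and the use of plain dropout (standing in for the paper's custom spatio-temporal regularization) are all assumptions made for illustration; this is not the authors' architecture.

```python
# Minimal sketch (not the authors' model) of a CNN for single-trial EEG classification
# in an RSVP task. Input shape: (batch, 1, electrodes, time samples). Plain dropout
# stands in for the paper's spatio-temporal regularization; all sizes are assumptions.
import torch
import torch.nn as nn

class RSVPNet(nn.Module):
    def __init__(self, n_channels=64, n_samples=256, n_classes=5):
        super().__init__()
        self.spatial = nn.Conv2d(1, 16, kernel_size=(n_channels, 1))            # mix electrodes
        self.temporal = nn.Conv2d(16, 32, kernel_size=(1, 11), padding=(0, 5))  # filter over time
        self.pool = nn.AvgPool2d(kernel_size=(1, 4))
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(32 * (n_samples // 4), n_classes)

    def forward(self, x):
        x = torch.relu(self.spatial(x))      # (B, 16, 1, T)
        x = torch.relu(self.temporal(x))     # (B, 32, 1, T)
        x = self.pool(x)                     # (B, 32, 1, T // 4)
        x = self.drop(x.flatten(start_dim=1))
        return self.fc(x)                    # class scores, e.g. 5 RSVP categories

model = RSVPNet()
dummy = torch.randn(8, 1, 64, 256)           # a batch of 8 mock EEG trials
print(model(dummy).shape)                    # torch.Size([8, 5])
```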

  13. Suppression of salient objects prevents distraction in visual search.

    Science.gov (United States)

    Gaspar, John M; McDonald, John J

    2014-04-16

    To find objects of interest in a cluttered and continually changing visual environment, humans must often ignore salient stimuli that are not currently relevant to the task at hand. Recent neuroimaging results indicate that the ability to prevent salience-driven distraction depends on the current level of attentional control activity in frontal cortex, but the specific mechanism by which this control activity prevents salience-driven distraction is still poorly understood. Here, we asked whether salience-driven distraction is prevented by suppressing salient distractors or by preferentially up-weighting the relevant visual dimension. We found that salient distractors were suppressed even when they resided in the same feature dimension as the target (that is, when dimensional weighting was not a viable selection strategy). Our neurophysiological measure of suppression--the PD component of the event-related potential--was associated with variations in the amount of time it took to perform the search task: distractors triggered the PD on fast-response trials, but on slow-response trials they triggered activity associated with working memory representation instead. These results demonstrate that during search salience-driven distraction is mitigated by a suppressive mechanism that reduces the salience of potentially distracting visual objects.

  14. Long-term visual object recognition memory in aged rats.

    Science.gov (United States)

    Platano, Daniela; Fattoretti, Patrizia; Balietti, Marta; Bertoni-Freddari, Carlo; Aicardi, Giorgio

    2008-04-01

    Aging is associated with memory impairments, but the neural bases of this process need to be clarified. To this end, behavioral protocols for memory testing may be applied to aged animals to compare memory performances with functional and structural characteristics of specific brain regions. Visual object recognition memory can be investigated in the rat using a behavioral task based on its spontaneous preference for exploring novel rather than familiar objects. We found that a behavioral task able to elicit long-term visual object recognition memory in adult Long-Evans rats failed in aged (25-27 months old) Wistar rats. Since no tasks effective in aged rats are reported in the literature, we changed the experimental conditions to improve consolidation processes to assess whether this form of memory can still be maintained for long term at this age: the learning trials were performed in a smaller box, identical to the home cage, and the inter-trial delays were shortened. We observed a reduction in anxiety in this box (as indicated by the lower number of fecal boli produced during habituation), and we developed a learning protocol able to elicit a visual object recognition memory that was maintained after 24 h in these aged rats. When we applied the same protocol to adult rats, we obtained similar results. This experimental approach can be useful to study functional and structural changes associated with age-related memory impairments, and may help to identify new behavioral strategies and molecular targets that can be addressed to ameliorate memory performances during aging.

  15. The Effects of Concurrent Verbal and Visual Tasks on Category Learning

    Science.gov (United States)

    Miles, Sarah J.; Minda, John Paul

    2011-01-01

    Current theories of category learning posit separate verbal and nonverbal learning systems. Past research suggests that the verbal system relies on verbal working memory and executive functioning and learns rule-defined categories; the nonverbal system does not rely on verbal working memory and learns non-rule-defined categories (E. M. Waldron…

  16. The visual encoding of tool-object affordances.

    Science.gov (United States)

    Natraj, N; Pella, Y M; Borghi, A M; Wheaton, L A

    2015-12-03

    The perception of tool-object pairs involves understanding their action-relationships (affordances). Here, we sought to evaluate how an observer visually encodes tool-object affordances. Eye-movements were recorded as right-handed participants freely viewed static, right-handed, egocentric tool-object images across three contexts: correct (e.g. hammer-nail), incorrect (e.g. hammer-paper), spatial/ambiguous (e.g. hammer-wood), and three grasp-types: no hand, functional grasp-posture (grasp hammer-handle), non-functional/manipulative grasp-posture (grasp hammer-head). There were three areas of interests (AOI): the object (nail), the operant tool-end (hammer-head), the graspable tool-end (hammer-handle). Participants passively evaluated whether tool-object pairs were functionally correct/incorrect. Clustering of gaze scanpaths and AOI weightings grouped conditions into three distinct grasp-specific clusters, especially across correct and spatial tool-object contexts and to a lesser extent within the incorrect tool-object context. The grasp-specific gaze scanpath clusters were reasonably robust to the temporal order of gaze scanpaths. Gaze was therefore automatically primed to grasp-affordances though the task required evaluating tool-object context. Participants also primarily focused on the object and the operant tool-end and sparsely attended to the graspable tool-end, even in images with functional grasp-postures. In fact, in the absence of a grasp, the object was foveally weighted the most, indicative of a possible object-oriented action priming effect wherein the observer may be evaluating how the tool engages on the object. Unlike the functional grasp-posture, the manipulative grasp-posture caused the greatest disruption in the object-oriented priming effect, ostensibly as it does not afford tool-object action due to its non-functional interaction with the operant tool-end that actually engages with the object (e.g., hammer-head to nail). The enhanced attention

  17. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex.

    Science.gov (United States)

    Jiang, Jiefeng; Summerfield, Christopher; Egner, Tobias

    2016-12-14

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might "spread" from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of

  18. Color-Function Categories that Prime Infants to Use Color Information in an Object Individuation Task

    Science.gov (United States)

    Wilcox, Teresa; Woods, Rebecca; Chapa, Catherine

    2008-01-01

    There is evidence for developmental hierarchies in the type of information to which infants attend when reasoning about objects. Investigators have questioned the origin of these hierarchies and how infants come to identify new sources of information when reasoning about objects. The goal of the present experiments was to shed light on this debate…

  19. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Performing tasks with a robot hand often requires a complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach combines advantageously classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in a RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently from the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensure stable grasp, and safe manipulation capability that will preserve the physical integrity of the object.
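
    Dynamic time warping, used in this record to compare tracked contours whose lengths change as the object deforms, reduces to a compact dynamic-programming recurrence. The sketch below is a generic DTW distance on mock 1-D contour signatures, not the paper's implementation; the signatures themselves are invented for illustration.

```python
# Generic dynamic time warping distance between two 1-D contour signatures of
# different lengths (mock data); illustrates the length-invariant comparison the
# record describes, not the paper's implementation.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])                       # local mismatch
            cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                 cost[i, j - 1],               # deletion
                                 cost[i - 1, j - 1])           # match
    return cost[n, m]

sig_a = np.sin(np.linspace(0, 2 * np.pi, 80))    # contour signature before deformation
sig_b = np.sin(np.linspace(0, 2 * np.pi, 120))   # same shape sampled at a different length
sig_c = np.cos(np.linspace(0, 4 * np.pi, 100))   # a different contour
print("same shape, different length:", round(dtw_distance(sig_a, sig_b), 2))
print("different shape:             ", round(dtw_distance(sig_a, sig_c), 2))
```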

  20. Cultural differences in visual object recognition in 3-year-old children.

    Science.gov (United States)

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Relating visual to verbal semantic knowledge: the evaluation of object recognition in prosopagnosia

    Science.gov (United States)

    Hanif, Hashim; Ashraf, Sohi

    2009-01-01

    Assessment of face specificity in prosopagnosia is hampered by difficulty in gauging pre-morbid expertise for non-face object categories, for which humans vary widely in interest and experience. In this study, we examined the correlation between visual and verbal semantic knowledge for cars to determine if visual recognition accuracy could be predicted from verbal semantic scores. Thirty-three healthy subjects and six prosopagnosic patients first rated their own knowledge of cars. They were then given a test of verbal semantic knowledge that presented them with the names of car models, to which they were to match the manufacturer. Lastly, they were given a test of visual recognition, presenting them with images of cars to which they were to provide information at three levels of specificity: model, manufacturer and decade of make. In controls, while self-ratings were only moderately correlated with either visual recognition or verbal semantic knowledge, verbal semantic knowledge was highly correlated with visual recognition, particularly for more specific levels of information. Item concordance showed that less-expert subjects were more likely to provide the most specific information (model name) for the image when they could also match the manufacturer to its name. Prosopagnosic subjects showed reduced visual recognition of cars after adjusting for verbal semantic scores. We conclude that visual recognition is highly correlated with verbal semantic knowledge, that formal measures of verbal semantic knowledge are a more accurate gauge of expertise than self-ratings, and that verbal semantic knowledge can be used to adjust tests of visual recognition for pre-morbid expertise in prosopagnosia. PMID:19805494

  2. Texas lignite and the visual resource: an objective approach to visual resource evaluation and management

    Science.gov (United States)

    Harlow C. Landphair

    1979-01-01

    This paper relates the evolution of an empirical model used to predict public response to scenic quality objectively. The text relates the methods used to develop the visual quality index model, explains the terms used in the equation and briefly illustrates how the model is applied and how it is tested. While the technical application of the model relies heavily on...

  3. Lifting a familiar object: visual size analysis, not memory for object weight, scales lift force.

    Science.gov (United States)

    Cole, Kelly J

    2008-07-01

    The brain can accurately predict the forces needed to efficiently manipulate familiar objects in relation to mechanical properties such as weight. These predictions involve memory or some type of central representation, but visual analysis of size also yields accurate predictions of the needed fingertip forces. This raises the issue of which process (weight memory or visual size analysis) is used during everyday life when handling familiar objects. Our aim was to determine if subjects use a sensorimotor memory of weight, or a visual size analysis, to predictively set their vertical lift force when lifting a recently handled object. Two groups of subjects lifted an opaque brown bottle filled with water (470 g) during the first experimental session, and then rested for 15 min in a different room. Both groups were told that they would lift the same bottle in their next session. However, the experimental group returned to lift a slightly smaller bottle filled with water (360 g) that otherwise was identical in appearance to the first bottle. The control group returned to lift the same bottle from the first session, which was only partially filled with water so that it also weighed 360 g. At the end of the second session subjects were asked if they observed any changes between sessions, but no subject indicated awareness of a specific change. An acceleration ratio was computed by dividing the peak vertical acceleration during the first lift of the second session by the average peak acceleration of the last five lifts during the first session. This ratio was >1 for the control subjects (1.30, SEM 0.08), indicating that they scaled their lift force for the first lift of the second session based on a memory of the (heavier) bottle from the first session. In contrast, the acceleration ratio was 0.94 (0.10) for the experimental group (P < 0.011). We conclude that the experimental group processed visual cues concerning the size of the bottle. These findings raise the…
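
    The acceleration ratio defined above reduces to a one-line computation: the peak vertical acceleration of the first lift in session two divided by the mean peak acceleration of the last five lifts in session one. The sketch below applies it to mock values; the numbers are illustrative, not the study's measurements.

```python
# Acceleration ratio as defined in the record, applied to mock peak accelerations.
import numpy as np

last_five_session1 = np.array([5.2, 5.0, 5.1, 4.9, 5.3])   # mock peak accelerations, session 1
first_lift_session2 = 6.6                                   # mock peak acceleration, session 2

ratio = first_lift_session2 / last_five_session1.mean()
print(f"acceleration ratio = {ratio:.2f} (>1 would suggest force scaled to the heavier remembered weight)")
```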

  4. Abnormalities of Object Visual Processing in Body Dysmorphic Disorder

    Science.gov (United States)

    Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.

    2013-01-01

    Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897

  5. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    Science.gov (United States)

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  6. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform and inferior temporal gyri were more activated by tasks with recognizable stimuli than by tasks with unrecognizable stimuli. We propose that the posterior parts of the fusiform and inferior temporal gyri, compared with the inferior occipital gyri, are involved in higher level integration, due to the involvement of re-entrant activation from stored structural knowledge. Evidence in favor of this interpretation comes from the additional finding that activation of the anterior part of the left fusiform gyrus and a more anterior part of the right inferior temporal gyrus, areas previously associated…

  7. Efficient Cross-Modal Transfer of Shape Information in Visual and Haptic Object Categorization

    Directory of Open Access Journals (Sweden)

    Nina Gaissert

    2011-10-01

    Categorization has traditionally been studied in the visual domain with only a few studies focusing on the abilities of the haptic system in object categorization. During the first years of development, however, touch and vision are closely coupled in the exploratory procedures used by the infant to gather information about objects. Here, we investigate how well shape information can be transferred between those two modalities in a categorization task. Our stimuli consisted of amoeba-like objects that were parametrically morphed in well-defined steps. Participants explored the objects in a categorization task either visually or haptically. Interestingly, both modalities led to similar categorization behavior suggesting that similar shape processing might occur in vision and haptics. Next, participants received training on specific categories in one of the two modalities. As would be expected, training increased performance in the trained modality; however, we also found significant transfer of training to the other, untrained modality after only relatively few training trials. Taken together, our results demonstrate that complex shape information can be transferred efficiently across the two modalities, which speaks in favor of multisensory, higher-level representations of shape.

  8. Visual Tracking Utilizing Object Concept from Deep Learning Network

    Science.gov (United States)

    Xiao, C.; Yilmaz, A.; Lia, S.

    2017-05-01

    Despite having achieved good performance, visual tracking is still an open area of research, especially when the target undergoes serious appearance changes that are not included in the model. So, in this paper, we replace the appearance model with a concept model which is learned from large-scale datasets using a deep learning network. The concept model is a combination of high-level semantic information that is learned from myriads of objects with various appearances. In our tracking method, we generate the target's concept by combining the learned object concepts from the classification task. We also demonstrate that the last convolutional feature map can be used to generate a heat map to highlight the possible location of the given target in new frames. Finally, in the proposed tracking framework, we utilize the target image, the search image cropped from the new frame and their heat maps as input into a localization network to find the final target position. Compared to the other state-of-the-art trackers, the proposed method shows comparable and at times better performance in real time.
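
    The heat-map localization step described here (collapsing the last convolutional feature map, mapping it back to image coordinates, and taking the peak response as the likely target location) can be illustrated in a few lines. In the sketch below the feature map is random mock data and the zoom-based upsampling is an assumed way of mapping activations back to pixels; it is not the paper's localization network.

```python
# Illustrative sketch: locate a target from a convolutional heat map by averaging
# channels, upsampling to image size, and taking the peak. Mock data, assumed method.
import numpy as np
from scipy.ndimage import zoom

feature_map = np.random.default_rng(4).random((256, 14, 14))   # (channels, H, W) mock activations
image_size = (224, 224)

heat = feature_map.mean(axis=0)                                # collapse channels -> (14, 14) map
heat = zoom(heat, (image_size[0] / heat.shape[0],
                   image_size[1] / heat.shape[1]), order=1)    # upsample to image resolution
row, col = np.unravel_index(np.argmax(heat), heat.shape)       # peak = most likely target location
print("predicted target location (row, col):", (row, col))
```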

  9. Characteristic and intermingled neocortical circuits encode different visual object discriminations.

    Science.gov (United States)

    Zhang, Guo-Rong; Zhao, Hua; Cook, Nathan; Svestka, Michael; Choi, Eui M; Jan, Mary; Cook, Robert G; Geller, Alfred I

    2017-07-28

    Synaptic plasticity and neural network theories hypothesize that the essential information for advanced cognitive tasks is encoded in specific circuits and neurons within distributed neocortical networks. However, these circuits are incompletely characterized, and we do not know if a specific discrimination is encoded in characteristic circuits among multiple animals. Here, we determined the spatial distribution of active neurons for a circuit that encodes some of the essential information for a cognitive task. We genetically activated protein kinase C pathways in several hundred spatially-grouped glutamatergic and GABAergic neurons in rat postrhinal cortex, a multimodal associative area that is part of a distributed circuit that encodes visual object discriminations. We previously established that this intervention enhances accuracy for specific discriminations. Moreover, the genetically-modified, local circuit in POR cortex encodes some of the essential information, and this local circuit is preferentially activated during performance, as shown by activity-dependent gene imaging. Here, we mapped the positions of the active neurons, which revealed that two image sets are encoded in characteristic and different circuits. While characteristic circuits are known to process sensory information, in sensory areas, this is the first demonstration that characteristic circuits encode specific discriminations, in a multimodal associative area. Further, the circuits encoding the two image sets are intermingled, and likely overlapping, enabling efficient encoding. Consistent with reconsolidation theories, intermingled and overlapping encoding could facilitate formation of associations between related discriminations, including visually similar discriminations or discriminations learned at the same time or place. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Temporal buffering and visual capacity: the time course of object formation underlies capacity limits in visual cognition.

    Science.gov (United States)

    Wutz, Andreas; Melcher, David

    2013-07-01

    Capacity limits are a hallmark of visual cognition. The upper boundary of our ability to individuate and remember objects is well known but, despite its central role in visual information processing, not well understood. Here, we investigated the role of temporal limits in the perceptual processes of forming "object files." Specifically, we examined the two fundamental mechanisms of object file formation, individuation and identification, by selectively interfering with visual processing using forward and backward masking with variable stimulus onset asynchronies. While target detection was almost unaffected by these two types of masking, they showed distinct effects on the two different stages of object formation. Forward "integration" masking selectively impaired object individuation, whereas backward "interruption" masking only affected identification and the consolidation of information into visual working memory. We therefore conclude that the inherent temporal dynamics of visual information processing are an essential component in creating the capacity limits in object individuation and visual working memory.

  11. Visual object recognition and attention in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Meppelink, Anne Marthe; Koerts, Janneke; Borg, Maarten; Leenders, Klaus Leonard; van Laar, Teus

    2008-10-15

    Visual hallucinations (VH) are common in Parkinson's disease (PD) and are hypothesized to be due to impaired visual perception and attention deficits. We investigated whether PD patients with VH showed attention deficits, a more specific impairment of higher order visual perception, or both. Forty-two volunteers participated in this study, including 14 PD patients with VH, 14 PD patients without VH and 14 healthy controls (HC), matched for age, gender, education level and level of executive function. We created movies with images of animals, people, and objects dynamically appearing out of random noise. Time until recognition of the image was recorded. Sustained attention was tested using the Test of Attentional Performance. PD patients with VH recognized all images but were significantly slower in image recognition than both PD patients without VH and HC. PD patients with VH showed decreased sustained attention compared to PD patients without VH, who in turn performed worse than HC. In conclusion, the recognition of objects is intact in PD patients with VH; however, these patients were significantly slower in image recognition than patients without VH and HC, which was not explained by executive dysfunction. Both image recognition speed and sustained attention decline in PD, and more steeply once VH start to occur. (c) 2008 Movement Disorder Society.

  12. The Internal Structure of "Chaos": Letter Category Determines Visual Word Perceptual Units

    Science.gov (United States)

    Chetail, Fabienne; Content, Alain

    2012-01-01

    The processes and the cues determining the orthographic structure of polysyllabic words remain far from clear. In the present study, we investigated the role of letter category (consonant vs. vowels) in the perceptual organization of letter strings. In the syllabic counting task, participants were presented with written words matched for the…

  13. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    Science.gov (United States)

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  15. Visual working memory capacity and stimulus categories: a behavioral and electrophysiological investigation

    NARCIS (Netherlands)

    Diamantopoulou, Sofia; Poom, Leo; Klaver, Peter; Talsma, D.

    2011-01-01

    It has recently been suggested that visual working memory capacity may vary depending on the type of material that has to be memorized. Here, we use a delayed match-to-sample paradigm and event-related potentials (ERP) to investigate the neural correlates that are linked to these changes in

  16. Visual space and object space in the cerebral cortex of retinal disease patients.

    Directory of Open Access Journals (Sweden)

    Elfi Goesaert

    Full Text Available The lower areas of the hierarchically organized visual cortex are strongly retinotopically organized, with strong responses to specific retinotopic stimuli, and no response to other stimuli outside these preferred regions. Higher areas in the ventral occipitotemporal cortex show a weak eccentricity bias and are mainly sensitive to object category (e.g., faces versus buildings). This study investigated how the mapping of eccentricity and category sensitivity using functional magnetic resonance imaging is affected by a retinal lesion in two very different low vision patients: a patient with a large central scotoma affecting central input to the retina (juvenile macular degeneration), and a patient in whom input to the peripheral retina is lost (retinitis pigmentosa). From the retinal degeneration, we can predict specific losses of retinotopic activation. These predictions were confirmed when comparing stimulus activations with a no-stimulus fixation baseline. At the same time, however, seemingly contradictory patterns of activation, unexpected given the retinal degeneration, were observed when different stimulus conditions were directly compared. These unexpected activations were due to position-specific deactivations, indicating the importance of investigating absolute activation (relative to a no-stimulus baseline) rather than relative activation (comparing different stimulus conditions). Data from two controls with simulated scotomas that matched the lesions in the two patients also showed that retinotopic mapping results could be explained by a combination of activations at the stimulated locations and deactivations at unstimulated locations. Category sensitivity was preserved in the two patients. In sum, when we take into account the full pattern of activations and deactivations elicited in retinotopic cortex and throughout the ventral object vision pathway in low vision patients, the pattern of (de)activation is consistent with the retinal loss.

  17. Working with visually impaired students: Strategies developed in the transition from 2D geometrical objects to 3D geometrical objects

    OpenAIRE

    Papadaki, Chrysi

    2015-01-01

    In this paper, I present some of the results of research carried out during my master's studies that aimed to examine the strategies visually impaired students develop while coping with the transition from 2-dimensional (2D) to 3-dimensional (3D) geometrical objects, and also their correlation with the concepts of visualization, haptic perception, gestures and language. A teaching experiment took place in a support unit for visually impaired students...

  18. Category Specific Knowledge Modulate Capacity Limitations of Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Watanabe, Katsumi; Sørensen, Thomas Alrik

    2016-01-01

    We explore whether expertise can modulate the capacity of visual short-term memory, as some seem to argue that training affects the capacity of short-term memory [13] while others are not able to find this modulation [12]. We extend on a previous study [3] demonstrating expertise effects … and expert observers (Japanese university students). For both the picture and the letter condition we find no performance difference in memory capacity; however, in the critical hiragana condition we demonstrate a systematic difference relating to expertise differences between the groups. These results are in line with the theoretical interpretation that visual short-term memory reflects the sum of the reverberating feedback loops to representations in long-term memory.

  19. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli

    OpenAIRE

    Jamie eWard; Peter eHovard; Alicia eJones; Nicolas eRothen

    2013-01-01

    Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don’t induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, par...

  20. How low can you go? Changing the resolution of novel complex objects in visual working memory according to task demands

    Science.gov (United States)

    Allon, Ayala S.; Balaban, Halely; Luria, Roy

    2014-01-01

    In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants performed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed to find a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item's resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms. PMID:24734026

  1. Visual Field Preferences of Object Analysis for Grasping with One Hand

    Directory of Open Access Journals (Sweden)

    Ada eLe

    2014-10-01

    Full Text Available When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al. 2007; Rice et al. 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier 2013a, 2013b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2013). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields.

  2. Social Categories Shape the Neural Representation of Emotion: Evidence from a Visual Face Adaptation Task.

    Directory of Open Access Journals (Sweden)

    Marte eOtten

    2012-02-01

    Full Text Available A number of recent behavioral studies have shown that emotional expressions are perceived differently depending on the race of a face, and that the perception of race cues is influenced by emotional expressions. However, neural processes related to the perception of invariant cues that indicate the identity of a face (such as race) are often described as proceeding independently of processes related to the perception of cues that can vary over time (such as emotion). Using a visual face adaptation paradigm, we tested whether these behavioral interactions between emotion and race also reflect an interdependent neural representation of emotion and race. We compared visual emotion aftereffects when the adapting face and the ambiguous test face differed in race or not. Emotion aftereffects were much smaller in different-race trials than in same-race trials, indicating that the neural representation of a facial expression differs significantly depending on whether the emotional face is black or white. It thus seems that invariable cues such as race interact with variable face cues such as emotion not just at a response level, but also at the level of perception and neural representation.

  3. Towards a unified model of face and object recognition in the human visual system

    Directory of Open Access Journals (Sweden)

    Guy eWallis

    2013-08-01

    Full Text Available Our understanding of the mechanisms and neural substrates underlying visual recognition in humans has made considerable progress over the past thirty years. During this period a divide has developed between the fields of object and face recognition. In the psychological literature, in particular, there has been a palpable disconnect between the two fields. This paper follows a trend in part of the face-recognition literature to try to reconcile what we know about these two forms of recognition by considering the effects of learning. Taking a widely accepted, self-organizing model of object recognition, this paper explains how such a system is affected by repeated exposure to specific stimulus classes. In so doing, it explains how many aspects of recognition generally regarded as unusual to faces (holistic processing, configural processing, sensitivity to inversion, the other race effect, the prototype effect, etc.) are emergent properties of category-specific learning within such a system. Overall, the paper describes how a single model of recognition learning can and does produce the two, apparently very different types of stimulus representation associated with faces and objects.

  4. Object versus spatial visual mental imagery in patients with schizophrenia

    NARCIS (Netherlands)

    Aleman, A; de Haan, EHF; Kahn, RS

    Objective: Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with

  5. How Does Using Object Names Influence Visual Recognition Memory?

    Science.gov (United States)

    Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel

    2013-01-01

    Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…

  6. Uncertainty-aware video visual analytics of tracked moving objects

    Directory of Open Access Journals (Sweden)

    Markus Höferlin

    2011-01-01

    Full Text Available Vast amounts of video data render manual video analysis useless while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach exploiting the visual analytics methodology. This involves the user in the iterative process of exploration, hypotheses generation, and their verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine adopting the VideoPerpetuoGram (VPG) for visualization and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and grant fuzzy hypothesis formulation to interact with the machine. Finally, we demonstrate the effectiveness of our approach by the video analysis mini challenge which was part of the IEEE Symposium on Visual Analytics Science and Technology 2009.
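
    The "interactive filter definitions on trajectory features" mentioned above can be sketched as follows. This is only an illustration under assumed feature names and thresholds (speed, path length), not the VideoPerpetuoGram interface or its actual feature set.

```python
import numpy as np

def trajectory_features(points, fps=25.0):
    """points: (N, 2) per-frame (x, y) positions of one tracked object."""
    steps = np.diff(points, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    return {"path_length": step_len.sum(),
            "mean_speed": step_len.mean() * fps,
            "duration_s": len(points) / fps}

def apply_filter(trajectories, min_speed=5.0, min_length=50.0):
    """Keep trajectories whose features pass the user-defined thresholds."""
    kept = []
    for t in trajectories:
        f = trajectory_features(t)
        if f["mean_speed"] >= min_speed and f["path_length"] >= min_length:
            kept.append(t)
    return kept

# Hypothetical tracked objects: random walks standing in for tracker output.
rng = np.random.default_rng(1)
tracks = [np.cumsum(rng.normal(size=(100, 2)), axis=0) for _ in range(10)]
print(len(apply_filter(tracks)))
```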

  8. Hydrogel Inlay for Presbyopia: Objective and Subjective Visual Outcomes.

    Science.gov (United States)

    Yoo, Aeri; Kim, Jae Yong; Kim, Myoung Joon; Tchah, Hungwon

    2015-07-01

    To evaluate changes in visual performance and ocular optical quality after implantation of a corneal hydrogel inlay as a treatment for presbyopia. A Raindrop Near Vision Inlay (ReVision Optics, Lake Forest, CA) was implanted monocularly on the stromal bed of a femtosecond laser-assisted corneal flap in the non-dominant eyes of 22 patients with emmetropic presbyopia (preoperative spherical equivalent range: -0.50 to 1.00 diopters). Efficacy was determined by measuring near and distance visual acuities and ocular aberrations, and satisfaction was assessed by a patient questionnaire. The preoperative monocular uncorrected near visual acuity of the inlay-implanted eye was 20/129 ± 1 Snellen (range: 20/135 to 20/61 Snellen) and improved to 20/35 ± 2 Snellen (range: 20/61 to 20/20 Snellen) (P …) … presbyopia with only moderate effect on visual quality. However, satisfaction with this therapy was relatively lower in these Korean patients than that reported previously in Western patients. Copyright 2015, SLACK Incorporated.

  9. Aging and visual short-term memory: effects of object type and information load.

    Science.gov (United States)

    Vaughan, Leslie; Hartman, Marilyn

    2010-01-01

    Previous research has observed that the size of age differences in short-term memory (STM) depends on the type of material to be remembered, but has not identified the mechanism underlying this pattern. The current study focused on visual STM and examined the contribution of information load, as estimated by the rate of visual search, to STM for two types of stimuli - meaningful and abstract objects. Results demonstrated higher information load and lower STM for abstract objects. Age differences were greater for abstract than meaningful objects in visual search, but not in STM. Nevertheless, older adults demonstrated a decreased capacity in visual STM for meaningful objects. Furthermore, in support of Salthouse's processing speed theory, controlling for search rates eliminated all differences in STM related to object type and age. The overall pattern of findings suggests that STM for visual objects is dependent upon processing rate, regardless of age or object type.

  10. 1/f 2 Characteristics and isotropy in the fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs.

    Directory of Open Access Journals (Sweden)

    Michael Koch

    Full Text Available Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power-law with increasing spatial frequency (1/f(2) characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Further on, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f(2) characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic

  11. 1/f 2 Characteristics and isotropy in the fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs.

    Science.gov (United States)

    Koch, Michael; Denzler, Joachim; Redies, Christoph

    2010-08-19

    Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power-law with increasing spatial frequency (1/f(2) characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Further on, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f(2) characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains
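
    The core measurement in this record, the radially averaged Fourier power spectrum and its log-log slope (about -2 for 1/f(2) images), can be sketched generically as follows; this is a simplified illustration for a square grayscale image, not the authors' exact pipeline.

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged (1D) Fourier power spectrum of a square grayscale image."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    n = image.shape[0]
    y, x = np.indices(power.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    radial_mean = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    return radial_mean[1:n // 2]  # drop DC, keep frequencies up to Nyquist

def spectral_slope(radial_mean):
    """Slope of log power versus log spatial frequency (about -2 for natural scenes)."""
    freqs = np.arange(1, len(radial_mean) + 1)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean), 1)
    return slope

img = np.random.default_rng(2).random((256, 256))  # white noise: slope near 0
print(round(spectral_slope(radial_power_spectrum(img)), 2))
```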

  12. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.

  13. Is that a belt or a snake? object attentional selection affects the early stages of visual sensory processing

    Directory of Open Access Journals (Sweden)

    Zani Alberto

    2012-02-01

    Full Text Available Abstract Background: There is at present growing empirical evidence, deriving from different lines of ERP research, that, unlike previously observed, the earliest sensory visual response, known as the C1 component or P/N80 and generated within the striate cortex, might be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions, such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload have also been related to C1 modulations, although at least the workload status has received controversial indications. No information is, however, available at present for object attentional selection. Methods: In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target category within a relevant spatial location, while ignoring the other shape categories within this location, and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions: ERP findings showed that visual processing was modulated by both shape- and location-relevance per se, beginning separately at the latency of the early phase of a precocious negativity (60-80 ms) at mesial scalp sites, consistent with the C1 component, and a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as later-latency components. These findings support the views that (1) V1 may be precociously modulated by direct top-down influences, and participates in object, besides simple

  14. The Effects of Visual Degradation on Attended Objects and the Ability to Process Unattended Objects within the Visual Array

    Science.gov (United States)

    2010-09-01

    …to process both attended and unattended objects, it should be possible to tax the cognitive mechanism enough so that degradation to the attended object… One cognitive mechanism could be processing all of the objects presented on the screen at one time, and this study may have failed to tax that

  15. Visual Object Recognition and Attention in Parkinson's Disease Patients with Visual Hallucinations

    NARCIS (Netherlands)

    Meppelink, Anne Marthe; Koerts, Janneke; Borg, Maarten; Leenders, Klaus Leonard; van Laar, Teus

    2008-01-01

    Visual hallucinations (VH) are common in Parkinson's disease (PD) and are hypothesized to be due to impaired visual perception and attention deficits. We investigated whether PD patients with VH showed attention deficits, a more specific impairment of higher order visual perception, or both. Forty-two

  16. Modes of Effective Connectivity within Cortical Pathways Are Distinguished for Different Categories of Visual Context: An fMRI Study

    Directory of Open Access Journals (Sweden)

    Qiong Wu

    2017-05-01

    Full Text Available Context contributes to accurate and efficient information processing. To reveal the dynamics of the neural mechanisms that underlie the processing of visual contexts during the recognition of the color, shape, and 3D structure of objects, we carried out functional magnetic resonance imaging (fMRI) of subjects while they judged the contextual validity of the three visual contexts. Our results demonstrated that the modes of effective connectivity in the cortical pathways, as well as the patterns of activation in these pathways, were dynamic, depending on the nature of the visual contexts. While the fusiform gyrus, superior parietal lobe, and inferior prefrontal gyrus were activated by all three visual contexts, the temporal and parahippocampal gyrus/amygdala (PHG/Amg) cortices were activated only by the color context. We further carried out dynamic causal modeling (DCM) analysis and revealed the nature of the effective connectivity involved in the three types of contextual information processing. DCM showed that there were dynamic connections and collaborations among the brain regions belonging to the previously identified ventral and dorsal visual pathways.

  17. Is it a face of a woman or a man? Visual mismatch negativity is sensitive to gender category

    Directory of Open Access Journals (Sweden)

    Krisztina eKecskes-Kovacs

    2013-09-01

    Full Text Available The present study investigated whether gender information for human faces is represented by the predictive mechanism indexed by the visual mismatch negativity (vMMN) event-related brain potential (ERP). While participants performed a continuous size-change-detection task, random sequences of cropped faces were presented in the background in an oddball setting: either various female faces were presented infrequently among various male faces, or vice versa. In Experiment 1 the inter-stimulus interval (ISI) was 400 ms, while in Experiment 2 the ISI was 2250 ms. The ISI difference had only a small effect on the P1 component; however, the subsequent negativity (N1/N170) was larger and more widely distributed at the longer ISI, showing different aspects of stimulus processing. As the deviant-minus-standard ERP difference, a parieto-occipital negativity (vMMN) emerged in the 200-500 ms latency range (~350 ms peak latency) in both experiments. We argue that the regularity of gender in the photographs is automatically registered, and that the violation of the gender category is reflected by the vMMN. In conclusion, the results can be interpreted as evidence for the automatic activity of a predictive brain mechanism in the case of an ecologically valid category.

  18. Shape similarity, better than semantic membership, accounts for the structure of visual object representations in a population of monkey inferotemporal neurons.

    Directory of Open Access Journals (Sweden)

    Carlo Baldassi

    Full Text Available The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex.
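
    The multivariate analysis described here can be approximated with a generic sketch: population responses are turned into an object-by-object dissimilarity matrix and then clustered. The statistical-physics clustering algorithm the authors mention is not reproduced; ordinary hierarchical clustering and random data are used as stand-ins.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical population responses: rows = visual objects, columns = neurons.
rng = np.random.default_rng(3)
responses = rng.random((40, 100))

# Dissimilarity between objects based on their population response patterns
# (1 - Pearson correlation, a common choice in this literature).
dissim = pdist(responses, metric="correlation")

# Agglomerative clustering as a stand-in for the authors' algorithm.
tree = linkage(dissim, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")
print(squareform(dissim).shape, labels[:10])
```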

  19. Object Lessons: Teaching Math through the Visual Arts, K-5

    Science.gov (United States)

    Holtzman, Caren; Susholtz, Lynn

    2011-01-01

    When Caren Holtzman and Lynn Susholtz look around a classroom, they see "a veritable goldmine of mathematical investigations" involving number, measurement, size, shape, symmetry, ratio, and proportion. They also think of the ways great artists have employed these concepts in their depictions of objects and space--for example, Picasso's use of…

  20. Visual Servoing for Object Manipulation: A Case Study in Slaughterhouse

    DEFF Research Database (Denmark)

    Wu, Haiyan; Andersen, Thomas Timm; Andersen, Nils Axel

    2016-01-01

    … An online and offline combined path planning algorithm is proposed to generate the desired path for the robot control. An industrial robot arm is applied to execute the path. The system is implemented for a lab-scale experiment, and the results show a high success rate of object manipulation in the pick...

  1. A Proto-Object-Based Computational Model for Visual Saliency

    NARCIS (Netherlands)

    Yanulevskaya, V.; Uijlings, J.; Geusebroek, J.-M.; Sebe, N.; Smeulders, A.

    2013-01-01

    State-of-the-art bottom-up saliency models often assign high saliency values at or near high-contrast edges, whereas people tend to look within the regions delineated by those edges, namely the objects. To resolve this inconsistency, in this work we estimate saliency at the level of coherent image

  2. Development of visual systems for faces and objects: further evidence for prolonged development of the face system.

    Directory of Open Access Journals (Sweden)

    Bozana Meinhardt-Injac

    Full Text Available BACKGROUND: The development of face and object processing has attracted much attention; however, studies that directly compare processing of both visual categories across age are rare. In the present study, we compared the developmental trajectories of face and object processing in younger children (8-10 years), older children (11-13 years), adolescents (14-16 years), and adults (20-37 years). METHODOLOGY/PRINCIPAL FINDINGS: We used a congruency paradigm in which subjects compared the internal features of two stimuli, while the (unattended) external features either agreed or disagreed independent of the identity of the internal features. We found a continuous increase in matching accuracy for faces and watches across childhood and adolescence, with different magnitudes for both visual categories. In watch perception, adult levels were reached at the age of 14-16, but not in face perception. The effects of context and inversion, as measures of holistic and configural processing, were clearly restricted to faces in all age groups. This finding suggests that different mechanisms are involved in face and object perception at any age tested. Moreover, the modulation of context and inversion effects by exposure duration was strongly age-dependent, with the strongest age-related differences found for brief timings below 140 ms. CONCLUSIONS/SIGNIFICANCE: The results of the present study suggest prolonged development of face-specific processing up to young adulthood. The improvement in face processing is qualitatively different from the improvement of general perceptual and cognitive ability.

  3. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro eEguchi

    2015-08-01

    Full Text Available Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, but provides an essential foundation from which the brain is subsequently able to recognise the whole object.

  4. Hand-crafted programming objects and visual perception

    CSIR Research Space (South Africa)

    Smith, Adrew C

    2009-05-01

    Full Text Available IST-Africa 2009 Conference Proceedings, Paul Cunningham and Miriam Cunningham (Eds), IIMC International Information Management Corporation, 2009. ISBN: 978-1-905824-11-3. Copyright © 2009 The authors. www.IST-Africa.org/Conference2009

  5. MM-MDS: A Multidimensional Scaling Database with Similarity Ratings for 240 Object Categories from the Massive Memory Picture Database: e112644

    National Research Council Canada - National Science Library

    Michael C Hout; Stephen D Goldinger; Kyle J Brady

    2014-01-01

      Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli...
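
    Similarity ratings of the kind this database provides are typically fed into multidimensional scaling (MDS) to recover a low-dimensional psychological space. A minimal sketch, assuming a hypothetical rating matrix rather than the actual MM-MDS data:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical ratings (1 = very dissimilar, 7 = identical) for 20 objects.
rng = np.random.default_rng(4)
sim = rng.integers(1, 8, size=(20, 20)).astype(float)
sim = (sim + sim.T) / 2          # symmetrize the rating matrix
np.fill_diagonal(sim, 7)         # an object is maximally similar to itself

dissim = sim.max() - sim         # convert similarities to dissimilarities

# Two-dimensional embedding whose distances approximate the dissimilarities.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)
print(embedding.shape)  # (20, 2)
```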

  6. Objective assessment of visual attention in mild traumatic brain injury (mTBI) using visual-evoked potentials (VEP).

    Science.gov (United States)

    Yadav, Naveen K; Ciuffreda, Kenneth J

    2015-01-01

    To quantify visual attention objectively using the visual-evoked potential (VEP) in those having mild traumatic brain injury (mTBI) with and without a self-reported attentional deficit. Subjects comprised 16 adults with mTBI: 11 with an attentional deficit and five without. Three test conditions were used to assess the visual attentional state and to quantify objectively the VEP alpha-band attenuation ratio (AR) related to attention: (1) pattern VEP; (2) eyes-closed; and (3) eyes-closed number counting. The AR was calculated for both the individual and combined alpha frequencies (8-13 Hz). The objective results were compared to two subjective tests of visual and general attention (i.e. the VSAT and ASRS, respectively). The AR for both the individual and combined alpha frequencies was found to be abnormal in those with mTBI having an attentional deficit. In contrast, the AR was normal in those with mTBI but without an attentional deficit. The AR correlated with the ASRS, but not with the VSAT, test scores. The objective and subjective tests were able to differentiate between those having mTBI with and without an attentional deficit. The proposed VEP protocol can be used in the clinic to detect and assess objectively and reliably a visual attentional deficit in the mTBI population.

  7. Object-related regularities are processed automatically: Evidence from the visual mismatch negativity

    Directory of Open Access Journals (Sweden)

    Dagmar eMüller

    2013-06-01

    Full Text Available One of the most challenging tasks of our visual system is to structure and integrate the enormous amount of incoming information into distinct coherent objects. It is an ongoing debate whether or not the formation of visual objects requires attention. Implicit behavioural measures suggest that object formation can occur for task-irrelevant and unattended visual stimuli. The present study investigated pre-attentive visual object formation by combining implicit behavioural measures and an electrophysiological indicator of pre-attentive visual irregularity detection, the visual mismatch negativity (vMMN) of the event-related potential. Our displays consisted of two symmetrically arranged, task-irrelevant ellipses, the objects. In addition, there were two discs of either high or low luminance presented on the objects, which served as targets. Participants had to indicate whether the targets were of the same or different luminance. In separate conditions, the targets usually were either enclosed in the same object or in two different objects (standards). Occasionally, the regular target-to-object assignment was changed (deviants). That is, standards and deviants were exclusively defined on the basis of the task-irrelevant target-to-object assignment, not on the basis of some feature regularity. Although participants noticed neither the regularity nor the occurrence of the deviation in the sequences, task-irrelevant deviations resulted in increased reaction times. Moreover, compared with physically identical standard displays, deviating target-to-object assignments elicited a negative potential in the 246-280 ms time window over posterio-temporal electrode positions, which was identified as vMMN. With variable resolution electromagnetic tomography (VARETA), object-related vMMN was localized to the inferior temporal gyrus. Our results support the notion that the visual system automatically structures even task-irrelevant aspects of the incoming

  8. Influence of Active Manipulation of an Object on Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Kazumichi Matsumiya

    2011-10-01

    Full Text Available When we manipulate an object by hand, the movements of the object are produced with the visual and haptic movements of our hands. Studies of multimodal perception show the interaction between touch and vision in visual motion perception (1,2). The influence of touch on visual motion perception is shown by the fact that adaptation to tactile motion across the observer's hand induces a visual motion aftereffect, which is a visual illusion in which exposure to a moving visual pattern makes a subsequently viewed stationary visual pattern appear to move in the opposite direction (2). This visuo-tactile interaction plays an important role in skillful manipulation (3). However, it is not clear how haptic information influences visual motion perception. We measured the strength of a visual motion aftereffect after visuo-haptic adaptation to a windmill rotated by observers. We found that the visual motion aftereffect was enhanced when observers actively rotated the windmill. The motion aftereffect was not enhanced when the observer's hand was passively moved. Our results suggest the presence of a visual motion system that is linked to the intended haptic movements.

  9. Blindness to background: an inbuilt bias for visual objects.

    Science.gov (United States)

    O'Hanlon, Catherine G; Read, Jenny C A

    2017-09-01

    Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. © 2016 John Wiley & Sons Ltd.

  10. The early development of object knowledge: A study of infants' visual anticipations during action observation

    NARCIS (Netherlands)

    Hunnius, S.; Bekkering, H.

    2010-01-01

    This study examined the developing object knowledge of infants through their visual anticipation of action targets during action observation. Infants (6, 8, 12, 14, and 16 months) and adults watched short movies of a person using 3 different everyday objects. Participants were presented with objects

  11. Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants

    Science.gov (United States)

    Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.

    2014-01-01

    Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: Six-month-old infants can store in VSTM information about only a simple object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…

  12. The Strategic Retention of Task-Relevant Objects in Visual Working Memory

    Science.gov (United States)

    Maxcey-Richard, Ashleigh M.; Hollingworth, Andrew

    2013-01-01

    The serial and spatially extended nature of many real-world visual tasks suggests the need for control over the content of visual working memory (VWM). We examined the management of VWM in a task that required participants to prioritize individual objects for retention during scene viewing. There were 5 principal findings: (a) Strategic retention…

  13. Hard-wired feed-forward visual mechanisms of the brain compensate for affine variations in object recognition.

    Science.gov (United States)

    Karimi-Rouzbahani, Hamid; Bagheri, Nasour; Ebrahimpour, Reza

    2017-05-04

    Humans perform object recognition effortlessly and accurately. However, it is unknown how the visual system copes with variations in objects' appearance and the environmental conditions. Previous studies have suggested that affine variations such as size and position are compensated for in the feed-forward sweep of visual information processing while feedback signals are needed for precise recognition when encountering non-affine variations such as pose and lighting. Yet, no empirical data exist to support this suggestion. We systematically investigated the impact of the above-mentioned affine and non-affine variations on the categorization performance of the feed-forward mechanisms of the human brain. For that purpose, we designed a backward-masking behavioral categorization paradigm as well as a passive viewing EEG recording experiment. On a set of varying stimuli, we found that the feed-forward visual pathways contributed more dominantly to the compensation of variations in size and position compared to lighting and pose. This was reflected in both the amplitude and the latency of the category separability indices obtained from the EEG signals. Using a feed-forward computational model of the ventral visual stream, we also confirmed a more dominant role for the feed-forward visual mechanisms of the brain in the compensation of affine variations. Taken together, our experimental results support the theory that non-affine variations such as pose and lighting may need top-down feedback information from higher areas such as IT and PFC for precise object recognition. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
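
    The "category separability index" is not fully specified in this record; a common proxy is time-resolved, cross-validated decoding accuracy, sketched below on hypothetical single-trial EEG data (trials x channels x time points).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical EEG data and binary category labels (random, so chance ~ 0.5).
rng = np.random.default_rng(5)
eeg = rng.normal(size=(200, 64, 100))       # trials x channels x time points
labels = rng.integers(0, 2, size=200)

def separability_over_time(eeg, labels, cv=5):
    """Cross-validated decoding accuracy at each time point, as a proxy index."""
    scores = []
    for t in range(eeg.shape[2]):
        clf = LinearDiscriminantAnalysis()
        scores.append(cross_val_score(clf, eeg[:, :, t], labels, cv=cv).mean())
    return np.array(scores)

index = separability_over_time(eeg, labels)
print(index.shape, round(float(index.mean()), 2))
```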

  14. Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator

    Directory of Open Access Journals (Sweden)

    Huangsheng Xie

    2014-01-01

    Full Text Available This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. Firstly, a new kind of artificial object mark based on the QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Secondly, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator's kinematic model is established, the analytical inverse kinematic solutions of the generalized manipulator are acquired, and a novel active-vision-based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to complete the object grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by household objects' variety and operation complexity, and the proposed visual servo scheme makes it possible for the service robot to grasp and deliver objects efficiently.
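
    The QR-mark detection step can be sketched with OpenCV's built-in detector together with a simple proportional image-plane error; this is an illustration only, not the authors' switching control law, and the gain and desired image position are arbitrary assumptions.

```python
import cv2
import numpy as np

detector = cv2.QRCodeDetector()

def image_based_error(frame, target_center, gain=0.5):
    """Detect the QR object mark and return a proportional image-plane command
    that drives the mark's centre toward the desired image position."""
    data, points, _ = detector.detectAndDecode(frame)
    if points is None or len(points) == 0:
        return None, None                          # mark not visible
    center = points.reshape(-1, 2).mean(axis=0)    # centre of the four corners
    error = np.asarray(target_center, dtype=float) - center
    return data, gain * error                      # (decoded label, 2D command)

frame = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder image, no mark
label, cmd = image_based_error(frame, target_center=(320, 240))
print(label, cmd)
```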

  15. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    Science.gov (United States)

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.

  16. Virtual Exertions: a user interface combining visual information, kinesthetics and biofeedback for virtual object manipulation

    OpenAIRE

    Ponto, Kevin; Kimmel, Ryan; Kohlmann, Joe; Bartholomew, Aaron; Radwin, Robert G.

    2012-01-01

    Virtual Reality environments have the ability to present users with rich visual representations of simulated environments. However, means to interact with these types of illusions are generally unnatural in the sense that they do not match the methods humans use to grasp and move objects in the physical world. We demonstrate a system that enables users to interact with virtual objects with natural body movements by combining visual information, kinesthetics and biofeedback from electromyogram...

  17. Visual SLAM and Moving-object Detection for a Small-size Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yin-Tien Wang

    2010-09-01

    Full Text Available In this paper, a novel moving object detection (MOD) algorithm is developed and integrated with robot visual Simultaneous Localization and Mapping (vSLAM). The moving object is assumed to be a rigid body, and its coordinate system in space is represented by a position vector and a rotation matrix. The MOD algorithm is composed of detection of image features, initialization of image features, and calculation of object coordinates. Experiments were carried out on a small-size humanoid robot, and the results show that the proposed algorithm performs efficiently for robot visual SLAM and moving object detection.
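
    As a small aside on the pose representation described above, the sketch below (not the paper's MOD algorithm; names and values are illustrative) shows a rigid-body pose given by a position vector and a rotation matrix, and the mapping of object-frame feature points into the world frame.

```python
# Illustrative sketch (not the paper's MOD algorithm): a rigid-body pose as a
# position vector t and rotation matrix R, and the mapping of object-frame
# feature points into the world frame, p_world = R p_obj + t.
import numpy as np

def rotz(theta):
    """Rotation matrix about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotz(np.deg2rad(30.0))             # hypothetical object orientation
t = np.array([0.5, 0.2, 1.0])          # hypothetical object position (metres)

p_obj = np.array([[0.05, 0.00, 0.00],  # hypothetical features in the object frame
                  [0.00, 0.05, 0.00],
                  [0.00, 0.00, 0.05]])

p_world = (R @ p_obj.T).T + t          # the same features in the world frame
print(p_world)
```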

  18. Visual object and visuospatial cognition in Huntington's disease: implications for information processing in corticostriatal circuits.

    Science.gov (United States)

    Lawrence, A D; Watkins, L H; Sahakian, B J; Hodges, J R; Robbins, T W

    2000-07-01

    The primate visual system contains two major streams of visual information processing. The ventral stream is directed into the inferior temporal cortex and is concerned with visual object cognition, whereas the dorsal stream is directed into the posterior parietal cortex and is concerned with visuospatial cognition. Both of these processing streams send projections to the basal ganglia, and the ventral stream may also receive reciprocal connections from the basal ganglia. Although a role for the basal ganglia in visual object and visuospatial cognition has been suggested, little work has been carried out in this area in humans. The primary site of neuropathology in Huntington's disease is the basal ganglia, and hence Huntington's disease provides an important model for the role of the human basal ganglia in visual object and visuospatial cognition, and its breakdown in disease. We examined performance on a wide battery of tests of both visual object and visuospatial recognition memory, working memory, attention, associative learning and perception, enabling us to specify more fully the role of the basal ganglia in visual object and visuospatial cognition, and the disruption of these processes in Huntington's disease. Huntington's disease patients exhibited deficits on tests of pattern and spatial recognition memory; showed impaired simultaneous matching and delay-independent delayed matching-to-sample deficits; showed spared accuracy but impaired reaction times in visual search; were impaired in spatial but not visual object working memory; and showed impaired pattern-location associative learning. The results of our investigations suggest a particular role for the striatum in context-dependent action selection, in line with current computational theories of basal ganglia function.

  19. Visual field meridians modulate the reallocation of object-based attention.

    Science.gov (United States)

    Barnas, Adam J; Greenberg, Adam S

    2016-10-01

    Object-based attention (OBA) enhances processing within the boundaries of a selected object. Larger OBA effects have been observed for horizontal compared to vertical rectangles, which were eliminated when controlling for attention shifts across the visual field meridians. We aimed to elucidate the modulatory role of the meridians on OBA. We hypothesized that the contralateral organization of visual cortex accounts for these differences in OBA prioritization. Participants viewed "L"-shaped objects and, following a peripheral cue at the object vertex, detected the presence of a target at the cued location (valid), or at a non-cued location (invalid) offset either horizontally or vertically. In Experiment 1, the single displayed object contained components crossing both meridians. In Experiment 2, one cued object and one non-cued object were displayed such that both crossed the meridians. In Experiment 3, one cued object was sequestered into one screen quadrant, with its vertex either near or far from fixation. Results from Experiments 1 and 2 revealed a horizontal shift advantage (faster RTs for horizontal shifts across the vertical meridian compared to vertical shifts across the horizontal meridian), regardless of whether shifts take place within a cued object (Experiment 1) or between objects (Experiment 2). Results from Experiment 3 revealed no difference between horizontal and vertical shifts for objects that were positioned far from fixation, although the horizontal shift advantage reappeared for objects near fixation. These findings suggest a critical modulatory role of visual field meridians in the efficiency of reorienting object-based attention.

  20. Object class recognition based on compressive sensing with sparse features inspired by hierarchical model in visual cortex

    Science.gov (United States)

    Lu, Pei; Xu, Zhiyong; Yu, Huapeng; Chang, Yongxin; Fu, Chengyu; Shao, Jianxin

    2012-11-01

    According to models of object recognition in cortex, the brain uses a hierarchical approach in which simple, low-level features having high position and scale specificity are pooled and combined into more complex, higher-level features having greater location invariance. At higher levels, spatial structure becomes implicitly encoded into the features themselves, which may overlap, while explicit spatial information is coded more coarsely. In this paper, the importance of sparsity and localized patch features in a hierarchical model inspired by visual cortex is investigated. As in the model of Serre, Wolf, and Poggio, we first apply Gabor filters at all positions and scales; feature complexity and position/scale invariance are then built up by alternating template matching and max pooling operations. To improve generalization performance, sparsity is introduced and the data dimension is reduced by means of compressive sensing theory and a sparse representation algorithm. Within computational neuroscience, likewise, imposing sparsity on the number of feature inputs and on feature selection is critical for learning biologically plausible models from the statistics of natural images. A redundant dictionary of patch-based features that can distinguish the object class from other categories is then designed, and object recognition is implemented through iterative optimization. The method is tested on the UIUC car database. The success of this approach lends support to this account of object class recognition in visual cortex.
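
    The first two stages mentioned above (Gabor filtering at several orientations followed by local max pooling) can be sketched as below. This is only an HMAX-flavoured illustration, not the authors' code; the compressive-sensing and sparse-dictionary stages are omitted and all parameter values are arbitrary.

```python
# HMAX-flavoured sketch of Gabor filtering (S1-like) followed by local max
# pooling (C1-like). Not the authors' implementation; the compressive-sensing
# and sparse-coding stages described in the record are omitted.
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a Gabor kernel with orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam)

def s1_c1(image, n_orientations=4, pool=8):
    """Gabor responses at several orientations and their local max pool."""
    s1 = [np.abs(convolve(image, gabor_kernel(theta=np.pi * k / n_orientations)))
          for k in range(n_orientations)]
    c1 = [maximum_filter(r, size=pool)[::pool, ::pool] for r in s1]
    return np.stack(s1), np.stack(c1)

if __name__ == "__main__":
    img = np.random.rand(64, 64)          # stand-in for a grayscale image
    s1, c1 = s1_c1(img)
    print(s1.shape, c1.shape)             # (4, 64, 64) (4, 8, 8)
```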

  1. Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists’ Analytical Judgment Using Visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, Aritra; Burrows, Susannah M.; Han, Kyungsik; Rasch, Philip J.

    2017-04-15

    Scientists working in a particular domain often adhere to conventional data analysis and presentation methods and this leads to familiarity with these methods over time. But does high familiarity always lead to better analytical judgment? This question is especially relevant when visualizations are used in scientific tasks, as there can be discrepancies between visualization best practices and domain conventions. However, there is little empirical evidence of the relationships between scientists’ subjective impressions about familiar and unfamiliar visualizations and objective measures of their effect on scientific judgment. To address this gap and to study these factors, we focus on the climate science domain, specifically on visualizations used for comparison of model performance. We present a comprehensive user study with 47 climate scientists where we explored the following factors: i) relationships between scientists’ familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference.

  2. Tensor categories

    CERN Document Server

    Etingof, Pavel; Nikshych, Dmitri; Ostrik, Victor

    2015-01-01

    Is there a vector space whose dimension is the golden ratio? Of course not: the golden ratio is not an integer! But this can happen for generalizations of vector spaces, namely objects of a tensor category. The theory of tensor categories is a relatively new field of mathematics that generalizes the theory of group representations. It has deep connections with many other fields, including representation theory, Hopf algebras, operator algebras, low-dimensional topology (in particular, knot theory), homotopy theory, quantum mechanics and field theory, quantum computation, theory of motives, etc. This bo

  3. Deformation-specific and deformation-invariant visual object recognition: pose vs. identity recognition of people and deforming objects.

    Science.gov (United States)

    Webb, Tristan J; Rolls, Edmund T

    2014-01-01

    When we see a human sitting down, standing up, or walking, we can recognize one of these poses independently of the individual, or we can recognize the individual person, independently of the pose. The same issues arise for deforming objects. For example, if we see a flag deformed by the wind, either blowing out or hanging languidly, we can usually recognize the flag, independently of its deformation; or we can recognize the deformation independently of the identity of the flag. We hypothesize that these types of recognition can be implemented by the primate visual system using temporo-spatial continuity as objects transform as a learning principle. In particular, we hypothesize that pose or deformation can be learned under conditions in which large numbers of different people are successively seen in the same pose, or objects in the same deformation. We also hypothesize that person-specific representations that are independent of pose, and object-specific representations that are independent of deformation and view, could be built, when individual people or objects are observed successively transforming from one pose or deformation and view to another. These hypotheses were tested in a simulation of the ventral visual system, VisNet, that uses temporal continuity, implemented in a synaptic learning rule with a short-term memory trace of previous neuronal activity, to learn invariant representations. It was found that depending on the statistics of the visual input, either pose-specific or deformation-specific representations could be built that were invariant with respect to individual and view; or that identity-specific representations could be built that were invariant with respect to pose or deformation and view. We propose that this is how pose-specific and pose-invariant, and deformation-specific and deformation-invariant, perceptual representations are built in the brain.
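
    The synaptic learning rule with a short-term memory trace referred to above is commonly written as follows. This is a hedged sketch of the published VisNet-style trace rule; the notation is mine and may differ from the exact form used in this paper.

```latex
% Hedged sketch of the trace learning rule commonly used in VisNet-style
% models (notation mine; see the paper for the exact form used there).
% \bar{y}^{\tau}: short-term memory trace of the postsynaptic firing rate,
% x_j^{\tau}: presynaptic rate, \alpha: learning rate, \eta \in [0,1).
\begin{align}
  \bar{y}^{\tau}      &= (1-\eta)\, y^{\tau} + \eta\, \bar{y}^{\tau-1} \\
  \Delta w_{j}^{\tau} &= \alpha\, \bar{y}^{\tau} x_{j}^{\tau}
\end{align}
```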

  4. Deformation-specific and deformation-invariant visual object recognition: pose vs identity recognition of people and deforming objects

    Directory of Open Access Journals (Sweden)

    Tristan J Webb

    2014-04-01

    Full Text Available When we see a human sitting down, standing up, or walking, we can recognise one of these poses independently of the individual, or we can recognise the individual person, independently of the pose. The same issues arise for deforming objects. For example, if we see a flag deformed by the wind, either blowing out or hanging languidly, we can usually recognise the flag, independently of its deformation; or we can recognise the deformation independently of the identity of the flag. We hypothesize that these types of recognition can be implemented by the primate visual system using temporo-spatial continuity as objects transform as a learning principle. In particular, we hypothesize that pose or deformation can be learned under conditions in which large numbers of different people are successively seen in the same pose, or objects in the same deformation. We also hypothesize that person-specific representations that are independent of pose, and object-specific representations that are independent of deformation and view, could be built, when individual people or objects are observed successively transforming from one pose or deformation and view to another. These hypotheses were tested in a simulation of the ventral visual system, VisNet, that uses temporal continuity, implemented in a synaptic learning rule with a short-term memory trace of previous neuronal activity, to learn invariant representations. It was found that depending on the statistics of the visual input, either pose-specific or deformation-specific representations could be built that were invariant with respect to individual and view; or that identity-specific representations could be built that were invariant with respect to pose or deformation and view. We propose that this is how pose-specific and pose-invariant, and deformation-specific and deformation-invariant, perceptual representations are built in the brain.

  5. Supporting Sensemaking of Complex Objects with Visualizations: Visibility and Complementarity of Interactions

    Directory of Open Access Journals (Sweden)

    Kamran Sedig

    2016-10-01

    Full Text Available Making sense of complex objects is difficult, and typically requires the use of external representations to support cognitive demands while reasoning about the objects. Visualizations are one type of external representation that can be used to support sensemaking activities. In this paper, we investigate the role of two design strategies in making the interactive features of visualizations more supportive of users’ exploratory needs when trying to make sense of complex objects. These two strategies are visibility and complementarity of interactions. We employ a theoretical framework concerned with human–information interaction and complex cognitive activities to inform, contextualize, and interpret the effects of the design strategies. The two strategies are incorporated in the design of Polyvise, a visualization tool that supports making sense of complex four-dimensional geometric objects. A mixed-methods study was conducted to evaluate the design strategies and the overall usability of Polyvise. We report the findings of the study, discuss some implications for the design of visualization tools that support sensemaking of complex objects, and propose five design guidelines. We anticipate that our results are transferrable to other contexts, and that these two design strategies can be used broadly in visualization tools intended to support activities with complex objects and information spaces.

  6. BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?

    DEFF Research Database (Denmark)

    Bennedsen, Jens B.; Schulte, Carsten

    2010-01-01

    This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control...

  7. BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?

    Science.gov (United States)

    Bennedsen, Jens; Schulte, Carsten

    2010-01-01

    This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control-group experiment in an introductory programming…

  8. Perceptual Organization of Shape, Color, Shade, and Lighting in Visual and Pictorial Objects

    Directory of Open Access Journals (Sweden)

    Baingio Pinna

    2012-06-01

    Full Text Available The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of object perception and formation. Our hypothesis is that the main object properties are extracted in a sequential order, and that this is the same order in which artists and children of different ages use these properties to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

  9. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
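
    The distance-field construction described above can be sketched on the CPU as follows, assuming the object surface is available as a point sample; this stands in for, and is much simpler than, the paper's GPU polygon-culling scheme, and the ray-casting thickness/clearance step is omitted.

```python
# CPU sketch of the distance-field idea described above, assuming the object
# surface is given as a point sample. The paper's GPU polygon-culling scheme
# is replaced by a k-d tree nearest-neighbour query; thickness/clearance
# evaluation via ray casting is omitted.
import numpy as np
from scipy.spatial import cKDTree

def distance_field(surface_points, grid_min, grid_max, resolution=32):
    """Voxel grid of distances to the nearest surface sample."""
    axes = [np.linspace(grid_min[d], grid_max[d], resolution) for d in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    voxels = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
    dist, _ = cKDTree(surface_points).query(voxels)
    return dist.reshape(resolution, resolution, resolution)

if __name__ == "__main__":
    # Hypothetical surface sample: points on a unit sphere.
    pts = np.random.randn(2000, 3)
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    field = distance_field(pts, grid_min=(-1.5,) * 3, grid_max=(1.5,) * 3)
    print(field.shape, field.min(), field.max())
```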

  10. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    Science.gov (United States)

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  11. Visual hull method for tomographic PIV measurement of flow around moving objects

    Energy Technology Data Exchange (ETDEWEB)

    Adhikari, D.; Longmire, E.K. [University of Minnesota, Department of Aerospace Engineering and Mechanics, Minneapolis, MN (United States)

    2012-10-15

    Tomographic particle image velocimetry (PIV) is a recently developed method to measure three components of velocity within a volumetric space. We present a visual hull technique that automates identification and masking of discrete objects within the measurement volume, and we apply existing tomographic PIV reconstruction software to measure the velocity surrounding the objects. The technique is demonstrated by considering flow around falling bodies of different shape with Reynolds number ~1,000. Acquired image sets are processed using separate routines to reconstruct both the volumetric mask around the object and the surrounding tracer particles. After particle reconstruction, the reconstructed object mask is used to remove any ghost particles that otherwise appear within the object volume. Velocity vectors corresponding with fluid motion can then be determined up to the boundary of the visual hull without being contaminated or affected by the neighboring object velocity. Although the visual hull method is not meant for precise tracking of objects, the reconstructed object volumes nevertheless can be used to estimate the object location and orientation at each time step. (orig.)

  12. Visual hull method for tomographic PIV measurement of flow around moving objects

    Science.gov (United States)

    Adhikari, D.; Longmire, E. K.

    2012-10-01

    Tomographic particle image velocimetry (PIV) is a recently developed method to measure three components of velocity within a volumetric space. We present a visual hull technique that automates identification and masking of discrete objects within the measurement volume, and we apply existing tomographic PIV reconstruction software to measure the velocity surrounding the objects. The technique is demonstrated by considering flow around falling bodies of different shape with Reynolds number ~1,000. Acquired image sets are processed using separate routines to reconstruct both the volumetric mask around the object and the surrounding tracer particles. After particle reconstruction, the reconstructed object mask is used to remove any ghost particles that otherwise appear within the object volume. Velocity vectors corresponding with fluid motion can then be determined up to the boundary of the visual hull without being contaminated or affected by the neighboring object velocity. Although the visual hull method is not meant for precise tracking of objects, the reconstructed object volumes nevertheless can be used to estimate the object location and orientation at each time step.
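
    The masking step that both of the preceding records describe (suppressing ghost particles inside the reconstructed object volume before velocity vectors are computed) reduces to the following kind of operation. This is an illustrative sketch, not the authors' reconstruction code, and the hull here is a toy sphere.

```python
# Illustrative sketch (not the authors' code) of the masking step described
# above: zero the reconstructed particle intensity inside the visual-hull
# object volume so ghost particles there cannot contaminate velocity vectors.
import numpy as np

def mask_object_volume(intensity, object_mask):
    """Suppress reconstructed intensity inside the object's visual hull."""
    cleaned = intensity.copy()
    cleaned[object_mask] = 0.0
    return cleaned

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))                   # reconstructed intensities
x, y, z = np.meshgrid(*[np.arange(64)] * 3, indexing="ij")
hull = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 10 ** 2   # toy spherical hull
cleaned = mask_object_volume(volume, hull)
print("voxels masked:", int(hull.sum()), "max intensity inside hull:", cleaned[hull].max())
```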

  13. Characterizing the information content of a newly hatched chick's first visual object representation.

    Science.gov (United States)

    Wood, Justin N

    2015-03-01

    How does object recognition emerge in the newborn brain? To address this question, I examined the information content of the first visual object representation built by newly hatched chicks (Gallus gallus). In their first week of life, chicks were raised in controlled-rearing chambers that contained a single virtual object rotating around a single axis. In their second week of life, I tested whether subjects had encoded information about the identity and viewpoint of the virtual object. The results showed that chicks built object representations that contained both object identity information and view-specific information. However, there was a trade-off between these two types of information: subjects who were more sensitive to identity information were less sensitive to view-specific information, and vice versa. This pattern of results is predicted by iterative, hierarchically organized visual processing machinery, the machinery that supports object recognition in adult primates. More generally, this study shows that invariant object recognition is a core cognitive ability that can be operational at the onset of visual object experience. © 2014 John Wiley & Sons Ltd.

  14. Visual and somatosensory information about object shape control manipulative fingertip forces.

    Science.gov (United States)

    Jenmalm, P; Johansson, R S

    1997-06-01

    We investigated the importance of visual versus somatosensory information for the adaptation of the fingertip forces to object shape when humans used the tips of the right index finger and thumb to lift a test object. The angle of the two flat grip surfaces in relation to the vertical plane was changed between trials from -40 to 30 degrees. At 0 degrees the two surfaces were parallel, and at positive and negative angles the object tapered upward and downward, respectively. Subjects automatically adapted the balance between the horizontal grip force and the vertical lift force to the object shape and thereby maintained a rather constant safety margin against frictional slips, despite the huge variation in finger force requirements. Subjects used visual cues to adapt force to object shape parametrically in anticipation of the force requirements imposed once the object was contacted. In the absence of somatosensory information from the digits, sighted subjects still adapted the force coordination to object shape, but without vision and somatosensory inputs the performance was severely impaired. With normal digital sensibility, subjects adapted the force coordination to object shape even without vision. Shape cues obtained by somatosensory mechanisms were expressed in the motor output about 0.1 sec after contact. Before this point in time, memory of force coordination used in the previous trial controlled the force output. We conclude that both visual and somatosensory inputs can be used in conjunction with sensorimotor memories to adapt the force output to object shape automatically for grasp stability.

  15. Contextual effects of scene on the visual perception of object orientation in depth.

    Directory of Open Access Journals (Sweden)

    Ryosuke Niimi

    Full Text Available We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze line or object.

  16. Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text.

    Science.gov (United States)

    Anderson, Andrew James; Bruni, Elia; Lopopolo, Alessandro; Poesio, Massimo; Baroni, Marco

    2015-10-15

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought and how it is encoded in the brain is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode the representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions of visual-specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models and conversely text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both image model and brain data sets to classify embodied visual representations with high accuracy (8/10) and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations. Copyright © 2015 Elsevier Inc. All rights reserved.
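
    The representational similarity analysis step mentioned above can be sketched as follows, assuming the image-model feature vectors and the fMRI voxel patterns are already available as item-by-dimension arrays; this is not the authors' pipeline, and the random arrays only stand in for real data.

```python
# Sketch of representational similarity analysis (RSA) between a model and a
# brain region, assuming features and voxel patterns are given as
# (n_items x n_dims) arrays. Not the authors' pipeline; data are stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

def rsa_score(model_features, brain_patterns):
    """Spearman correlation between the model RDM and the brain RDM."""
    rho, _ = spearmanr(rdm(model_features), rdm(brain_patterns))
    return rho

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    image_model = rng.random((40, 300))    # e.g. image-based semantic vectors
    roi_patterns = rng.random((40, 500))   # e.g. ventral-temporal voxel patterns
    print("model-brain RSA:", rsa_score(image_model, roi_patterns))
```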

  17. Simultaneous object perception deficits are related to reduced visual processing speed in amnestic mild cognitive impairment.

    Science.gov (United States)

    Ruiz-Rizzo, Adriana L; Bublak, Peter; Redel, Petra; Grimmer, Timo; Müller, Hermann J; Sorg, Christian; Finke, Kathrin

    2017-07-01

    Simultanagnosia, an impairment in simultaneous object perception, has been attributed to deficits in visual attention and, specifically, to processing speed. Increasing visual attention deficits manifest over the course of Alzheimer's disease (AD), where the first changes are present already in its symptomatic predementia phase: amnestic mild cognitive impairment (aMCI). In this study, we examined whether patients with aMCI due to AD show simultaneous object perception deficits and whether and how these deficits relate to visual attention. Sixteen AD patients with aMCI and 16 age-, gender-, and education-matched healthy controls were assessed with a simultaneous perception task, with shapes presented in an adjacent, embedded, or overlapping manner, under free viewing without temporal constraints. We used a parametric assessment of visual attention based on the Theory of Visual Attention. Results show that patients make significantly more errors than controls when identifying overlapping shapes, which correlate with reduced processing speed. Our findings suggest simultaneous object perception deficits in very early AD, and a visual processing speed reduction underlying these deficits. Copyright © 2017 Elsevier Inc. All rights reserved.
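
    For reference, the Theory of Visual Attention quantities behind the processing-speed parameter are commonly written as below. This is a hedged sketch of Bundesen's rate equation; the exact parameterization fitted in the study may differ.

```latex
% Hedged sketch of the TVA rate equation commonly used to define visual
% processing speed (the study's exact fitting model may differ).
% v(x,i): rate at which object x is categorized as i; \eta(x,i): sensory
% evidence; \beta_i: decision bias; w_x: attentional weight of object x;
% C: total processing speed across the display S.
\begin{align}
  v(x,i) &= \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z}, &
  C &= \sum_{x \in S}\sum_{i} v(x,i)
\end{align}
```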

  18. Convolutional neural network-based encoding and decoding of visual object recognition in space and time.

    Science.gov (United States)

    Seeliger, K; Fritsche, M; Güçlü, U; Schoenmakers, S; Schoffelen, J-M; Bosch, S E; van Gerven, M A J

    2017-07-16

    Representations learned by deep convolutional neural networks (CNNs) for object recognition are a widely investigated model of the processing hierarchy in the human visual system. Using functional magnetic resonance imaging, CNN representations of visual stimuli have previously been shown to correspond to processing stages in the ventral and dorsal streams of the visual system. Whether this correspondence between models and brain signals also holds for activity acquired at high temporal resolution has been explored less exhaustively. Here, we addressed this question by combining CNN-based encoding models with magnetoencephalography (MEG). Human participants passively viewed 1,000 images of objects while MEG signals were acquired. We modelled their high temporal resolution source-reconstructed cortical activity with CNNs, and observed a feed-forward sweep across the visual hierarchy between 75 and 200 ms after stimulus onset. This spatiotemporal cascade was captured by the network layer representations, where the increasingly abstract stimulus representation in the hierarchical network model was reflected in different parts of the visual cortex, following the visual ventral stream. We further validated the accuracy of our encoding model by decoding stimulus identity in a left-out validation set of viewed objects, achieving state-of-the-art decoding accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
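
    The encoding-model idea described above can be sketched as follows, assuming CNN layer activations for each stimulus have already been extracted (here they are stood in by random arrays) and considering a single source/time point. This is an illustration using sklearn ridge regression, not the authors' pipeline.

```python
# Sketch of a CNN-based encoding model for one source/time point (not the
# authors' pipeline). Layer activations are assumed to be precomputed and are
# stood in by random arrays here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def fit_encoding_model(layer_features, response, alpha=100.0):
    """Ridge map from CNN features to one response; held-out correlation."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        layer_features, response, test_size=0.2, random_state=0)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    return np.corrcoef(pred, y_te)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    feats = rng.random((1000, 256))        # e.g. one CNN layer, 1,000 images
    meg = feats @ rng.random(256) + 0.5 * rng.standard_normal(1000)
    print("held-out correlation:", fit_encoding_model(feats, meg))
```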

  19. Is an objective refraction optimised using the visual Strehl ratio better than a subjective refraction?

    Science.gov (United States)

    Hastings, Gareth D; Marsack, Jason D; Nguyen, Lan Chi; Cheng, Han; Applegate, Raymond A

    2017-05-01

    To prospectively examine whether using the visual image quality metric, visual Strehl (VSX), to optimise objective refraction from wavefront error measurements can provide equivalent or better visual performance than subjective refraction and which refraction is preferred in free viewing. Subjective refractions and wavefront aberrations were measured on 40 visually-normal eyes of 20 subjects, through natural and dilated pupils. For each eye a sphere, cylinder, and axis prescription was also objectively determined that optimised visual image quality (VSX) for the measured wavefront error. High contrast (HC) and low contrast (LC) logMAR visual acuity (VA) and short-term monocular distance vision preference were recorded and compared between the VSX-objective and subjective prescriptions both undilated and dilated. For 36 myopic eyes, clinically equivalent (and not statistically different) HC VA was provided with both the objective and subjective refractions (undilated mean ± S.D. was -0.06 ± 0.04 with both refractions; dilated was -0.05 ± 0.04 with the objective, and -0.05 ± 0.05 with the subjective refraction). LC logMAR VA provided by the objective refraction was also clinically equivalent and not statistically different to that provided by the subjective refraction through both natural and dilated pupils for myopic eyes. In free viewing the objective prescription was preferred over the subjective by 72% of myopic eyes when not dilated. For four habitually undercorrected high hyperopic eyes, the VSX-objective refraction was more positive in spherical power and VA poorer than with the subjective refraction. A method of simultaneously optimising sphere, cylinder, and axis from wavefront error measurements, using the visual image quality metric VSX, is described. In myopic subjects, visual performance, as measured by HC and LC VA, with this VSX-objective refraction was found equivalent to that provided by subjective refraction, and was typically preferred
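
    For reference, the visual Strehl ratio (VSX) mentioned above is commonly given the spatial-domain definition below; the exact computation used in the study may differ.

```latex
% Commonly cited spatial-domain definition of the visual Strehl ratio (VSX);
% the study's exact computation may differ. PSF: point spread function of the
% measured wavefront; PSF_DL: diffraction-limited PSF for the same pupil;
% N(x,y): neural (contrast-sensitivity-based) weighting function.
\begin{equation}
  \mathrm{VSX} \;=\;
  \frac{\iint \mathrm{PSF}(x,y)\, N(x,y)\, dx\, dy}
       {\iint \mathrm{PSF}_{\mathrm{DL}}(x,y)\, N(x,y)\, dx\, dy}
\end{equation}
```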

  20. Use of interactive data visualization in multi-objective forest planning.

    Science.gov (United States)

    Haara, Arto; Pykäläinen, Jouni; Tolvanen, Anne; Kurttila, Mikko

    2018-01-10

    Common to multi-objective forest planning situations is that they all require comparisons, searches and evaluation among decision alternatives. Through these actions, the decision maker can learn from the information presented and thus make well-justified decisions. Interactive data visualization is an evolving approach that supports learning and decision making in multidimensional decision problems and planning processes. Data visualization contributes to the formation of mental images of the data, and this process is further boosted by allowing interaction with the data. In this study, we introduce a multi-objective forest planning decision problem framework and the corresponding characteristics of data. We utilize the framework with example planning data to illustrate and evaluate the potential of 14 interactive data visualization techniques to support multi-objective forest planning decisions. Furthermore, broader utilization possibilities of these techniques to incorporate the provisioning of ecosystem services into forest management and planning are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Visual objects and universal meanings: AIDS posters and the politics of globalisation and history.

    Science.gov (United States)

    Stein, Claudia; Cooter, Roger

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum 'afterlife' as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today's application of aesthetic display for the purpose of making 'global connections' does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics.

  2. Grasping two-dimensional images and three-dimensional objects in visual-form agnosia.

    Science.gov (United States)

    Westwood, David A; Danckert, James; Servos, Philip; Goodale, Melvyn A

    2002-05-01

    Visually guided prehension is controlled by a specialized visuomotor system in the posterior parietal cortex. It is not clear how this system responds to visual stimuli that lack three-dimensional (3D) structure, such as two-dimensional (2D) images of objects. We asked a neurological patient with visual-form agnosia (patient D.F.) to grasp 3D objects and 2D images of the same objects and to estimate their sizes manually. D.F.'s grip aperture was scaled to the sizes of the 2D and 3D target stimuli, but her manual estimates were poorly correlated with object size. Control participants demonstrated appropriate size-scaling in both the grasping and manual size-estimation tasks, but tended to use a smaller peak aperture when reaching to grasp 2D images. We conclude that: (1) the dorsal stream grasping system does not discriminate in a fundamental way between 2D and 3D objects, and (2) neurologically normal participants might adopt a different visuomotor strategy for target objects that are recognized to be ungraspable. These findings are consistent with the view that the dorsal grasping system accesses a pragmatic, spatial representation of the target object, whereas the ventral system accesses a more comprehensive, volumetric description of the object.

  3. Beyond colour perception: auditory-visual synaesthesia induces experiences of geometric objects in specific locations.

    Science.gov (United States)

    Chiou, Rocco; Stelter, Marleen; Rich, Anina N

    2013-06-01

    Our brain constantly integrates signals across different senses. Auditory-visual synaesthesia is an unusual form of cross-modal integration in which sounds evoke involuntary visual experiences. Previous research primarily focuses on synaesthetic colour, but little is known about non-colour synaesthetic visual features. Here we studied a group of synaesthetes for whom sounds elicit consistent visual experiences of coloured 'geometric objects' located at specific spatial locations. Changes in auditory pitch alter the brightness, size, and spatial height of synaesthetic experiences in a systematic manner resembling the cross-modal correspondences of non-synaesthetes, implying synaesthesia may recruit cognitive/neural mechanisms for 'normal' cross-modal processes. To objectively assess the impact of synaesthetic objects on behaviour, we devised a multi-feature cross-modal synaesthetic congruency paradigm and asked participants to perform speeded colour or shape discrimination. We found irrelevant sounds influenced performance, as quantified by congruency effects, demonstrating that synaesthetes were not able to suppress their synaesthetic experiences even when these were irrelevant for the task. Furthermore, we found some evidence for task-specific effects consistent with feature-based attention acting on the constituent features of synaesthetic objects: synaesthetic colours appeared to have a stronger impact on performance than synaesthetic shapes when synaesthetes attended to colour, and vice versa when they attended to shape. We provide the first objective evidence that visual synaesthetic experience can involve multiple features forming object-like percepts and suggest that each feature can be selected by attention despite it being internally generated. These findings suggest theories of the brain mechanisms of synaesthesia need to incorporate a broader neural network underpinning multiple visual features, perceptual knowledge, and feature integration, rather than

  4. Different measures of structural similarity tap different aspects of visual object processing

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    The structural similarity of objects has been an important variable in explaining why some objects are easier to categorize at a superordinate level than to individuate, and also why some patients with brain injury have more difficulties in recognizing natural (structurally similar) objects than artifacts (structurally distinct objects). In spite of its merits as an explanatory variable, structural similarity is not a unitary construct, and it has been operationalized in different ways. Furthermore, even though measures of structural similarity have been successful in explaining task- and category-effects, this has been based more on implication than on direct empirical demonstrations. Here, the direct influence of two different measures of structural similarity, contour overlap and within-item structural diversity, on object individuation (object decision) and superordinate categorization performance

  5. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Na Li

    2016-01-01

    Full Text Available Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. Object classification is a challenging problem that has been receiving extensive interest and has broad prospects. Inspired by neuroscience, the concept of deep learning has been proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, extracting local features from the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. This makes the method more biologically plausible. Experimental results demonstrate that our method improves classification performance significantly.
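
    The attend-then-classify pipeline described above can be sketched as follows. Note that the learning-based saliency model of the paper is replaced here by the simple spectral-residual method, and the CNN classifier is left as a stub, so this is only an illustration of the overall flow.

```python
# Illustration of an attend-then-classify pipeline. The learning-based
# saliency model of the paper is replaced by the spectral-residual method,
# and the CNN stage is a stub to be replaced by any trained classifier.
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_residual_saliency(gray):
    """Coarse bottom-up saliency map (spectral residual method)."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-9)
    residual = log_amp - gaussian_filter(log_amp, sigma=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))) ** 2
    return gaussian_filter(sal, sigma=2)

def most_salient_crop(image, size=32):
    """Crop a size x size window centred on the saliency peak."""
    sal = spectral_residual_saliency(image)
    r, c = np.unravel_index(np.argmax(sal), sal.shape)
    h = size // 2
    r = np.clip(r, h, image.shape[0] - h)
    c = np.clip(c, h, image.shape[1] - h)
    return image[r - h:r + h, c - h:c + h]

def classify_with_cnn(crop):
    """Stub for the CNN stage; plug in any trained classifier here."""
    return "object-class-placeholder"

if __name__ == "__main__":
    img = np.random.rand(128, 128)          # stand-in for a grayscale image
    print(classify_with_cnn(most_salient_crop(img)))
```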

  6. Crossmodal Activation of Visual Object Regions for Auditorily Presented Concrete Words

    Directory of Open Access Journals (Sweden)

    Jasper J F van den Bosch

    2011-10-01

    Full Text Available Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic, code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings (‘car’) as well as words with abstract meanings (‘hope’). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects (compared to words for abstract concepts) and visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized LOC for individual subjects, and a preliminary analysis showed a trend for a concreteness effect in this region of interest at the group level. Appropriate further analyses might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.

  7. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  8. Navon's classical paradigm concerning local and global processing relates systematically to visual object classification performance.

    Science.gov (United States)

    Gerlach, Christian; Poirel, Nicolas

    2018-01-10

    Forty years ago David Navon tried to tackle a central problem in psychology concerning the time course of perceptual processing: Do we first see the details (local level) followed by the overall outlay (global level) or is it rather the other way around? He did this by developing a now classical paradigm involving the presentation of compound stimuli: large letters composed of smaller letters. Despite the usefulness of this paradigm it remains uncertain whether effects found with compound stimuli relate directly to visual object recognition. This is because compound stimuli are not actual objects but rather formations of elements, and because the elements that form the global shape of compound stimuli are not features of the global shape but rather objects in their own right. To examine the relationship between performance on Navon's paradigm and visual object processing we derived two indexes from Navon's paradigm that reflect different aspects of the relationship between global and local processing. We find that individual differences on these indexes can explain a considerable amount of variance in two standard object classification paradigms (object decision and superordinate categorization), suggesting that Navon's paradigm does relate to visual object processing.

  9. A case of impaired shape integration: Implications for models of visual object processing

    DEFF Research Database (Denmark)

    Gerlach, Christian; Marstrand, Lisbeth; Habekost, Thomas

    2005-01-01

    The case of patient HE, reported here, supports the notion that grouping may be divided into two general steps: (i) element clustering and (ii) shape configuration, with the latter operation being impaired in HE. As opposed to previous cases with shape integration deficits, HE was able to name objects accurately. Initially, this might suggest that shape integration is not a prerequisite for normal object naming. However, on more demanding tests of visual object recognition, HE's performance deteriorated, with her performance being inversely related to the demand placed on integration of local elements into more elaborate shape descriptions. From this we conclude that shape integration is important for normal object recognition.

  10. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery

    Science.gov (United States)

    Roldan, Stephanie M.

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation. PMID:28588538

  11. Virtual Exertions: a user interface combining visual information, kinesthetics and biofeedback for virtual object manipulation.

    Science.gov (United States)

    Ponto, Kevin; Kimmel, Ryan; Kohlmann, Joe; Bartholomew, Aaron; Radwin, Robert G

    2012-01-01

    Virtual Reality environments have the ability to present users with rich visual representations of simulated environments. However, means to interact with these types of illusions are generally unnatural in the sense that they do not match the methods humans use to grasp and move objects in the physical world. We demonstrate a system that enables users to interact with virtual objects with natural body movements by combining visual information, kinesthetics and biofeedback from electromyograms (EMG). Our method allows virtual objects to be grasped, moved and dropped through muscle exertion classification based on physical world masses. We show that users can consistently reproduce these calibrated exertions, allowing them to interface with objects in a novel way.
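
    The exertion-classification idea described above reduces, in its simplest form, to comparing a short-window EMG amplitude against levels calibrated for different physical masses. The sketch below is only an illustration of that idea, not the authors' classifier, and all numbers are made up.

```python
# Illustration (not the authors' classifier): compare a short-window RMS of
# the EMG signal against exertion levels calibrated for different virtual
# object masses to decide whether the object counts as "grasped".
import numpy as np

def emg_rms(samples, window=200):
    """Root-mean-square of the most recent EMG window."""
    w = np.asarray(samples[-window:], dtype=float)
    return float(np.sqrt(np.mean(w ** 2)))

def is_grasped(samples, calibrated_threshold):
    """True when current exertion reaches the level calibrated for this mass."""
    return emg_rms(samples) >= calibrated_threshold

# Hypothetical calibration: RMS levels recorded while holding real masses.
thresholds = {"mug (0.3 kg)": 0.12, "book (1.0 kg)": 0.25}
signal = 0.3 * np.abs(np.random.randn(1000))     # stand-in for rectified EMG
print({obj: is_grasped(signal, th) for obj, th in thresholds.items()})
```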

  12. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives: a field perspective, in which individuals experience the memory through their own eyes, or an observer perspective, in which individuals experience the memory from the viewpoint of an observer, such that they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  13. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

    Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for a progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.
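
    The progressive-LOD idea behind the BuildingTree structure can be sketched as below; the names are illustrative rather than the paper's API, and the CSG generalization and MongoDB storage layer are omitted.

```python
# Illustrative sketch of the progressive-LOD idea (names are mine, not the
# paper's API). Each node holds several LODs of a building or building group;
# a query returns the finest LOD whose distance threshold covers the viewer.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BuildingNode:
    name: str
    lods: List[dict]                      # [{"max_distance": metres, "mesh": ...}, ...]
    children: List["BuildingNode"] = field(default_factory=list)

    def select_lod(self, viewer_distance: float) -> dict:
        """Finest LOD whose max_distance covers the current viewer distance."""
        candidates = [l for l in self.lods if viewer_distance <= l["max_distance"]]
        if not candidates:
            return self.lods[-1]          # fall back to the coarsest level
        return min(candidates, key=lambda l: l["max_distance"])

block = BuildingNode(
    name="block-7",
    lods=[{"max_distance": 200.0, "mesh": "block7_fine"},
          {"max_distance": 1000.0, "mesh": "block7_coarse"}])
print(block.select_lod(viewer_distance=350.0)["mesh"])    # -> block7_coarse
```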

  14. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the ideas of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows how the approach is used to detect human poses in a project on gait analysis. Activity detection is presented in the context of designing environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  15. A Transient Auditory Signal Shifts the Perceived Offset Position of a Moving Visual Object

    Directory of Open Access Journals (Sweden)

    Sung-En Chien

    2013-02-01

    Full Text Available Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.

  16. The strategic retention of task-relevant objects in visual working memory.

    Science.gov (United States)

    Maxcey-Richard, Ashleigh M; Hollingworth, Andrew

    2013-05-01

    The serial and spatially extended nature of many real-world visual tasks suggests the need for control over the content of visual working memory (VWM). We examined the management of VWM in a task that required participants to prioritize individual objects for retention during scene viewing. There were 5 principal findings: (a) Strategic retention of task-relevant objects was effective and was dissociable from the current locus of visual attention; (b) strategic retention was implemented by protection from interference rather than by preferential encoding; (c) this prioritization was flexibly transferred to a new object as task demands changed; (d) no-longer-relevant items were efficiently eliminated from VWM; and (e) despite this level of control, attended and fixated objects were consolidated into VWM regardless of task relevance. These results are consistent with a model of VWM control in which each fixated object is automatically encoded into VWM, replacing a portion of the content in VWM. However, task-relevant objects can be selectively protected from replacement.

  17. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
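
    As a rough illustration of the idea of fusing a bottom-up saliency map with top-down context, the NumPy sketch below normalizes and blends two maps and scores candidate object masks by their mean combined saliency. The blending weight alpha and the synthetic maps are assumptions, not the cited framework's actual computation.

      # Illustrative sketch (not the cited implementation): blend a bottom-up
      # saliency map with a top-down context map, then rank candidate objects
      # by the mean combined saliency inside their masks.
      import numpy as np

      def normalize(m):
          m = m.astype(float)
          rng = m.max() - m.min()
          return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

      def attended_object(bottom_up, top_down, object_masks, alpha=0.6):
          """alpha weights stimulus-driven vs. goal-directed evidence (assumed value)."""
          combined = alpha * normalize(bottom_up) + (1 - alpha) * normalize(top_down)
          scores = {name: float(combined[mask].mean()) for name, mask in object_masks.items()}
          return max(scores, key=scores.get), scores

      if __name__ == "__main__":
          h, w = 64, 64
          bottom_up = np.random.rand(h, w)      # stand-in for an image-based saliency map
          top_down = np.zeros((h, w))
          top_down[20:40, 20:40] = 1.0          # e.g. region consistent with the user's heading
          masks = {"lamp": np.zeros((h, w), bool), "chair": np.zeros((h, w), bool)}
          masks["lamp"][25:35, 25:35] = True
          masks["chair"][5:15, 50:60] = True
          best, scores = attended_object(bottom_up, top_down, masks)
          print("most plausibly attended:", best, scores)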

  18. An Objective Measurement System To Assess Employment Outcomes for People Who Are Visually Impaired.

    Science.gov (United States)

    Becker, H. E., Jr.

    1998-01-01

    Proposes the following empowerment objectives be used for placing individuals with visual impairments in jobs: compensation and benefits package, proximity to community, ethics and respect for human dignity, self-fulfilling opportunities, physical and emotional safety, sense of belonging, and opportunities for personal improvement and upward…

  19. Visual Short-Term Memory Capacity for Simple and Complex Objects

    Science.gov (United States)

    Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto

    2010-01-01

    Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…

  20. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills with Executive Function and Social Behavior

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M.; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-01-01

    Purpose: The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom during the preschool year. Method: Ninety-two children aged 3 to 5 years old (M_age = 4.31 years) were…

  1. Visualization: A Tool for Enhancing Students' Concept Images of Basic Object-Oriented Concepts

    Science.gov (United States)

    Cetin, Ibrahim

    2013-01-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey…

  2. Role of early visual cortex in trans-saccadic memory of object features.

    Science.gov (United States)

    Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas

    2015-08-01

    Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.

  3. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    Science.gov (United States)

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.

  4. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    Full Text Available The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
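
    The cited work proposes an extension of kernel analysis; that method is not reproduced here. The sketch below only illustrates a simpler, commonly used proxy for comparing two representations: cross-validated linear decoding accuracy on the same category labels, computed on synthetic stand-ins for "IT-like" and "DNN-like" features.

      # Synthetic illustration only: compare two feature spaces by how well a
      # cross-validated linear decoder generalizes to held-out items. This is a
      # common proxy, not the kernel-analysis extension proposed in the paper.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      def decoding_accuracy(features, labels, folds=5):
          clf = LogisticRegression(max_iter=1000)
          return cross_val_score(clf, features, labels, cv=folds).mean()

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          labels = np.repeat(np.arange(8), 40)                     # 8 "categories", 40 items each
          signal = rng.normal(size=(8, 100))[labels]               # category-specific pattern
          it_like = signal + 6.0 * rng.normal(size=signal.shape)   # noisier representation
          dnn_like = signal + 3.0 * rng.normal(size=signal.shape)  # cleaner representation
          print("IT-like accuracy :", decoding_accuracy(it_like, labels))
          print("DNN-like accuracy:", decoding_accuracy(dnn_like, labels))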

  5. Incidental auditory category learning.

    Science.gov (United States)

    Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L

    2015-08-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved).

  6. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    Science.gov (United States)

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274

  7. Real-world spatial regularities affect visual working memory for objects.

    Science.gov (United States)

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V

    2015-12-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic stimuli. An important aspect of real-world scenes is that they contain a high degree of regularity: For instance, lamps appear above tables, not below them. In the present study, we tested whether such real-world spatial regularities affect working memory capacity for individual objects. Using a delayed change-detection task with concurrent verbal suppression, we found enhanced visual working memory performance for objects positioned according to real-world regularities, as compared to irregularly positioned objects. This effect was specific to upright stimuli, indicating that it did not reflect low-level grouping, because low-level grouping would be expected to equally affect memory for upright and inverted displays. These results suggest that objects can be held in visual working memory more efficiently when they are positioned according to frequently experienced real-world regularities. We interpret this effect as the grouping of single objects into larger representational units.

  8. Objective and subjective visual performance of multifocal contact lenses: pilot study.

    Science.gov (United States)

    Vasudevan, Balamurali; Flores, Michael; Gaib, Sara

    2014-06-01

    The aim of the present study was to compare the objective and subjective visual performance of three different soft multifocal contact lenses. 10 subjects (habitual soft contact lens wearers) between the ages of 40 and 45 years participated in the study. Three different multifocal silicone hydrogel contact lenses (Acuvue Oasys, Air Optix and Biofinity) were fit within the same visit. All the lenses were fit according to the manufacturers' recommendation using the respective fitting guide. Visual performance tests included low and high contrast distance and near visual acuity, contrast sensitivity, range of clear vision and through-focus curve. Objective visual performance tests included measurement of open field accommodative response at different defocus levels and optical aberrations at different viewing distances. Accommodative response was not significantly different between the three types of multifocal contact lenses at each of the accommodative stimulus levels (p>0.05). Accommodative lag increased for higher stimulus levels for all 3 types of contact lenses. Ocular aberrations were not significantly different between these 3 contact lens designs at each of the different viewing distances (p>0.05). In addition, optical aberrations did not significantly differ between different viewing distances for any of these lenses (p>0.05). ANOVA revealed no significant difference in high and low contrast distance visual acuity as well as near visual acuity and contrast sensitivity function between the 3 multifocal contact lenses and spectacles (p>0.05). There was no statistically significant difference in accommodative response, optical aberrations or visual performance between the 3 multifocal contact lenses in early presbyopes. Copyright © 2013 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  9. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills With Executive Function and Social Behavior.

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-12-01

    The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom during the preschool year. Ninety-two children aged 3 to 5 years old (Mage = 4.31 years) were recruited to participate. Comprehensive measures of visual-motor integration skills, object manipulation skills, executive function, and social behaviors were administered in the fall and spring of the preschool year. Our findings indicated that children who had better visual-motor integration skills in the fall had better executive function scores (B = 0.47 [0.20]), and children who had better object manipulation skills in the fall showed significantly stronger social behavior in their classrooms (as rated by teachers) in the spring, including more self-control (B = 0.03 [0.00]), controlling for social behavior in the fall and other covariates. Children's visual-motor integration and object manipulation skills in the fall have modest to moderate relations with executive function and social behaviors later in the preschool year. These findings have implications for early learning initiatives and school readiness.

  10. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No

  11. ["Associative" visual agnosia for objects, pictures, faces and letters with altitudinal hemianopia].

    Science.gov (United States)

    Suzuki, K; Nomura, H; Yamadori, A; Nakasato, N; Takase, S

    1997-01-01

    We report a 63-year-old right-handed man with associative visual agnosia and bilateral altitudinal hemianopia. Neurological examination revealed fair visual acuity and normal ocular movement. Other cranial-nerve, motor, sensory, and autonomic functions were normal. Brain MRI showed multiple infarctions involving the right fusiform and lingual gyri extending to the adjacent white matter of the occipito-temporal lobes and posterior part of the parahippocampus, the left fusiform and lingual gyri, and multiple lacunae in bilateral basal ganglia. Cerebral angiography demonstrated occlusion at the P1 portions of bilateral posterior cerebral arteries. 123I IMP-SPECT revealed decreased perfusion in bilateral occipital lobes, worse on the right. Visual evoked fields showed a normal pattern of P100m over bilateral occipital lobes. Neuropsychologically, he was alert and oriented in place. In WAIS-R, he could not perform any of the performance subtests, while his VIQ was 72. His verbal and visual memory was impaired. His visual perception of forms seemed to be almost preserved. He could copy a simple drawing precisely, although he could not recognize the drawing just copied. He could match pictures, letters and photographs of faces. His visual identification of forms, on the other hand, was severely disturbed. He could identify only simple geometrical figures, but not simple drawings such as an apple or the face of his daughter. Reading Kanji was impaired and he read Kana in a letter-by-letter manner. Tactile identification of objects was much better than visual identification. Naming objects from verbal description was well preserved. Drawing fruits or cars from memory was intact. These data suggest that the present case had fairly good visual perception as was demonstrated by good copying and matching performance, and the case could be classified into the associative type of visual agnosia if the dichotomized classification of apperceptive and associative type is employed. However, closer

  12. Learning and retrieving holistic and componential visual-verbal associations in reading and object naming.

    Science.gov (United States)

    Quinn, Connor; Taylor, J S H; Davis, Matthew H

    2017-04-01

    Understanding the neural processes that underlie learning to read can provide a scientific foundation for literacy education but studying these processes in real-world contexts remains challenging. We present behavioural data from adult participants learning to read artificial words and name artificial objects over two days. Learning profiles and generalisation confirmed that componential learning of visual-verbal associations distinguishes reading from object naming. Functional MRI data collected on the second day allowed us to identify the neural systems that support componential reading as distinct from systems supporting holistic visual-verbal associations in object naming. Results showed increased activation in posterior ventral occipitotemporal (vOT), parietal, and frontal cortices when reading an artificial orthography compared to naming artificial objects, and the reverse profile in anterior vOT regions. However, activation differences between trained and untrained words were absent, suggesting a lack of cortical representations for whole words. Despite this, hippocampal responses provided some evidence for overnight consolidation of both words and objects learned on day 1. The comparison between neural activity for artificial words and objects showed extensive overlap with systems differentially engaged for real object naming and English word/pseudoword reading in the same participants. These findings therefore provide evidence that artificial learning paradigms offer an alternative method for studying the neural systems supporting language and literacy. Implications for literacy acquisition are discussed. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained by the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets, and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot which has a hand manipulator. The robot grasps an object like a bell and shakes it, or grasps an object like a stick and beats a drum, in either periodic or non-periodic motion. The object then emits periodic or non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we had a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) that relate to robot motion (efferent signal).
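
    The “simultaneity” grouping cue described above can be illustrated with a small scoring function: each candidate source is scored by how many detected sound onsets have a motion onset within a tolerance window. The tolerance and scoring rule below are assumptions made for this sketch, not the paper's exact procedure.

      # Illustrative scoring of the "simultaneity" cue: the fraction of sound
      # onsets that have a motion onset within a tolerance window.
      import numpy as np

      def simultaneity_score(sound_onsets, motion_onsets, tol=0.05):
          sound = np.asarray(sound_onsets, float)
          motion = np.asarray(motion_onsets, float)
          if sound.size == 0 or motion.size == 0:
              return 0.0
          # distance from each sound onset to its nearest motion onset
          nearest = np.min(np.abs(sound[:, None] - motion[None, :]), axis=1)
          return float(np.mean(nearest <= tol))

      if __name__ == "__main__":
          sound_onsets = [0.50, 1.01, 1.52, 2.03]      # detected sound onsets (seconds)
          bell_motion = [0.49, 1.00, 1.50, 2.02]       # manipulated object (efferent-related)
          metronome = [0.30, 0.80, 1.30, 1.80, 2.30]   # distractor event source
          print("bell     :", simultaneity_score(sound_onsets, bell_motion))
          print("metronome:", simultaneity_score(sound_onsets, metronome))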

  14. Objectively measured physical activity in Brazilians with visual impairment: description and associated factors.

    Science.gov (United States)

    Barbosa Porcellis da Silva, Rafael; Marques, Alexandre Carriconde; Reichert, Felipe Fossati

    2017-05-19

    A low level of physical activity is a serious health issue in individuals with visual impairment. Few studies have objectively measured physical activity in this population group, particularly outside high-income countries. The aim of this study was to describe physical activity measured by accelerometry and its associated factors in Brazilian adults with visual impairment. In a cross-sectional design, 90 adults (18-95 years old) answered a questionnaire and wore an accelerometer for at least 3 days (including one weekend day) to measure physical activity (min/day). Sixty percent of the individuals practiced at least 30 min/day of moderate-to-vigorous physical activity. Individuals who were blind were less active, spent more time in sedentary activities and spent less time in moderate and vigorous activities than those with low vision. Individuals who walked mainly without any assistance were more active, spent less time in sedentary activities and spent more time in light and moderate activities than those who walked with a long cane or sighted guide. Our data highlight factors associated with lower levels of physical activity in people with visual impairment. These factors, such as being blind and walking without assistance, should be tackled in interventions to increase physical activity levels among individuals with visual impairment. Implications for Rehabilitation: Physical inactivity worldwide is a serious health issue in people with visual impairments, and specialized institutions and public policies must work to increase the physical activity level of this population. Those with lower visual acuity and those walking with any aid are at a higher risk of having low levels of physical activity. The association between visual response profile, living for less than 11 years with visual impairment, and PA levels deserves further investigation. Findings of the present study provide reliable data to support rehabilitation programs, observing the need to pay special attention to

  15. Selective use of visual information signaling objects' center of mass for anticipatory control of manipulative fingertip forces.

    Science.gov (United States)

    Salimi, Iran; Frazier, Wendy; Reilmann, Ralf; Gordon, Andrew M

    2003-05-01

    The present study examines whether visual information indicating the center of mass (CM) of an object can be used for the appropriate scaling of fingertip forces at each digit during precision grip. In separate experiments subjects lifted an object with various types of visual cues concerning the CM location several times and then rotated and lifted it again to determine whether the visual cues signaling the new location of the CM could be used to appropriately scale the fingertip forces. Specifically, subjects had either no visual cues, visual instructional cues (i.e., an indicator) or visual geometric cues where the longer axis of the object indicated the CM. When no visual cues were provided, subjects were unable to appropriately scale the load forces at each digit following rotation despite their knowledge of the new weight distribution. When visual cues regarding the CM location were provided, the nature of the visual cues determined their effectiveness in retrieval of internal representations underlying the anticipatory scaling of fingertip forces. Specifically, when subjects were provided with visual instructional information, they were unable to appropriately scale the forces. More appropriate scaling of the load forces occurred when the visual cues were ecologically meaningful, i.e., when the shape of the object indicated the CM location. We suggest that visual instructional cues do not have access to the implicit processes underlying dynamic force control, whereas visual geometric cues can be used for the retrieval of the internal representation related to CM for appropriate partitioning of the forces in each digit.

  16. A FragTrack algorithm enhancement for total occlusion management in visual object tracking

    Science.gov (United States)

    Adamo, F.; Mazzeo, P. L.; Spagnolo, P.; Distante, C.

    2015-05-01

    In recent years, "FragTrack" has become one of the most cited real-time algorithms for visual tracking of an object in a video sequence. However, this algorithm fails when the object model is not present in the image or is completely occluded, and in long-term video sequences. In these sequences, the target object's appearance changes considerably over time, and comparing it with the template established in the first frame becomes difficult. In this work we introduce improvements to the original FragTrack: the management of total object occlusions and the update of the object template. Basically, we use a voting map generated by a non-parametric kernel density estimation strategy that allows us to compute a probability distribution over the histogram distances between template and object patches. In order to automatically determine whether the target object is present or not in the current frame, an adaptive threshold is introduced. A Bayesian classifier establishes, frame by frame, the presence of the template object in the current frame. The template is partially updated at every frame. We tested the algorithm on well-known benchmark sequences, in which the object is always present, and on video sequences showing total occlusion of the target object to demonstrate the effectiveness of the proposed method.
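
    The occlusion-handling idea can be sketched as follows: fit a kernel density estimate to the template-to-patch histogram distances observed while the target was visible, derive an adaptive threshold from it, and declare the target absent when the current best-match distance becomes too improbable. The quantile used for the threshold is an assumption, and the full Bayesian classifier of the paper is not reproduced here.

      # Simplified sketch of the occlusion test: model the distribution of
      # best-match histogram distances seen while the target was visible with a
      # kernel density estimate, and flag frames whose distance falls in a
      # low-density region. The 5% quantile rule is an assumption.
      import numpy as np
      from scipy.stats import gaussian_kde

      class PresenceDetector:
          def __init__(self, min_history=10, quantile=0.05):
              self.history = []          # distances from frames where the target was visible
              self.min_history = min_history
              self.quantile = quantile

          def update_visible(self, distance):
              self.history.append(float(distance))

          def target_present(self, distance):
              if len(self.history) < self.min_history:
                  return True            # not enough evidence yet; assume present
              kde = gaussian_kde(self.history)
              # adaptive threshold: a low quantile of the densities of observed distances
              threshold = np.quantile(kde(np.asarray(self.history)), self.quantile)
              return kde(distance)[0] >= threshold

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          det = PresenceDetector()
          for d in rng.normal(0.2, 0.03, 50):   # distances while the target was visible
              det.update_visible(d)
          print("d=0.22 -> present?", det.target_present(0.22))
          print("d=0.80 -> present?", det.target_present(0.80))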

  17. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  18. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  19. Comparison of visual sensitivity to human and object motion in autism spectrum disorder.

    Science.gov (United States)

    Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie

    2010-08-01

    Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.

  20. Activity Limitation in Glaucoma: Objective Assessment by the Cambridge Glaucoma Visual Function Test.

    Science.gov (United States)

    Skalicky, Simon E; McAlinden, Colm; Khatib, Tasneem; Anthony, Louise May; Sim, Sing Yue; Martin, Keith R; Goldberg, Ivan; McCluskey, Peter

    2016-11-01

    We design and evaluate a computer-based objective simulation of activity limitation related to glaucoma. A cross-sectional study was performed involving 70 glaucoma patients and 14 controls. Mean age was 69.0 ± 10.2 years; 49 (58.3%) were male. The Cambridge Glaucoma Visual Function Test (CGVFT) was administered to all participants. Rasch analysis and criterion, convergent, and divergent validity tests assessed the psychometric properties of the CGVFT. Regression modeling was used to determine factors predictive of CGVFT person measures. Sociodemographic information, better and worse eye visual field parameters, visual acuity, contrast sensitivity, and the Rasch-analyzed Glaucoma Activity Limitation-9 (GAL-9) and Visual Function Questionnaire Utility Index (VFQUI) questionnaire responses were recorded. From 139 pilot CGVFT items, 59 had acceptable fit to the Rasch model, with acceptable precision (person separation index, 2.13) and targeting. Cambridge Glaucoma Visual Function Test person measure (logit) scores increased between controls (-0.20 ± 0.08) and patients with mild (-0.15 ± 0.08), moderate (-0.13 ± 0.10), and severe (-0.05 ± 0.10) glaucoma. This is the first such test administered to a cohort of glaucoma patients. It may benefit glaucoma patients, carers, health care providers, and policy makers, providing increased awareness of activity limitation due to glaucoma.

  1. Retrospective cues based on object features improve visual working memory performance in older adults.

    Science.gov (United States)

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  2. Swarming visual sensor network for real-time multiple object tracking

    Science.gov (United States)

    Baranov, Yuri P.; Yarishev, Sergey N.; Medvedev, Roman V.

    2016-04-01

    Position control of multiple objects is one of the most pressing problems in various technology areas. For example, in construction this problem appears as multi-point deformation control of bearing constructions in order to prevent collapse; in mining, as deformation control of lining constructions; in rescue operations, as locating potential victims and sources of ignition; in transport, as traffic control and detection of traffic violations; in robotics, as traffic control for an organized group of robots; and as many other problems in different areas. The use of stationary devices for solving these problems is inappropriate due to the complex and variable geometry of the control areas. In these cases, self-organized systems of moving visual sensors are the best solution. This paper presents a concept of a scalable visual sensor network with swarm architecture for multiple object pose estimation and real-time tracking. In this article, recent developments in distributed measuring systems were reviewed, with a consequent investigation of the advantages and disadvantages of existing systems, whereupon the theoretical principles of the design of a swarming visual sensor network (SVSN) were formulated. To measure object coordinates in the world coordinate system using a TV camera, intrinsic (focal length, pixel size, principal point position, distortion) and extrinsic (rotation matrix, translation vector) calibration parameters need to be determined. Robust camera calibration was too resource-intensive a task for a moving camera. In this situation, the position of the camera is usually estimated using a visual mark with known parameters, and all measurements were performed in mark-centered coordinate systems. In this article, a general adaptive algorithm for coordinate conversion between devices with various intrinsic parameters was developed. Various network topologies were reviewed. Minimum error in object tracking was realized by finding the shortest path between the object of tracking and the bearing sensor, which set
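
    The mark-centered measurement scheme mentioned above relies on a standard rigid-body coordinate conversion, sketched below in NumPy: a point measured relative to a visual mark is mapped into the world frame given the mark's known pose (rotation R and translation t). This is generic geometry, not the SVSN-specific adaptive algorithm from the record.

      # Generic rigid-body conversion: a point measured in a mark-centered frame
      # is mapped into the world frame using the mark's known pose (R, t).
      import numpy as np

      def rotation_z(yaw_rad):
          c, s = np.cos(yaw_rad), np.sin(yaw_rad)
          return np.array([[c, -s, 0.0],
                           [s,  c, 0.0],
                           [0.0, 0.0, 1.0]])

      def mark_to_world(p_mark, R_mark, t_mark):
          """p_world = R_mark @ p_mark + t_mark."""
          return R_mark @ np.asarray(p_mark, float) + np.asarray(t_mark, float)

      if __name__ == "__main__":
          R = rotation_z(np.deg2rad(30.0))     # mark rotated 30 degrees about the world Z axis
          t = np.array([2.0, 1.0, 0.5])        # mark origin expressed in world coordinates
          p_mark = np.array([0.1, 0.0, 0.0])   # tracked point measured relative to the mark
          print("world coordinates:", mark_to_world(p_mark, R, t))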

  3. Short-term storage capacity for visual objects depends on expertise

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik; Kyllingsbæk, Søren

    2012-01-01

    Visual short-term memory (VSTM) has traditionally been thought to have a very limited capacity of around 3–4 objects. However, recently several researchers have argued that VSTM may be limited in the amount of information retained rather than by a specific number of objects. Here we present a study...... of the effect of long-term practice on VSTM capacity. We investigated four age groups ranging from pre-school children to adults and measured the change in VSTM capacity for letters and pictures. We found a clear increase in VSTM capacity for letters with age but not for pictures. Our results indicate that VSTM...

  4. Sex differences in visual realism in drawings of animate and inanimate objects.

    Science.gov (United States)

    Lange-Küttner, Chris

    2011-10-01

    Sex differences in a visually realistic drawing style were examined using the model of a curvy cup as an inanimate object, and the Draw-A-Person test (DAP) as a task involving animate objects, with 7- to 12-year-old children (N = 60; 30 boys). Accurately drawing the internal detail of the cup--indicating interest in a depth feature--was not dependent on age in boys, but only in girls, as 7-year-old boys were already engaging with this cup feature. However, the age effect of the correct omission of an occluded handle--indicating a transition from realism in terms of function (intellectual realism) to one of appearance (visual realism)--was the same for both sexes. The correct omission of the occluded handle was correlated with bilingualism and drawing the internal cup detail in girls, but with drawing the silhouette contour of the cup in boys. Because a figure's silhouette enables object identification from a distance, while perception of detail and language occurs in nearer space, it was concluded that boys and girls may differ in the way they conceptualize depth in pictorial space, rather than in visual realism as such.

  5. Visual Objects and Universal Meanings: AIDS Posters and the Politics of Globalisation and History

    Science.gov (United States)

    STEIN, CLAUDIA; COOTER, ROGER

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum ‘afterlife’ as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today’s application of aesthetic display for the purpose of making ‘global connections’ does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics. PMID:23752866

  6. Adhesive Categories

    DEFF Research Database (Denmark)

    Lack, Stephen; Sobocinski, Pawel

    2003-01-01

    We introduce adhesive categories, which are categories with structure ensuring that pushouts along monomorphisms are well-behaved. Many types of graphical structures used in computer science are shown to be examples of adhesive categories. Double-pushout graph rewriting generalises well...... to rewriting on arbitrary adhesive categories....

  7. Adhesive Categories

    DEFF Research Database (Denmark)

    Lack, Stephen; Sobocinski, Pawel

    2004-01-01

    We introduce adhesive categories, which are categories with structure ensuring that pushouts along monomorphisms are well-behaved. Many types of graphical structures used in computer science are shown to be examples of adhesive categories. Double-pushout graph rewriting generalises well...... to rewriting on arbitrary adhesive categories....

  8. The life-span trajectory of visual perception of 3D objects.

    Science.gov (United States)

    Freud, Erez; Behrmann, Marlene

    2017-09-08

    Deriving a 3D structural representation of an object from its 2D input is one of the great challenges for the visual system and yet, this type of representation is critical for the successful recognition of and interaction with objects. Perhaps reflecting the importance of this computation, infants have some sensitivity to 3D structural information, and this sensitivity is, at least, partially preserved in the elderly population. To map precisely the life-span trajectory of this key visual computation, in a series of experiments, we compared the performance of observers from ages 4 to 86 years on displays of objects that either obey or violate possible 3D structure. The major findings indicate that the ability to derive fine-grained 3D object representations emerges after a prolonged developmental trajectory and is contingent on the explicit processing of depth information even in late childhood. In contrast, the sensitivity to object 3D structure remains stable even through late adulthood despite the overall reduction in perceptual competence. Together, these results uncover the developmental process of an important perceptual skill, revealing that the initial, coarse sensitivity to 3D information is refined, automatized and retained over the lifespan.

  9. Neural mechanisms of repetition priming of familiar and globally unfamiliar visual objects.

    Science.gov (United States)

    Soldan, Anja; Habeck, Christian; Gazes, Yunglin; Stern, Yaakov

    2010-07-09

    Functional magnetic resonance imaging (fMRI) studies have shown that repetition priming of visual objects is typically accompanied by a reduction in activity for repeated compared to new stimuli (repetition suppression). However, the spatial distribution and direction (suppression vs. enhancement) of neural repetition effects can depend on the pre-experimental familiarity of stimuli. The first goal of this study was to further probe the link between repetition priming and repetition suppression/enhancement for visual objects and how this link is affected by stimulus familiarity. A second goal was to examine whether priming of familiar and unfamiliar objects following a single stimulus repetition is supported by the same processes as priming following multiple repetitions within the same task. In this endeavor, we examined both between and within-subject correlations between priming and fMRI repetition effects for familiar and globally unfamiliar visual objects during the first and third repetitions of the stimuli. We included reaction time of individual trials as a linear regressor to identify brain regions whose repetition effects varied with response facilitation on a trial-by-trial basis. The results showed that repetition suppression in bilateral fusiform gyrus, was selectively correlated with priming of familiar objects that had been repeated once, likely reflecting facilitated perceptual processing or the sharpening of perceptual representations. Priming during the third repetition was correlated with repetition suppression in prefrontal and parietal areas for both familiar and unfamiliar stimuli, possibly reflecting a shift from top-down controlled to more automatic processing that occurs for both item types. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  10. Ways of Seeing Data : Toward a Critical Literacy for Data Visualizations as Research Objects and Research Devices

    NARCIS (Netherlands)

    Gray, J.; Bounegru, L.; Milan, S.; Ciuccarelli, P.; Kubitschko, S.; Kaun, A.

    2016-01-01

    Gray, Bounegru, Milan and Ciuccarelli contribute towards a critical literacy for data visualizations as research objects and devices. The chapter argues for methodological reflexivity around the use of data visualizations in research as both instruments and objects of study. The authors develop a

  11. Ways of Seeing Data: Towards a Critical Literacy for Data Visualizations as Research Objects and Research Devices

    NARCIS (Netherlands)

    Gray, J.; Bounegru, L.; Milan, S.; Ciuccarelli, P.; Kubitschko, S.; Kaun, A.

    2016-01-01

    Gray, Bounegru, Milan and Ciuccarelli contribute towards a critical literacy for data visualizations as research objects and devices. The chapter argues for methodological reflexivity around the use of data visualizations in research as both instruments and objects of study. The authors develop a

  12. Binocular visual tracking and grasping of a moving object with a 3D trajectory predictor

    Directory of Open Access Journals (Sweden)

    J. Fuentes‐Pacheco

    2009-12-01

    Full Text Available This paper presents a binocular eye‐to‐hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client‐server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
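
    A minimal version of the linear trajectory predictor can be sketched as a least-squares, constant-velocity fit over the last few observed 3D positions, extrapolated one step ahead (and fed back in during brief occlusions). The window size and first-order model are assumptions, not the parameters of the cited system.

      # First-order (constant-velocity) least-squares predictor over the last N
      # observed 3D positions; window size and model order are assumptions.
      import numpy as np

      def predict_next(positions, window=8):
          """positions: (M, 3) array of recent 3D positions, oldest first."""
          p = np.asarray(positions, float)[-window:]
          t = np.arange(len(p), dtype=float)
          A = np.stack([t, np.ones_like(t)], axis=1)       # linear-in-time design matrix
          coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)   # one fit per coordinate
          return np.array([len(p), 1.0]) @ coeffs          # extrapolate one step ahead

      if __name__ == "__main__":
          t = np.arange(10)
          traj = np.stack([0.05 * t,                       # x drifting
                           0.02 * t + 0.1,                 # y drifting
                           np.full(10, 0.3)], axis=1)      # constant z
          print("predicted next position:", predict_next(traj))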

  13. Visual hull method for tomographic PIV of flow around moving objects

    Science.gov (United States)

    Adhikari, Deepak; Longmire, Ellen

    2011-11-01

    Measurement of velocity around arbitrarily moving objects is of interest in many applications. This includes flow around marine animals and flying insects, flow around supercavitating projectiles, and flow around discrete drops or particles in multiphase flows. We present a visual hull technique that employs existing tomographic PIV reconstruction software to automate identification, masking and tracking of discrete objects within a three-dimensional volume, while allowing computation and avoiding contamination of the surrounding three-component fluid velocity vectors. The technique is demonstrated by considering flow around falling objects of different shape, namely a sphere, cube, tetrahedron and cylinder. Four high-speed cameras and a laser are used to acquire images of these objects falling within liquid seeded with tracer particles. The acquired image sets are then processed to reconstruct both the object and the surrounding tracer particles. The reconstructed object is used to estimate the object location at each time step and mask the reconstructed particle volume, while the reconstructed tracer particles are cross-correlated with subsequent particle volumes to obtain the fluid velocity vectors. Supported by NSF IDBR Grant #0852875.
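
    The masking step can be illustrated with a minimal voxel-carving visual hull: a voxel is kept only if its projection falls inside the object silhouette in every camera view, and the resulting 3D mask can then exclude contaminated regions from the velocity computation. The projection matrices and silhouettes below are synthetic placeholders, not the calibration of the described experiment.

      # Minimal voxel-carving visual hull: keep voxels whose projections fall
      # inside the silhouette in every view.
      import numpy as np

      def visual_hull(voxels, projections, silhouettes):
          """voxels: (N, 3); projections: 3x4 matrices; silhouettes: 2D boolean arrays."""
          hom = np.hstack([voxels, np.ones((len(voxels), 1))])   # homogeneous coordinates
          inside = np.ones(len(voxels), dtype=bool)
          for P, sil in zip(projections, silhouettes):
              uvw = hom @ P.T
              u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
              v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
              ok = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
              vote = np.zeros(len(voxels), dtype=bool)
              vote[ok] = sil[v[ok], u[ok]]
              inside &= vote
          return inside

      if __name__ == "__main__":
          # toy setup: two orthographic-like views imaging (x, y) and (x, z)
          axes = np.arange(32.0)
          grid = np.stack(np.meshgrid(axes, axes, axes, indexing="ij"), -1).reshape(-1, 3)
          P1 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]], float)
          P2 = np.array([[1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
          sil = np.zeros((32, 32), bool)
          sil[8:24, 8:24] = True
          mask = visual_hull(grid, [P1, P2], [sil, sil])
          print("voxels inside hull:", int(mask.sum()))   # 16 * 16 * 16 = 4096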

  14. Coding of visual object features and feature conjunctions in the human brain.

    Directory of Open Access Journals (Sweden)

    Jasna Martinovic

    Full Text Available Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  15. Determining next best view based on occlusion information in a single depth image of visual object

    Directory of Open Access Journals (Sweden)

    Shihui Zhang

    2016-12-01

    Full Text Available How to determine the camera's next best view is a challenging problem in the vision field. A next best view approach is proposed based on occlusion information in a single depth image. First, occlusion detection is performed on the depth image of the visual object in the current view to obtain the occlusion boundary and the nether adjacent boundary. Second, the external surface of the occluded region is constructed and modeled according to the occlusion boundary and the nether adjacent boundary. Third, the observation direction, observation center point, and area information of the external surface of the occluded region are computed. Then, the set of candidate observation directions and the visual space of each candidate direction are determined. Finally, the next best view is obtained by solving for the next best observation direction and the camera's observation position. The proposed approach requires no prior knowledge of the visual object and does not restrict the camera position to a specially appointed surface. Experimental results demonstrate that the approach is feasible and effective.

  16. A review of functional imaging studies on category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2007-01-01

    such as familiarity and visual complexity. Of the most consistent activations found, none appear to be selective for natural objects or artefacts. The findings reviewed are compatible with theories of category-specificity that assume a widely distributed conceptual system not organized by category....

  17. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by the aim of bringing the results of objective experiments closer to subjective judgments. We believe that image regions with different degrees of visual salience should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of saliency are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
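
The abstract does not report the actual weighting; as a minimal sketch of the saliency-weighted pooling idea, assuming three fixed region weights and precomputed per-region quality values (both hypothetical), the combination could look like this.

```python
import numpy as np

def weighted_quality_score(region_scores, weights=(0.5, 0.3, 0.2)):
    """Combine per-region quality scores into one frame-level score,
    weighting regions by their visual-saliency class.

    region_scores : dict with keys 'strong', 'general', 'weak', each holding a
                    local quality value (e.g., derived from blockiness,
                    zero-crossing and depth features) for that saliency class
    weights       : relative importance of strong/general/weak regions
                    (illustrative values, not taken from the paper)
    """
    w_strong, w_general, w_weak = weights
    return (w_strong * region_scores['strong'] +
            w_general * region_scores['general'] +
            w_weak * region_scores['weak'])

# Example frame: salient regions degraded more than the background
print(weighted_quality_score({'strong': 0.62, 'general': 0.78, 'weak': 0.91}))
```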

  18. The Improved SVM Multi Objects' Identification For the Uncalibrated Visual Servoing

    Directory of Open Access Journals (Sweden)

    Min Wang

    2009-03-01

    Full Text Available For the assembly of multiple micro objects in micromanipulation, the first task is to identify the micro parts. We present an improved support vector machine algorithm that employs invariant-moment-based edge extraction to obtain feature attributes, applies a heuristic attribute reduction algorithm based on the rough-set discernibility matrix, and then uses the support vector machine to identify and classify the targets. Visual servoing is the second task. To avoid the complicated calibration of the camera's intrinsic parameters, we apply an improved Broyden's method to estimate the image Jacobian matrix online, using a Chebyshev polynomial to construct a cost function that approximates the optimal value and yields fast convergence for the online estimation. Finally, a two-DOF visual controller based on a fuzzy adaptive PD control law for micromanipulation is presented. Experiments on the micro-assembly of micro parts under microscopes confirm that the proposed methods are effective and feasible.
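
The paper's improved estimator adds a Chebyshev-polynomial cost function; the classical rank-one Broyden update of the image Jacobian that it builds on can be sketched as follows (variable names and the toy example are illustrative, not the paper's implementation).

```python
import numpy as np

def broyden_update(J, delta_q, delta_s):
    """Classical rank-one Broyden update of an image Jacobian estimate.

    J       : (m, n) current estimate mapping joint increments to
              image-feature increments
    delta_q : (n,) change in joint (or end-effector) coordinates
    delta_s : (m,) observed change in image features
    """
    delta_q = np.asarray(delta_q, dtype=float)
    delta_s = np.asarray(delta_s, dtype=float)
    residual = delta_s - J @ delta_q                  # prediction error
    return J + np.outer(residual, delta_q) / np.dot(delta_q, delta_q)

# Example: 2 image features, 2 joints
J = np.eye(2)
J = broyden_update(J, np.array([0.01, -0.02]), np.array([0.8, -1.1]))
print(J)
```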

  19. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2006-01-01

    and fragmented drawings. We also examined whether fragmentation had different impact on the recognition of natural objects and artefacts and found that recognition of artefacts was more affected by fragmentation than recognition of natural objects. Thus, the usual finding of an advantage for artefacts...... in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...... a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories...

  20. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated repeatedly by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while the integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for the representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of

  1. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and for displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development, by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint to show NetCDF map animations are given.
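
NCWin itself is a Windows COM component with its own interface; purely as an illustration of the kind of NetCDF read/write and per-pixel trend workflow it supports, here is a hedged sketch using the open-source netCDF4 Python package instead (the file name, dimensions, and variable are made up for the example).

```python
import numpy as np
from netCDF4 import Dataset

# Write a small NetCDF file with one gridded time series (illustrative
# workflow; NCWin exposes its own COM interface rather than this API).
with Dataset('example.nc', 'w') as ds:
    ds.createDimension('time', 12)
    ds.createDimension('y', 10)
    ds.createDimension('x', 10)
    var = ds.createVariable('ndvi', 'f4', ('time', 'y', 'x'))
    var[:] = np.random.rand(12, 10, 10)

# Read it back and extract the temporal trend at one map pixel,
# analogous to NCWin's per-pixel trend-curve display.
with Dataset('example.nc') as ds:
    series = ds.variables['ndvi'][:, 5, 5]
    print(series.shape, float(series.mean()))
```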

  2. From objects to landmarks: the function of visual location information in spatial navigation

    Directory of Open Access Journals (Sweden)

    Edgar eChan

    2012-08-01

    Full Text Available Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation.

  3. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.

    Science.gov (United States)

    Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua

    2013-12-01

    This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with the following main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separately for the tracking process and the online learning process, that are realized by employing two particle filters: one on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios in which target objects undergo significant nonplanar pose changes and long-term partial occlusions. Comparisons and evaluations against eight existing state-of-the-art/most relevant manifold/nonmanifold trackers provide further support for the proposed scheme.

  4. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
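
The adaptive covariance descriptor selects its features from a pool before computing the region covariance; as context, a sketch of the underlying (non-adaptive) region covariance computation with a fixed, commonly used feature set is shown below. The feature choice here is an assumption for illustration, not the paper's adaptively selected set.

```python
import numpy as np

def region_covariance(patch):
    """Region covariance descriptor for a grayscale patch.

    Each pixel contributes a feature vector [x, y, I, |dI/dx|, |dI/dy|];
    the descriptor is the covariance matrix of these vectors over the patch.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))      # gradients along rows, cols
    features = np.stack([xs.ravel(), ys.ravel(), patch.ravel().astype(float),
                         np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(features)                        # 5 x 5 symmetric PSD matrix

# Example on a random patch
print(region_covariance(np.random.rand(32, 32)).shape)
```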

  5. Holding an object one is looking at : Kinesthetic information on the object's distance does not improve visual judgments of its size

    NARCIS (Netherlands)

    Brenner, Eli; Van Damme, Wim J.M.; Smeets, Jeroen B.J.

    1997-01-01

    Visual judgments of distance are often inaccurate. Nevertheless, information on distance must be procured if retinal image size is to be used to judge an object's dimensions. In the present study, we examined whether kinesthetic information about an object's distance - based on the posture of the

  6. Visual Event-Related Potentials to Novel Objects Predict Rapid Word Learning Ability in 20-Month-Olds.

    Science.gov (United States)

    Borgström, Kristina; Torkildsen, Janne von Koss; Lindgren, Magnus

    In an event-related potentials (ERP) study, twenty-month-old children (n = 37) were presented with pseudowords to map to novel object referents in five presentations. Quicker attenuation of the visual Negative central component (Nc) to novel objects predicted a larger difference in N400 amplitude between congruous and incongruous presentations of pseudowords at test. Furthermore, better initial recognition of familiar objects (Nc difference between familiar and novel objects) predicted the strength of the N400 incongruity effect to the verbal labels of these real objects. This study presents novel evidence for a link between efficient visual processing of objects and word learning ability.

  7. Onboard Robust Visual Tracking for UAVs Using a Reliable Global-Local Object Model

    Directory of Open Access Journals (Sweden)

    Changhong Fu

    2016-08-01

    Full Text Available In this paper, we present a novel onboard robust visual algorithm for long-term arbitrary 2D and 3D object tracking using a reliable global-local object model for unmanned aerial vehicle (UAV) applications, e.g., autonomously tracking and chasing a moving target. The first main component of this algorithm is a global matching and local tracking approach: the algorithm initially finds feature correspondences using an improved binary descriptor developed for global feature matching, and an iterative Lucas–Kanade optical flow algorithm is employed for local feature tracking. The second main module is an efficient local geometric filter (LGF), which handles outlier feature correspondences based on a new forward-backward pairwise dissimilarity measure, thereby maintaining pairwise geometric consistency. In the proposed LGF module, hierarchical agglomerative clustering, i.e., bottom-up aggregation, is applied using an effective single-link method. The third proposed module is a heuristic local outlier factor (to the best of our knowledge, utilized for the first time to deal with outlier features in a visual tracking application), which further improves the representation of the target object; here, outlier feature detection is formulated as a binary classification problem on the output features of the LGF module. Extensive UAV flight experiments show that the proposed visual tracker achieves real-time frame rates of more than thirty-five frames per second on an i7 processor at 640 × 512 image resolution and compares favorably with the most popular state-of-the-art trackers in terms of robustness, efficiency and accuracy.
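
The local tracking stage uses iterative Lucas–Kanade optical flow, and the LGF module rejects outliers with a forward-backward pairwise dissimilarity measure. A simplified sketch of forward-backward consistency checking with OpenCV's pyramidal LK is given below; this is a standard check, not the paper's exact measure, and the threshold is an illustrative choice.

```python
import cv2
import numpy as np

def track_with_fb_check(prev_gray, next_gray, prev_pts, fb_thresh=1.0):
    """Track points with pyramidal Lucas-Kanade optical flow and keep only
    those whose forward-backward error is below a threshold.

    prev_pts : (N, 1, 2) float32 array of point coordinates in prev_gray
    Returns the surviving points in next_gray and a boolean keep-mask.
    """
    next_pts, st_f, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                 prev_pts, None)
    back_pts, st_b, _ = cv2.calcOpticalFlowPyrLK(next_gray, prev_gray,
                                                 next_pts, None)
    fb_err = np.linalg.norm(prev_pts - back_pts, axis=2).ravel()
    good = (st_f.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
    return next_pts[good], good
```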

  8. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    Science.gov (United States)

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

    Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because wearers are still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods that convey more useful visual information to them. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. Grabcut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under the bad segmentation condition, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are expected to inform the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  9. The difference in subjective and objective complexity in the visual short-term memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Sørensen, Thomas Alrik

    Several studies discuss the influence of complexity on visual short-term memory; some have demonstrated that short-term memory is surprisingly stable regardless of content (e.g. Luck & Vogel, 1997) whereas others have shown that memory can be influenced by the complexity of the stimulus (e.g. Alvarez... of expertise (e.g. Dall, et al., 2016). We will present a paradigm testing the proposed distinction using specific isolation of attentional components (see Bundesen, 1990; Sørensen, Vangkilde, & Bundesen, 2015). We propose that objective complexity can be manipulated through the number of strokes in Chinese...

  10. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

    Directory of Open Access Journals (Sweden)

    Ruxandra Tapu

    2017-10-01

    Full Text Available In this paper, we introduce the so-called DEEP-SEE framework, which jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize, in real time, objects encountered during navigation in the outdoor environment. A first feature concerns an object detection technique designed to localize both static and dynamic objects without any a priori knowledge about their position, type or shape. The methodological core of the proposed approach relies on a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking using motion information and predicting the object location in time based on visual similarity. The validation of the tracking technique is performed on standard benchmark VOT datasets, and shows that the proposed approach returns state-of-the-art results while minimizing the computational complexity. Then, the DEEP-SEE framework is integrated into a novel assistive device, designed to improve the cognition of visually impaired (VI) people and to increase their safety when navigating in crowded urban scenes. The validation of our assistive device is performed on a video dataset with 30 elements acquired with the help of VI users. The proposed system shows high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.

  11. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    Full Text Available The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide the available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. So the paper gives some basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multiwindow interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resultant simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of simulation of moving charged particles in the electrostatic field. The simulation model possesses the necessary visualization and control features such as an interactive input, real time graphical and text output, start, stop, and rate control.
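
The case study simulates charged particles moving in an electrostatic field; a hedged sketch of the numerical core of such a model (explicit Euler stepping, shown here in Python rather than the paper's C++ framework, with a made-up field and parameters) is:

```python
import numpy as np

def simulate_charged_particles(pos, vel, q_over_m, E_field, dt, steps):
    """Explicit-Euler time stepping of charged particles in a static
    electric field (the paper's C++ framework wraps a solver of this kind
    in an interactive Win32 GUI with real-time graphical output).

    pos, vel : (N, 2) arrays of positions and velocities
    q_over_m : (N,) array of charge-to-mass ratios
    E_field  : callable, E_field(pos) -> (N, 2) field vectors
    """
    trajectory = [pos.copy()]
    for _ in range(steps):
        acc = q_over_m[:, None] * E_field(pos)   # a = (q/m) * E
        vel = vel + acc * dt
        pos = pos + vel * dt
        trajectory.append(pos.copy())
    return np.array(trajectory)

# Example: uniform field pointing along +x
uniform_E = lambda p: np.tile([1.0, 0.0], (p.shape[0], 1))
traj = simulate_charged_particles(np.zeros((3, 2)), np.zeros((3, 2)),
                                  np.array([1.0, 2.0, -1.0]), uniform_E,
                                  dt=0.01, steps=100)
print(traj.shape)  # (steps + 1, N, 2)
```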

  12. ROCIT : a visual object recognition algorithm based on a rank-order coding scheme.

    Energy Technology Data Exchange (ETDEWEB)

    Gonzales, Antonio Ignacio; Reeves, Paul C.; Jones, John J.; Farkas, Benjamin D.

    2004-06-01

    This document describes ROCIT, a neural-inspired object recognition algorithm based on a rank-order coding scheme that uses a light-weight neuron model. ROCIT coarsely simulates a subset of the human ventral visual stream from the retina through the inferior temporal cortex. It was designed to provide an extensible baseline from which to improve the fidelity of the ventral stream model and explore the engineering potential of rank order coding with respect to object recognition. This report describes the baseline algorithm, the model's neural network architecture, the theoretical basis for the approach, and reviews the history of similar implementations. Illustrative results are used to clarify algorithm details. A formal benchmark to the 1998 FERET fafc test shows above average performance, which is encouraging. The report concludes with a brief review of potential algorithmic extensions for obtaining scale and rotational invariance.
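
ROCIT's light-weight neuron model is not detailed in this record; the basic rank-order coding idea it builds on, in which the firing order rather than the analog value carries the information and earlier (stronger) inputs are weighted more heavily, can be sketched as follows (the modulation factor is an illustrative choice, not a parameter from the report).

```python
import numpy as np

def rank_order_code(activations, modulation=0.9):
    """Convert analog activations into a rank-order code.

    Each input is replaced by its firing rank, and a decoding weight of
    modulation**rank is assigned so that earlier inputs contribute most.
    """
    order = np.argsort(-activations)            # indices from strongest to weakest
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(activations))  # rank of each input
    weights = modulation ** ranks
    return ranks, weights

ranks, weights = rank_order_code(np.array([0.2, 0.9, 0.5, 0.1]))
print(ranks)    # [2 0 1 3]
print(weights)  # the strongest input gets weight 1.0
```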

  13. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  14. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    Science.gov (United States)

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  15. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Directory of Open Access Journals (Sweden)

    Patrick Hennig

    Full Text Available BACKGROUND: Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. METHODOLOGY/PRINCIPAL FINDINGS: We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model, an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model requires smaller changes in input resistance in the inhibited neurons during visual stimulation. CONCLUSIONS/SIGNIFICANCE: Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the

  16. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs is not understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested each implementing an inhibitory neural circuit, but differing by the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  17. Objective measurement of visual resolution using the P300 to self-facial images.

    Science.gov (United States)

    Marhöfer, David J; Bach, Michael; Heinrich, Sven P

    2015-10-01

    To assess visual acuity objectively "beyond V1", the P300 event-related potential is a promising candidate and closely associated with conscious perception. However, the P300 can be willfully modulated, a disadvantage for objective visual acuity estimation. Faces are very salient stimuli and difficult to ignore. Here, we present a P300-type paradigm to assess visual acuity with faces. Gray-scale portraits of the respective subject served as oddball stimuli (probability 1/7), scrambled versions of these as the standard stimuli (probability 6/7). Furthermore, stimuli were spatially high-pass filtered (at 0, 2.2, 4.2 and 8.3 cpd), making them recognizable only with sufficient acuity. Acuity was systematically reduced by dioptric blur, chosen individually to render faces unrecognizable when high-passed at ≥ 4.2 cpd. EEG was recorded from 11 subjects at 32 scalp positions and re-referenced to the average of TP9 and TP10. One of the rare face variants was designated as target, for which a button had to be pressed. The event-related potential was dominated by the P300 at 300-800 ms. All subjects showed a statistically significant P300 for 0- to 8.3-cpd filtering. When vision was blurred, the fraction of significant P300 responses to 8.3-cpd filtered faces dropped to 18%, but stayed at 100% for 4.2 cpd. Another component, the vertex positive potential (VPP) at 170 ms, was undetectable in most participants with blur and all levels of filtering, even when the images were recognizable. The study demonstrates the feasibility of a face-based P300 approach to objectively assess visual acuity. The sensitivity to stimulus degradation was comparable to that of a grating-based approach as previously reported. An unexpected finding was the differing behavior of the P300 and the VPP. The VPP was quite sensitive to high-pass filtering, while the P300 sustained stronger filtering, although for its generation, the faces must also be discriminated from scrambled faces.

  18. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    Science.gov (United States)

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model on visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well circumscribed brain lesion due to stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral for the normal flow of shape and of contour information into the ventral stream system allowing to recognize objects.

  19. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  20. An objective electrophysiological marker of face individualisation impairment in acquired prosopagnosia with fast periodic visual stimulation.

    Science.gov (United States)

    Liu-Shuang, Joan; Torfs, Katrien; Rossion, Bruno

    2016-03-01

    One of the most striking pieces of evidence for a specialised face processing system in humans is acquired prosopagnosia, i.e. the inability to individualise faces following brain damage. However, a sensitive and objective non-behavioural marker for this deficit is difficult to provide with standard event-related potentials (ERPs), such as the well-known face-related N170 component reported and investigated in-depth by our late distinguished colleague Shlomo Bentin. Here we demonstrate that fast periodic visual stimulation (FPVS) in electrophysiology can quantify face individualisation impairment in acquired prosopagnosia. In Experiment 1 (Liu-Shuang et al., 2014), identical faces were presented at a rate of 5.88 Hz (i.e., ≈ 6 images/s, SOA=170 ms, 1 fixation per image), with different faces appearing every 5th face (5.88 Hz/5=1.18 Hz). Responses of interest were identified at these predetermined frequencies (i.e., objectively) in the EEG frequency-domain data. A well-studied case of acquired prosopagnosia (PS) and a group of age- and gender-matched controls completed only 4 × 1-min stimulation sequences, with an orthogonal fixation cross task. In contrast to controls, PS did not show face individualisation responses at 1.18 Hz, in line with her prosopagnosia. However, her response at 5.88 Hz, reflecting general visual processing, was within the normal range. In Experiment 2 (Rossion et al., 2015), we presented natural (i.e., unsegmented) images of objects at 5.88 Hz, with face images shown every 5th image (1.18 Hz). In accordance with her preserved ability to categorise a face as a face, and despite extensive brain lesions potentially affecting the overall EEG signal-to-noise ratio, PS showed 1.18 Hz face-selective responses within the normal range. Collectively, these findings show that fast periodic visual stimulation provides objective and sensitive electrophysiological markers of preserved and impaired face processing abilities in the neuropsychological
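
The response of interest is quantified objectively in the EEG frequency domain at the predetermined oddball frequency; a hedged sketch of this kind of analysis, expressing the amplitude at the target frequency as a z-score against neighbouring bins, is shown below. The bin counts and the synthetic example are assumptions for illustration, not the study's exact analysis parameters.

```python
import numpy as np

def fpvs_response(eeg, fs, target_freq, n_neighbors=10):
    """Quantify an FPVS response as a z-score of the amplitude spectrum at
    the target frequency relative to surrounding frequency bins.

    eeg         : (n_samples,) single-channel EEG segment
    fs          : sampling rate in Hz
    target_freq : frequency of interest, e.g. 1.18 (oddball) or 5.88 (base)
    """
    amp = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    # Neighbouring bins on both sides, skipping the bins adjacent to the target
    neigh = np.r_[amp[idx - n_neighbors - 1:idx - 1],
                  amp[idx + 2:idx + n_neighbors + 2]]
    return (amp[idx] - neigh.mean()) / neigh.std()

# Example: a 1.18 Hz signal buried in noise, 60 s recorded at 512 Hz
fs = 512
t = np.arange(0, 60, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 1.18 * t) + np.random.randn(len(t))
print(fpvs_response(eeg, fs, 1.18))
```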

  1. Structural similarity and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationship...... range of candidate integral units will be activated and compete for selection, thus explaining the higher error rate associated with animals. We evaluate the model based on previous evidence from both normal subjects and patients with category-specific disorders and argue that this model can help...

  2. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  3. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    Science.gov (United States)

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Contested Categories

    DEFF Research Database (Denmark)

    Drawing on social science perspectives, Contested Categories presents a series of empirical studies that engage with the often shifting and day-to-day realities of life sciences categories. In doing so, it shows how such categories remain contested and dynamic, and that the boundaries they create...... to life science categories. With contributions from an international team of scholars, this book will be essential reading for anyone interested in the social, legal, policy and ethical implications of science and technology and the life sciences....

  5. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors 0270-6474/14/3414170-11$15.00/0.

  6. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    National Research Council Canada - National Science Library

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76...

  7. Structural similarity causes different category-effects depending on task characteristics

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2001-01-01

    It has been suggested that category-specific impairments for natural objects may reflect that natural objects are more globally visually similar than artefacts and therefore more difficult to recognize following brain damage [Aphasiology 13 (1992) 169]. This account has been challenged...... whether category effects could be found on object decision tasks (deciding whether pictures represented real objects or not), when the stimulus material was matched across categories. In experiment 1, a disadvantage for natural objects was found on difficult object decision tasks whereas no category...

  8. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    Science.gov (United States)

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low
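
Two of the three measures used to compare VGG19 units with IT neurons can be sketched directly from a neuron-by-stimulus response matrix; the example below computes excess kurtosis along both axes as proxies for single-neuron selectivity and population sparseness. The Pareto tail index and intrinsic-dimensionality estimates are omitted, and the random data are purely illustrative.

```python
import numpy as np
from scipy.stats import kurtosis

def response_statistics(R):
    """Summarise a neuron-by-stimulus response matrix with excess kurtosis.

    R : (n_neurons, n_stimuli) array of firing rates or unit activations
    Returns mean single-neuron selectivity (kurtosis of each neuron's tuning
    curve across stimuli) and mean population sparseness (kurtosis of each
    stimulus' response across neurons).
    """
    selectivity = kurtosis(R, axis=1, fisher=True)  # per neuron, across stimuli
    sparseness = kurtosis(R, axis=0, fisher=True)   # per stimulus, across neurons
    return selectivity.mean(), sparseness.mean()

# Example with random "responses" for 674 neurons x 806 stimuli
R = np.random.gamma(shape=2.0, scale=1.0, size=(674, 806))
print(response_statistics(R))
```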

  9. Metacognition of visual short-term memory: Dissociation between objective and subjective components of VSTM

    Directory of Open Access Journals (Sweden)

    Silvia eBona

    2013-02-01

    Full Text Available The relationship between the objective accuracy of visual short-term memory (VSTM) representations and their subjective conscious experience is unknown. We investigated this issue by assessing how the objective and subjective components of VSTM in a delayed cue-target orientation discrimination task are affected by intervening distractors. On each trial, participants were shown a memory cue (a grating), the orientation of which they were asked to hold in memory. On approximately half of the trials, a distractor grating appeared during the maintenance interval; its orientation was either identical to that of the memory cue, or it differed by 10 or 40 degrees. The distractors were masked and presented briefly, so they were only consciously perceived on a subset of trials. At the end of the delay period, a memory test probe was presented, and participants were asked to indicate whether it was tilted to the left or right relative to the memory cue (VSTM accuracy; objective performance). In order to assess subjective metacognition, participants were asked to indicate the vividness of their memory for the original memory cue. Finally, participants were asked to rate their awareness of the distractor. Results showed that objective VSTM performance was impaired by distractors only when the distractors were very different from the cue, and that this occurred with both subjectively visible and invisible distractors. Subjective metacognition, however, was impaired by distractors of all orientations, but only when these distractors were subjectively invisible. Our results thus indicate that the objective and subjective components of VSTM are to some extent dissociable.

  10. Stochastic process underlying emergent recognition of visual objects hidden in degraded images.

    Science.gov (United States)

    Murata, Tsutomu; Hamada, Takashi; Shimokawa, Tetsuya; Tanifuji, Manabu; Yanagida, Toshio

    2014-01-01

    When a degraded two-tone image such as a "Mooney" image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, whose processes remain poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to levels of image degradation and subject's capability. Because generally an exponential time is required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complement the information loss of a degraded image leading to the recognition of its hidden object, which could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with the task of perceptual decision making of degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by the underlying stochastic process which is based on the coincidence of multiple stochastic events.
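
The abstract states that the time-consuming component of the reaction times follows a specific exponential function of image degradation and subject capability; as a hedged illustration, one plausible parameterisation can be fitted with scipy's curve_fit. The functional form, parameter names, and data below are assumptions for the example, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def rt_model(degradation, t0, a, b):
    """Reaction time modelled as a fast baseline component t0 plus a
    component that grows exponentially with image degradation."""
    return t0 + a * np.exp(b * degradation)

# Example: median RTs (seconds) at five degradation levels (illustrative data)
degradation = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
median_rt = np.array([1.2, 1.8, 3.1, 6.4, 14.0])
params, _ = curve_fit(rt_model, degradation, median_rt, p0=(1.0, 0.5, 3.0))
print(dict(zip(['t0', 'a', 'b'], params)))
```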

  11. Stochastic process underlying emergent recognition of visual objects hidden in degraded images.

    Directory of Open Access Journals (Sweden)

    Tsutomu Murata

    Full Text Available When a degraded two-tone image such as a "Mooney" image is seen for the first time, it is unrecognizable in the initial seconds. The recognition of such an image is facilitated by giving prior information on the object, which is known as top-down facilitation and has been intensively studied. Even in the absence of any prior information, however, we experience sudden perception of the emergence of a salient object after continued observation of the image, whose processes remain poorly understood. This emergent recognition is characterized by a comparatively long reaction time ranging from seconds to tens of seconds. In this study, to explore this time-consuming process of emergent recognition, we investigated the properties of the reaction times for recognition of degraded images of various objects. The results show that the time-consuming component of the reaction times follows a specific exponential function related to levels of image degradation and subject's capability. Because generally an exponential time is required for multiple stochastic events to co-occur, we constructed a descriptive mathematical model inspired by the neurophysiological idea of combination coding of visual objects. Our model assumed that the coincidence of stochastic events complement the information loss of a degraded image leading to the recognition of its hidden object, which could successfully explain the experimental results. Furthermore, to see whether the present results are specific to the task of emergent recognition, we also conducted a comparison experiment with the task of perceptual decision making of degraded images, which is well known to be modeled by the stochastic diffusion process. The results indicate that the exponential dependence on the level of image degradation is specific to emergent recognition. The present study suggests that emergent recognition is caused by the underlying stochastic process which is based on the coincidence of multiple

  12. Categorial Graphs

    NARCIS (Netherlands)

    de Haas, E.; Reichel, H.

    1995-01-01

    In this paper we present a denotational semantics for a class of database definition languages. We present a language, called categorial graph language, that combines both graphical and textual phrases and is tailored to define databases. The categorial graph language is modeled after a number of

  13. Time course of processes and representations supporting visual object identification and memory.

    Science.gov (United States)

    Schendan, Haline E; Kutas, Marta

    2003-01-01

    Event-related potentials (ERPs) were used to delineate the time course of activation of the processes and representations supporting visual object identification and memory. Following K. Srinivas (1993), 66 young people named objects in canonical or unusual views during study and an indirect memory test. Test views were the same or different from those at study. The first ERP repetition effect and earliest ERP format effect started at approximately 150 msec. Multiple ERP repetition effects appeared over time. All but the latest ones were largest for same views, although other aspects of their form specificity varied. Initial ERP format effects support multiple-views-plus-transformation accounts of identification and indicate the timing of processes of object model selection (frontal N350 from 148-250 to 500-700 msec) and view transformation via mental rotation (posterior N400/P600 from 250-356 to 700 msec). Thereafter, a late slow wave reflects a memory process more strongly recruited by different than same views. Overall, the ERP data demonstrate the activation of multiple memory processes over time during an indirect test, with earlier ones (within 148-400 msec) characterized by a pattern of form specificity consistent with the specific identification-related neural process or representational system supporting each memory function.

  14. High frequency gamma activity in the left hippocampus predicts visual object naming performance.

    Science.gov (United States)

    Hamamé, Carlos M; Alario, F-Xavier; Llorens, Anais; Liégeois-Chauvel, Catherine; Trébuchon-Da Fonseca, Agnés

    2014-08-01

    Access to an object's name requires the retrieval of an arbitrary association between its identity and a word-label. The hippocampus is essential in retrieving arbitrary associations, and thus could be involved in retrieving the link between an object and its name. To test this hypothesis we recorded the iEEG signal from epileptic patients, directly implanted in the hippocampus, while they performed a picture naming task. High-frequency broadband gamma (50-150 Hz) responses were computed as an index of population-level spiking activity. Our results show, for the first time, single-trial hippocampal dynamics between visual confrontation and naming. Remarkably, the latency of the hippocampal response predicts naming latency, while inefficient hippocampal activation is associated with "tip-of-the-tongue" states (a failure to retrieve the name of a recognized object), suggesting that the hippocampus is an active component of the naming network and that its dynamics are closely related to efficient word production. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Visual working memory modulates within-object metrics of saccade landing position.

    Science.gov (United States)

    Hollingworth, Andrew

    2015-03-01

    In two experiments, we examined the influence of visual working memory (VWM) on oculomotor selection, testing whether the landing positions of rapidly generated saccades are biased toward the region of an object that matches a feature held in VWM. Participants executed a saccade to the center of a single saccade target, divided into two colored regions and presented on the horizontal midline. Concurrently, participants maintained a color in VWM for an unrelated memory task. This color either matched one of the two regions or neither of the regions. Relative to the no-match baseline, the landing positions of rapidly generated saccades (mean latency < 150 ms) were biased toward the region that matched the remembered color. The results support the hypothesis that VWM modulates early, spatially organized sensory representations to bias selection toward locations with features that match VWM content. In addition, the results demonstrate that saccades to spatially extended objects are sensitive to within-object differences in salience. © 2015 New York Academy of Sciences.

  16. Effect of repetition lag on priming of unfamiliar visual objects in young and older adults.

    Science.gov (United States)

    Gordon, Leamarie T; Soldan, Anja; Thomas, Ayanna K; Stern, Yaakov

    2013-03-01

    Across three experiments, we examined the effect of repetition lag on priming of unfamiliar visual objects in healthy young and older adults. Multiple levels of lag were examined, ranging from short (one to four intervening stimuli) to long (50+ intervening stimuli). In each experiment, subjects viewed a series of new and repeated line drawings of objects and decided whether they depicted structurally possible or impossible figures. Experiments 1 and 2 found similar levels of priming in young and older adults at short and medium lags. At the longer repetition lags (∼20+ intervening stimuli), older adults showed less overall priming, as measured by reaction time (RT) facilitation, than young adults. This indicates that older adults can rapidly encode unfamiliar three-dimensional objects to support priming at shorter lags; however, they cannot maintain these representations over longer intervals. In addition to repetition lag, we also explored the relationship between priming and cognitive reserve, as measured by education and verbal intelligence. In the older adults, higher levels of cognitive reserve were associated with greater RT priming, suggesting that cognitive reserve may mediate the relationship between aging and priming.

  17. Visual Stability of Objects and Environments Viewed through Head-Mounted Displays

    Science.gov (United States)

    Ellis, Stephen R.; Adelstein, Bernard D.

    2015-01-01

    Virtual Environments (aka Virtual Reality) are again catching the public imagination, and a number of startups (e.g. Oculus) and even not-so-startup companies (e.g. Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the required dynamic fidelity, visual quality, and interesting content for the concept of VR to take off and change the world in ways it failed to do in past incarnations. Some of the surprisingly long historical background of the form of direct simulation that underlies virtual environment and augmented reality displays will be briefly reviewed. An example of a mid-1990s augmented reality display system with good dynamic performance from our lab will be used to illustrate some of the underlying phenomena and technology concerning visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.

  18. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    Full Text Available With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC, an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  19. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
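
    The harvest–enrich–index stage described above can be sketched, very roughly, with open-source clients for the stack the paper names. The snippet below is a hypothetical illustration only: the Kafka topic name, Solr core URL, document fields, and enrichment stubs are assumptions, not the BOP's actual schema or code.

    # Hypothetical sketch of a geo-tweet enrichment/indexing worker (not the BOP's code).
    import json

    import pysolr                     # HTTP client for Apache Solr
    from kafka import KafkaConsumer   # kafka-python consumer

    SOLR_URL = "http://localhost:8983/solr/geotweets"   # assumed Solr core
    TOPIC = "raw-geotweets"                              # assumed Kafka topic

    def enrich(tweet: dict) -> dict:
        """Attach sentiment and admin-boundary codes (both stubbed here)."""
        tweet["sentiment"] = 0.0          # placeholder for a scikit-learn classifier score
        tweet["admin_code"] = "UNKNOWN"   # placeholder for a point-in-polygon lookup
        return tweet

    def main() -> None:
        solr = pysolr.Solr(SOLR_URL, timeout=10)
        consumer = KafkaConsumer(
            TOPIC,
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        )
        batch = []
        for message in consumer:              # endless stream of incoming geo-tweets
            doc = enrich(message.value)
            batch.append({
                "id": doc["id"],
                "text": doc.get("text", ""),
                "coordinates": doc.get("coordinates"),   # e.g. "lat,lon"
                "created_at": doc.get("created_at"),
                "sentiment": doc["sentiment"],
                "admin_code": doc["admin_code"],
            })
            if len(batch) >= 500:             # index in bulk, then commit
                solr.add(batch, commit=True)
                batch.clear()

    if __name__ == "__main__":
        main()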

  20. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    Science.gov (United States)

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. Our results thus demonstrate that

  1. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an amazing navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (Javascript) also makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step beyond, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be applied to 3D models. However, major GIS-based functionalities combined with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
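
    As a small illustration of the georeferenced motion paths mentioned above, the following sketch (our own, independent of the authors' prototype) linearly interpolates a model's longitude/latitude/altitude between user-defined waypoints as a function of time; in a WebGL earth browser the returned position would be fed to the model's placement on every animation frame.

    # Hypothetical sketch: sample a georeferenced path for an animated 3D model.
    from bisect import bisect_right

    # (time_s, longitude_deg, latitude_deg, altitude_m) waypoints -- illustrative values only
    WAYPOINTS = [
        (0.0,  23.727, 37.983, 120.0),
        (10.0, 23.730, 37.985, 140.0),
        (25.0, 23.735, 37.990, 100.0),
    ]

    def position_at(t: float):
        """Linearly interpolate (lon, lat, alt) along the waypoint path at time t."""
        times = [w[0] for w in WAYPOINTS]
        if t <= times[0]:
            return WAYPOINTS[0][1:]
        if t >= times[-1]:
            return WAYPOINTS[-1][1:]
        i = bisect_right(times, t)           # index of the first waypoint after t
        t0, *p0 = WAYPOINTS[i - 1]
        t1, *p1 = WAYPOINTS[i]
        f = (t - t0) / (t1 - t0)             # interpolation fraction in [0, 1]
        return tuple(a + f * (b - a) for a, b in zip(p0, p1))

    # e.g. call position_at(frame_time) each frame and update the Collada model's placement
    print(position_at(5.0))   # midway between the first two waypoints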

  2. What is a Visual Object? Evidence from the Reduced Interference of Grouping in Multiple Object Tracking for Children with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Lee de-Wit

    2012-05-01

    Full Text Available Objects offer a critical unit with which we can organise our experience of the world. However, whilst their influence on perception and cognition may be fundamental, understanding how objects are constructed from sensory input remains a key challenge for vision research and psychology in general. A potential window into the means by which objects are constructed in the visual system is offered by the influence that they have on the allocation of attention. In Multiple Object Tracking (MOT), for example, attention is automatically allocated to whole objects, even when this interferes with the tracking of the parts of these objects. In this study we demonstrate that this default tendency to track whole objects is reduced in children with Autism Spectrum Disorders (ASD). This result both validates the use of MOT as a window into how objects are generated in the visual system and highlights how the reduced bias towards more global processing in ASD could influence further stages of cognition by altering the way in which attention selects information for further processing.

  3. Automatic detection of orientation changes of faces versus non-face objects: a visual MMN study.

    Science.gov (United States)

    Wang, Wei; Miao, Danmin; Zhao, Lun

    2014-07-01

    To investigate the automatic change detection of faces versus non-face objects, the visual mismatch negativity (vMMN) elicited by deviant orientation (90° versus 0°) for faces and houses, respectively, was recorded using the deviant-standard-reversed paradigm. The present face stimuli elicited a larger N170 than did houses, regardless of orientation. A larger and delayed N170 was elicited for deviant rotated faces than for standard rotated faces, whereas the N170 did not differ between deviant and standard rotated houses. The rotated faces elicited a larger vMMN amplitude and a shorter vMMN latency than did the rotated houses. The face MMN, with a right occipito-temporal scalp distribution, was larger for the rotated than the upright condition, but orientation did not modulate the amplitude of the house MMN. These data provide electrophysiological evidence for greater sensitivity to orientation changes of faces than of non-face objects, even in the absence of attention, due to the disruption of configural processing caused by face rotation. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. PROFESSIONAL CATEGORIES

    Directory of Open Access Journals (Sweden)

    Sorin Fildan

    2012-01-01

    Full Text Available The transition process which Romanian commercial law underwent has affected both the term 'trader', by redefining it, and the classification of professional categories. Currently, the term 'professional' is defined by a descriptive listing of the categories of persons it comprises: traders, entrepreneurs, business operators, as well as any other person authorized to carry out economic or professional activities.

  5. Visual long-term memory is not unitary: Flexible storage of visual information as features or objects as a function of affect.

    Science.gov (United States)

    Spachtholz, Philipp; Kuhbandner, Christof

    2017-12-01

    Research has shown that observers store surprisingly highly detailed long-term memory representations of visual objects after only a single viewing. However, the nature of these representations is currently not well understood. In particular, it may be that the nature of such memory representations is not unitary but reflects the flexible operating of two separate memory subsystems: a feature-based subsystem that stores visual experiences in the form of independent features, and an object-based subsystem that stores visual experiences in the form of coherent objects. Such an assumption is usually difficult to test, because overt memory responses reflect the joint output of both systems. Therefore, to disentangle the two systems, we (1) manipulated the affective state of observers (negative vs. positive) during initial object perception, to introduce systematic variance in the way that visual experiences are stored, and (2) measured both the electrophysiological activity at encoding (via electroencephalography) and later feature memory performance for the objects. The results showed that the nature of stored memory representations varied qualitatively as a function of affective state. Negative affect promoted the independent storage of object features, driven by preattentive brain activities (feature-based memory representations), whereas positive affect promoted the dependent storage of object features, driven by attention-related brain activities (object-based memory representations). Taken together, these findings suggest that visual long-term memory is not a unitary phenomenon. Instead, incoming information can be stored flexibly by means of two qualitatively different long-term memory subsystems, based on the requirements of the current situation.

  6. Objectivity

    CERN Document Server

    Daston, Lorraine

    2010-01-01

    Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences--and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images. From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences--from anatomy to crystallography--are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a...

  7. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal and additional sensory information have limited effect on this.

  8. Structural similarity and category-specificity: a refined account

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Law, Ian

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationship ...

  9. The Anatomy of Object Recognition--Visual Form Agnosia Caused by Medial Occipitotemporal Stroke

    National Research Council Canada - National Science Library

    Karnath, Hans-Otto; Ruter, Johannes; Mandler, Andre; Himmelbach, Marc

    2009-01-01

    .... It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other...

  10. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    Science.gov (United States)

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  11. Remembering the Specific Visual Details of Presented Objects: Neuroimaging Evidence for Effects of Emotion

    Science.gov (United States)

    Kensinger, Elizabeth A.; Schacter, Daniel L.

    2007-01-01

    Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…

  12. Fragile visual short-term memory is an object-based and location-specific store

    NARCIS (Netherlands)

    Pinto, Y.; Sligte, I.G.; Shapiro, K.L.; Lamme, V.A.F.

    2013-01-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to

  13. Stroboscopic Image Modulation to Reduce the Visual Blur of an Object Being Viewed by an Observer Experiencing Vibration

    Science.gov (United States)

    Kaiser, Mary K. (Inventor); Adelstein, Bernard D. (Inventor); Anderson, Mark R. (Inventor); Beutter, Brent R. (Inventor); Ahumada, Albert J., Jr. (Inventor); McCann, Robert S. (Inventor)

    2014-01-01

    A method and apparatus for reducing the visual blur of an object being viewed by an observer experiencing vibration. In various embodiments of the present invention, the visual blur is reduced through stroboscopic image modulation (SIM). A SIM device is operated in an alternating "on/off" temporal pattern according to a SIM drive signal (SDS) derived from the vibration being experienced by the observer. A SIM device (controlled by a SIM control system) operating according to the SDS serves to reduce visual blur by "freezing" the visual image of the viewed object (or reducing the image's motion to a slow drift). In various embodiments, the SIM device is selected from the group consisting of illuminator(s), shutter(s), display control system(s), and combinations of the foregoing (including the use of multiple illuminators, shutters, and display control systems).
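
    To make the on/off drive pattern concrete, here is a toy sketch (our own illustration; the patent's actual SDS derivation is not reproduced) that unblanks the display only inside a narrow phase window around each vibration peak, so the retinal image is sampled at roughly the same displacement on every cycle.

    # Hypothetical sketch: derive a stroboscopic drive signal from a vibration trace.
    import numpy as np
    from scipy.signal import hilbert

    fs = 2000.0                                # vibration sensor sample rate, Hz (assumed)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    vibration = np.sin(2 * np.pi * 8.0 * t)    # stand-in for a measured 8 Hz seat vibration

    duty = 0.10                                # fraction of each cycle the display is "on"
    phase = np.angle(hilbert(vibration))       # instantaneous phase; ~0 at each positive peak
    sds = (np.abs(phase) < np.pi * duty).astype(np.uint8)

    # sds is the drive signal: 1 = illuminate/unblank the display, 0 = dark/blanked.
    print("on-fraction:", sds.mean())          # roughly equal to `duty`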

  14. Object perception is selectively slowed by a visually similar working memory load.

    Science.gov (United States)

    Robinson, Alan; Manzi, Alberto; Triesch, Jochen

    2008-12-22

    The capacity of visual working memory has been extensively characterized, but little work has investigated how occupying visual memory influences other aspects of cognition and perception. Here we show a novel effect: maintaining an item in visual working memory slows processing of similar visual stimuli during the maintenance period. Subjects judged the gender of computer rendered faces or the naturalness of body postures while maintaining different visual memory loads. We found that when stimuli of the same class (faces or bodies) were maintained in memory, perceptual judgments were slowed. Interestingly, this is the opposite of what would be predicted from traditional priming. Our results suggest there is interference between visual working memory and perception, caused by visual similarity between new perceptual input and items already encoded in memory.

  15. The temporal dynamics of object processing in visual cortex during the transition from distributed to focused spatial attention.

    Science.gov (United States)

    Wu, Chien-Te; Libertus, Melissa E; Meyerhoff, Karen L; Woldorff, Marty G

    2011-12-01

    Several major cognitive neuroscience models have posited that focal spatial attention is required to integrate different features of an object to form a coherent perception of it within a complex visual scene. Although many behavioral studies have supported this view, some have suggested that complex perceptual discrimination can be performed even with substantially reduced focal spatial attention, calling into question the complexity of object representation that can be achieved without focused spatial attention. In the present study, we took a cognitive neuroscience approach to this problem by recording cognition-related brain activity both to help resolve the questions about the role of focal spatial attention in object categorization processes and to investigate the underlying neural mechanisms, focusing particularly on the temporal cascade of these attentional and perceptual processes in visual cortex. More specifically, we recorded electrical brain activity in humans engaged in a specially designed cued visual search paradigm to probe the object-related visual processing before and during the transition from distributed to focal spatial attention. The onset times of the color popout cueing information, indicating where within an object array the subject was to shift attention, were parametrically varied relative to the presentation of the array (i.e., either occurring simultaneously or being delayed by 50 or 100 msec). The electrophysiological results demonstrate that some levels of object-specific representation can be formed in parallel for multiple items across the visual field under spatially distributed attention, before focal spatial attention is allocated to any of them. The object discrimination process appears to be subsequently amplified as soon as focal spatial attention is directed to a specific location and object. This set of novel neurophysiological findings thus provides important new insights on fundamental issues that have been long

  16. Airport object extraction based on visual attention mechanism and parallel line detection

    Science.gov (United States)

    Lv, Jing; Lv, Wen; Zhang, Libao

    2017-10-01

    Target extraction is one of the important aspects of remote sensing image analysis and processing, with wide applications in image compression, target tracking, target recognition and change detection. Among different targets, the airport has attracted more and more attention due to its military and civilian significance. In this paper, we propose a novel and reliable airport object extraction model combining a visual attention mechanism and a parallel line detection algorithm. First, a novel saliency analysis model for remote sensing images containing airport regions is proposed to perform statistical saliency feature analysis. The proposed model can precisely extract the most salient region and effectively suppress background interference. Then, prior geometric knowledge is analyzed, and airport runways, which contain two parallel lines of similar length, are detected efficiently. Finally, we use the improved Otsu threshold segmentation method to segment and extract the airport regions from the saliency map of the remote sensing images. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in the detection of airports.
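
    A rough sketch of the Otsu-segmentation and parallel-line steps of such a pipeline (our own illustration with OpenCV; the paper's saliency model and exact parameters are not reproduced, and the input path is hypothetical):

    # Hypothetical sketch: runway-like parallel line pairs plus Otsu segmentation.
    import cv2
    import numpy as np

    img = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)   # assumed grayscale input image

    # 1) Detect straight line segments (candidate runway edges).
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=120, maxLineGap=10)
    segments = [] if lines is None else lines[:, 0, :]     # rows of (x1, y1, x2, y2)

    def angle_and_length(seg):
        x1, y1, x2, y2 = seg
        return np.arctan2(y2 - y1, x2 - x1) % np.pi, np.hypot(x2 - x1, y2 - y1)

    # 2) Keep pairs of segments that are nearly parallel and of similar length.
    runway_pairs = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a1, l1 = angle_and_length(segments[i])
            a2, l2 = angle_and_length(segments[j])
            d = abs(a1 - a2)
            d = min(d, np.pi - d)                          # handle angle wrap-around
            if d < np.deg2rad(3) and abs(l1 - l2) / max(l1, l2) < 0.2:
                runway_pairs.append((segments[i], segments[j]))

    # 3) Otsu threshold (applied here to the raw image as a stand-in for the saliency map).
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    print(len(runway_pairs), "near-parallel segment pairs;",
          f"mask covers {mask.mean() / 255:.1%} of the image")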

  17. The 5-HT2A/1A agonist psilocybin disrupts modal object completion associated with visual hallucinations.

    Science.gov (United States)

    Kometer, Michael; Cahn, B Rael; Andel, David; Carter, Olivia L; Vollenweider, Franz X

    2011-03-01

    Recent findings suggest that the serotonergic system and particularly the 5-HT2A/1A receptors are implicated in visual processing and possibly the pathophysiology of visual disturbances including hallucinations in schizophrenia and Parkinson's disease. To investigate the role of 5-HT2A/1A receptors in visual processing the effect of the hallucinogenic 5-HT2A/1A agonist psilocybin (125 and 250 μg/kg vs. placebo) on the spatiotemporal dynamics of modal object completion was assessed in normal volunteers (n = 17) using visual evoked potential recordings in conjunction with topographic-mapping and source analysis. These effects were then considered in relation to the subjective intensity of psilocybin-induced visual hallucinations quantified by psychometric measurement. Psilocybin dose-dependently decreased the N170 and, in contrast, slightly enhanced the P1 component selectively over occipital electrode sites. The decrease of the N170 was most apparent during the processing of incomplete object figures. Moreover, during the time period of the N170, the overall reduction of the activation in the right extrastriate and posterior parietal areas correlated positively with the intensity of visual hallucinations. These results suggest a central role of the 5-HT2A/1A-receptors in the modulation of visual processing. Specifically, a reduced N170 component was identified as potentially reflecting a key process of 5-HT2A/1A receptor-mediated visual hallucinations and aberrant modal object completion potential. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  18. The relation of object naming and other visual speech production tasks:A large scale voxel-based morphometric study

    Directory of Open Access Journals (Sweden)

    Johnny King L. Lau

    2015-01-01

    Full Text Available We report a lesion–symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual–speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with models proposing that object naming relies on a left-lateralised language-dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and more general difficulties in language processing.

  19. The Attentional Fields of Visual Search in Simultanagnosia and Healthy Individuals: How Object and Space Attention Interact.

    Science.gov (United States)

    Khan, A Z; Prost-Lefebvre, M; Salemme, R; Blohm, G; Rossetti, Y; Tilikete, C; Pisella, L

    2016-03-01

    Simultanagnosia is a deficit in which patients are unable to perceive multiple objects simultaneously. To date, it remains disputed whether this deficit results from disrupted object or space perception. We asked both healthy participants as well as a patient with simultanagnosia to perform different visual search tasks of variable difficulty. We also modulated the number of objects (target and distracters) presented. For healthy participants, we found that each visual search task was performed with a specific "attentional field" depending on the difficulty of visual object processing but not on the number of objects falling within this "working space." This was demonstrated by measuring the cost in reaction times using different gaze-contingent visible window sizes. We found that bilateral damage to the superior parietal lobule impairs the spatial integration of separable features (within-object processing), shrinking the attentional field in which a target can be detected, but causing no deficit in processing multiple objects per se. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Alternation of sound location induces visual motion perception of a static object.

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    Full Text Available BACKGROUND: Audition provides important cues with regard to stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion. METHODOLOGY/PRINCIPAL FINDINGS: A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized. CONCLUSIONS/SIGNIFICANCE: We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, which yields the best estimates of external events in a complementary manner.

  1. The predicting brain: anticipation of moving objects in human visual cortex

    NARCIS (Netherlands)

    Schellekens, W.

    2015-01-01

    The human brain is nearly constantly subjected to visual motion signals originating from a large variety of external sources. It is the job of the central nervous system to determine correspondence among visual motion input across spatially distant locations within certain time frames. In order to

  2. The Nigerian national blindness and visual impairment survey: Rationale, objectives and detailed methodology

    Science.gov (United States)

    Dineen, Brendan; Gilbert, Clare E; Rabiu, Mansur; Kyari, Fatima; Mahdi, Abdull M; Abubakar, Tafida; Ezelum, Christian C; Gabriel, Entekume; Elhassan , Elizabeth; Abiose, Adenike; Faal, Hannah; Jiya, Jonathan Y; Ozemela, Chinenyem P; Lee, Pak Sang; Gudlavalleti, Murthy VS

    2008-01-01

    Background Despite having the largest population in Africa, Nigeria has no accurate population based data to plan and evaluate eye care services. A national survey was undertaken to estimate the prevalence and determine the major causes of blindness and low vision. This paper presents the detailed methodology used during the survey. Methods A nationally representative sample of persons aged 40 years and above was selected. Children aged 10–15 years and individuals aged measured followed by assessment of presenting visual acuity, refractokeratomery, A-scan ultrasonography, visual fields and best corrected visual acuity. Anterior and posterior segments of each eye were examined with a torch and direct ophthalmoscope. Participants with visual acuity of blindness in Nigeria. The survey would also provide information on barriers to accessing services, quality of life of visually impaired individuals and also provide normative data for Nigerian eyes. PMID:18808712

  3. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    Directory of Open Access Journals (Sweden)

    Carlos M. Mateo

    2016-05-01

    Full Text Available Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that it works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor
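
    The deformation-event step described above can be caricatured with a simple depth-difference test (our own sketch, not the authors' pipeline), assuming two aligned depth frames from the RGBD sensor and a binary mask of the grasped object:

    # Hypothetical sketch: flag a surface-deformation event from consecutive depth frames.
    import numpy as np

    def deformation_event(depth_prev: np.ndarray,
                          depth_curr: np.ndarray,
                          object_mask: np.ndarray,
                          threshold_m: float = 0.004) -> bool:
        """Return True if the object's visible surface moved more than threshold_m on average.

        depth_prev, depth_curr : HxW depth maps in metres from the RGBD sensor
        object_mask            : HxW boolean mask of the grasped object's pixels
        """
        valid = object_mask & (depth_prev > 0) & (depth_curr > 0)   # drop missing depth
        if not valid.any():
            return False
        mean_change = float(np.abs(depth_curr - depth_prev)[valid].mean())
        return mean_change > threshold_m

    # Called once per frame; a True result would be sent as an event message to the robot
    # controller so it can, for example, relax the fingers' grip.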

  4. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Tactile force perception depends on the visual speed of the collision object.

    Science.gov (United States)

    Arai, Kan; Okajima, Katsunori

    2009-10-22

    Previous research on the interaction between vision and touch has employed static visual and continuous tactile stimuli, and has shown that two kinds of multimodal interaction effect exist: the averaging effect and the contrast effect. The averaging effect has been used to explain several kinds of stimuli interaction while the contrast effect is associated only with the size-weight illusion (A. Charpentier, 1891). Here, we describe a novel visuotactile interaction using visual motion information that can be explained with the contrast effect. We show that the magnitude of tactile force perception (MTFP) from an impact on the palm is significantly modified by the visual motion information of a virtual collision event. Our collision simulator generates visual stimuli independently from the corresponding tactile stimuli. The results show that visual speed modified MTFP even though the actual contact force remained constant: higher visual pre- and post-collision speeds induced lower tactile force perception. Finally, we propose a quantitative model of MTFP in which MTFP is expressed as a function of the visual velocity difference, suggesting that the gain of the tactile perception in the human brain is altered via MTFP modulation.

  6. Limitations of attentional orienting. Effects of abrupt visual onsets and offsets on naming two objects in a patient with simultanagnosia.

    Science.gov (United States)

    Pavese, Antonella; Coslett, H Branch; Saffran, Eleanor; Buxbaum, Laurel

    2002-01-01

    It has been proposed that the underlying deficit for some simultanagnosics is the inability to bilaterally orient attention in space due to parietal damage. In five experiments, we examine the performance of a patient with simultanagnosia secondary to bilateral occipito-parietal lesions, IC, in naming pairs of line-drawings. With simultaneous presentation and disappearance of objects (Experiment 1), IC typically named a single object. IC's performance dramatically improved when the two drawings alternated every 500 ms (Experiment 2). This improvement was not due to the abrupt onset of the second drawing "capturing attention", as indicated by the results of Experiment 3. Experiments 4 and 5 demonstrated that the crucial factor in improving IC's performance with simultaneous presentation of visual objects was the offset of one of the two stimuli. We propose that IC's impairment in naming two objects is attributable to the inability to "unlock" attention from the first object detected to other objects in the array. Visual offset of the first object disengages attention from the first object, allowing it to be allocated to the second object.

  7. A Morphism Double Category and Monoidal Structure

    Directory of Open Access Journals (Sweden)

    Saikat Chatterjee

    2013-01-01

    Full Text Available We provide a recipe for “fattening” a category that leads to the construction of a double category. Motivated by an example where the underlying category has vector spaces as objects, we show how a monoidal category leads to a law of composition, satisfying certain coherence properties, on the object set of the fattened category.

  8. Object-Spatial Visualization and Verbal Cognitive Styles, and Their Relation to Cognitive Abilities and Mathematical Performance

    Science.gov (United States)

    Haciomeroglu, Erhan Selcuk

    2016-01-01

    The present study investigated the object-spatial visualization and verbal cognitive styles among high school students and related differences in spatial ability, verbal-logical reasoning ability, and mathematical performance of those students. Data were collected from 348 students enrolled in Advanced Placement calculus courses at six high…

  9. A new 2-dimensional method for constructing visualized treatment objectives for distraction osteogenesis of the short mandible

    NARCIS (Netherlands)

    van Beek, H.

    2010-01-01

    Open bite development during distraction of the mandible is common and partly due to inaccurate planning of the treatment. Conflicting guidelines exist in the literature. A method for Visualized Treatment Objective (VTO) construction is presented as an aid for determining the correct orientation of

  10. Visual agnosia for line drawings and silhouettes without apparent impairment of real-object recognition: a case report.

    Science.gov (United States)

    Hiraoka, Kotaro; Suzuki, Kyoko; Hirayama, Kazumi; Mori, Etsuro

    2009-01-01

    We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  11. Visual Agnosia for Line Drawings and Silhouettes without Apparent Impairment of Real-Object Recognition: A Case Report

    Directory of Open Access Journals (Sweden)

    Kotaro Hiraoka

    2009-01-01

    Full Text Available We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  12. The role of hemifield sector analysis in multifocal visual evoked potential objective perimetry in the early detection of glaucomatous visual field defects.

    Science.gov (United States)

    Mousa, Mohammad F; Cubbidge, Robert P; Al-Mansouri, Fatima; Bener, Abdulbari

    2013-01-01

    The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Three groups were tested in this study; normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests: one with the Humphrey Field Analyzer and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Analysis of the mfVEP showed that the signal to noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P field defects detected by standard perimetry, was able to differentiate between the three study groups with a clear distinction between normal patients and those with suspected glaucoma, and was able to detect early visual field changes not detected by standard perimetry. In addition, the distinction between normal and glaucoma patients was especially clear and significant using this analysis. The new hemifield sector analysis protocol used in mfVEP testing can be used to detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. Using this protocol, it can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. The sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucomatous visual field loss. The intersector analysis protocol can detect early field changes not detected by the standard Humphrey Field Analyzer test.

  13. Different measures of structural similarity tap different aspects of visual object processing

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    The structural similarity of objects has been an important variable in explaining why some objects are easier to categorize at a superordinate level than to individuate, and also why some patients with brain injury have more difficulties in recognizing natural (structurally similar) objects than ...

  14. A Multi-Objective Approach to Visualize Proportions and Similarities Between Individuals by Rectangular Maps

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual, th...

  15. Real-world spatial regularities affect visual working memory for objects

    NARCIS (Netherlands)

    Kaiser, D.; Stein, T.; Peelen, M.V.

    2015-01-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic

  16. Visual Attention as a Function of Sex and Apparel of Stimulus Object: Who Looks at Whom?

    Science.gov (United States)

    Rosenwasser, Shirley Miller; And Others

    1983-01-01

    Examined same-sex awareness by comparing the visual attention of 51 college students toward stimulus persons. Results showed men looked longer at slides of women both clothed and in bathing suits than slides of men, and women looked longest at slides of clothed women. Results suggested intrasex competitiveness and intersex attraction. (JAC)

  17. Massive Memory Revisited: Limitations on Storage Capacity for Object Details in Visual Long-Term Memory

    Science.gov (United States)

    Cunningham, Corbin A.; Yassa, Michael A.; Egeth, Howard E.

    2015-01-01

    Previous work suggests that visual long-term memory (VLTM) is highly detailed and has a massive capacity. However, memory performance is subject to the effects of the type of testing procedure used. The current study examines detail memory performance by probing the same memories within the same subjects, but using divergent probing methods. The…

  18. Prototypical components of honeybee homing flight behaviour depend on the visual appearance of objects surrounding the goal

    Directory of Open Access Journals (Sweden)

    Elke eBraun

    2012-01-01

    Full Text Available Honeybees use visual cues to relocate profitable food sources and their hive. What bees see while navigating depends on the appearance of the cues and on the bee's current position, orientation and movement relative to them. Here we analyse the detailed flight behaviour during the localisation of a goal surrounded by cylinders that are characterised either by a high contrast in luminance and texture or by mostly motion contrast relative to the background. By relating flight behaviour to the nature of the information available from these landmarks, we aim to identify behavioural strategies that facilitate the processing of visual information during goal localisation. We decompose flight behaviour into prototypical movements using clustering algorithms in order to reduce the behavioural complexity. The determined prototypical movements reflect the honeybee's saccadic flight pattern, which largely separates rotational from translational movements. During phases of translational movement between fast saccadic rotations, the bees can gain information about the three-dimensional layout of their environment from the translational optic flow. The prototypical movements reveal the prominent role of sideways and up- or downward movements, which can help bees to gather information about objects, particularly in the frontal visual field. We find that the occurrence of specific prototypes depends on the bees' distance from the landmarks and the feeder, and that changing the texture of the landmarks evokes different prototypical movements. The adaptive use of different behavioural prototypes shapes the visual input and can facilitate information processing in the bees' visual system during local navigation.
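
    The prototype-extraction step can be sketched with a standard clustering routine (our own illustration; the feature set, cluster count, and data below are assumptions, not the authors' choices): describe each short flight segment by a few kinematic features and read every cluster centre as one prototypical movement.

    # Hypothetical sketch: cluster flight segments into prototypical movements.
    import numpy as np
    from sklearn.cluster import KMeans

    # One row per flight segment; assumed features:
    # [yaw rate, forward speed, sideways speed, vertical speed]
    rng = np.random.default_rng(0)
    segments = rng.normal(size=(500, 4))    # stand-in for real tracked trajectory segments

    n_prototypes = 6                        # assumed number of prototypical movements
    kmeans = KMeans(n_clusters=n_prototypes, n_init=10, random_state=0).fit(segments)

    labels = kmeans.labels_                 # prototype assigned to each segment
    prototypes = kmeans.cluster_centers_    # kinematic signature of each prototype
    print(np.bincount(labels))              # how often each prototype occurs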

  19. Automatic Tuning of a Retina Model for a Cortical Visual Neuroprosthesis Using a Multi-Objective Optimization Genetic Algorithm.

    Science.gov (United States)

    Martínez-Álvarez, Antonio; Crespo-Cano, Rubén; Díaz-Tahoces, Ariadna; Cuenca-Asensi, Sergio; Ferrández Vicente, José Manuel; Fernández, Eduardo

    2016-11-01

    The retina is a very complex neural structure, which contains many different types of neurons interconnected with great precision, enabling sophisticated conditioning and coding of the visual information before it is passed via the optic nerve to higher visual centers. The encoding of visual information is one of the basic questions in visual and computational neuroscience and is also of seminal importance in the field of visual prostheses. In this framework, it is essential to have artificial retina systems that function in a way as similar as possible to biological retinas. This paper proposes an automatic evolutionary multi-objective strategy based on the NSGA-II algorithm for tuning retina models. Four metrics were adopted to guide the algorithm in its search for the parameters that make a synthetic retina model's output best approximate real electrophysiological recordings. Results show that this procedure exhibits high flexibility when different trade-offs have to be considered during the design of customized neuroprostheses.
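
    The tuning loop can be outlined with the DEAP library's NSGA-II selection (a hedged sketch, not the authors' implementation: the retina simulator, its eight normalised parameters, and the four error metrics are stubbed with placeholders):

    # Hypothetical sketch: NSGA-II parameter search for a retina model using DEAP.
    import random

    from deap import algorithms, base, creator, tools

    N_PARAMS = 8    # assumed number of tunable retina-model parameters (normalised to [0, 1])

    def evaluate(individual):
        """Return four metrics (all minimised) comparing model output to recordings."""
        # Stub: in practice, run the retina simulator with `individual` as its parameters
        # and score its output against the electrophysiological recordings.
        return tuple(random.random() for _ in range(4))

    creator.create("FitnessMin4", base.Fitness, weights=(-1.0, -1.0, -1.0, -1.0))
    creator.create("Individual", list, fitness=creator.FitnessMin4)

    toolbox = base.Toolbox()
    toolbox.register("attr", random.uniform, 0.0, 1.0)
    toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr, N_PARAMS)
    toolbox.register("population", tools.initRepeat, list, toolbox.individual)
    toolbox.register("evaluate", evaluate)
    toolbox.register("mate", tools.cxSimulatedBinaryBounded, eta=20.0, low=0.0, up=1.0)
    toolbox.register("mutate", tools.mutPolynomialBounded, eta=20.0, low=0.0, up=1.0, indpb=0.1)
    toolbox.register("select", tools.selNSGA2)

    def run(mu=100, generations=50):
        pop = toolbox.population(n=mu)
        for ind in pop:
            ind.fitness.values = toolbox.evaluate(ind)
        for _ in range(generations):
            offspring = algorithms.varAnd(pop, toolbox, cxpb=0.9, mutpb=0.1)
            for ind in offspring:
                if not ind.fitness.valid:
                    ind.fitness.values = toolbox.evaluate(ind)
            pop = toolbox.select(pop + offspring, mu)   # NSGA-II environmental selection
        return pop

    if __name__ == "__main__":
        final_pop = run()
        front = tools.sortNondominated(final_pop, k=len(final_pop), first_front_only=True)[0]
        print(len(front), "non-dominated parameter sets")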

  20. Organizational Categories as Viewing Categories

    DEFF Research Database (Denmark)

    Mik-Meyer, Nanna

    This paper explores how two Danish rehabilitation organizations' textual guidelines for the assessment of clients' personality traits influence the actual evaluation of clients. The analysis will show how staff members produce institutional identities corresponding to organizational categories, which very often have little or no relevance for the clients evaluated. The goal of the article is to demonstrate how the institutional complex that frames the work of the organizations produces the client types pertaining to that organization. By applying the analytical strategy of institutional ethnography, I elucidate how the two rehabilitation organizations' local history, legislation, and the structural features of the present labour market and of social work result in a number of contradictions which make it difficult to deliver client-centred care. This exact goal is, according to the staff, one of the most…

  1. Glucose improves object-location binding in visual-spatial working memory

    OpenAIRE

    Stollery, Brian T.; Christian, Leonie M.

    2016-01-01

    Rationale There is evidence that glucose temporarily enhances cognition and that processes dependent on the hippocampus may be particularly sensitive. As the hippocampus plays a key role in binding processes, we examined the influence of glucose on memory for object-location bindings. Objective This study aims to study how glucose modifies performance on an object-location memory task, a task that draws heavily on hippocampal function. Methods Thirty-one participants received 30 g glucose or ...

  2. From objects to landmarks: the function of visual location information in spatial navigation

    OpenAIRE

    Edgar Chan; Oliver Baumann; Mark A. Bellgrove; Jason B. Mattingley

    2012-01-01

    Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of obj...

  3. Fourier Descriptors Based on the Structure of the Human Primary Visual Cortex with Applications to Object Recognition

    OpenAIRE

    Bohi, Amine; Prandi, Dario; Guis, Vincente; Bouchara, Frédéric; Gauthier, Jean-Paul

    2016-01-01

    In this paper we propose a supervised object recognition method using new global features and inspired by the model of the human primary visual cortex V1 as the semidiscrete roto-translation group $SE(2,N) = \mathbb{Z}_N \rtimes \mathbb{R}^2$. The proposed technique is based on generalized Fourier descriptors on the latter group, which are invariant to natural geometric transformations (rotations, translations). These descriptors are then used to feed an SVM classifier. We…
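
    For intuition only, the sketch below builds classical rotation-, translation- and scale-invariant Fourier descriptors of 2-D contours and feeds them to an SVM. This is a much simplified stand-in for the generalized descriptors on $SE(2,N)$ used in the paper; the synthetic ellipse/superellipse shapes and every parameter are assumptions made for the example.

    # Hedged sketch: classical contour Fourier descriptors + SVM (not the SE(2,N) descriptors).
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)

    def contour(shape, n=128):
        """Return a closed contour as complex samples, randomly rotated, shifted and scaled."""
        t = np.linspace(0, 2 * np.pi, n, endpoint=False)
        if shape == "ellipse":
            r = 1.0 / np.sqrt((np.cos(t) / 1.5) ** 2 + np.sin(t) ** 2)
        else:  # superellipse, roughly a rounded square
            r = (np.abs(np.cos(t)) ** 4 + np.abs(np.sin(t)) ** 4) ** (-1 / 4)
        phi = rng.uniform(0, 2 * np.pi)                     # random rotation
        z = r * np.exp(1j * (t + phi)) + (rng.normal() + 1j * rng.normal())
        return z * rng.uniform(0.5, 2.0)                    # random scale

    def fourier_descriptor(z, k=10):
        c = np.fft.fft(z)
        mags = np.abs(c)                                    # magnitudes: rotation/start-point invariant
        mags[0] = 0.0                                       # drop DC term: translation invariant
        mags /= np.abs(c[1])                                # normalise by first harmonic: scale invariant
        return np.concatenate([mags[1:1 + k], mags[-k:]])   # low-frequency magnitudes

    X, y = [], []
    for label, shape in enumerate(["ellipse", "square"]):
        for _ in range(100):
            X.append(fourier_descriptor(contour(shape)))
            y.append(label)
    X, y = np.array(X), np.array(y)

    clf = SVC(kernel="rbf", C=10.0).fit(X[::2], y[::2])     # train on half the samples
    print(f"hold-out accuracy: {clf.score(X[1::2], y[1::2]):.2f}")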

  4. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    Science.gov (United States)

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
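
    A generic objective of this kind can be written as follows; this is a sketch of the general form only, with illustrative weights, and the exact terms used in the paper may differ:

    \min_{D_0,\,\{D_c\},\,\{A_c\}} \;
        \sum_{c=1}^{C} \Big( \big\| X_c - [\,D_c \;\; D_0\,]\,A_c \big\|_F^2 + \lambda \,\| A_c \|_1 \Big)
        \; + \; \eta \sum_{c \neq c'} \big\| D_c^{\top} D_{c'} \big\|_F^2
        \; + \; \gamma \sum_{c=1}^{C} \big\| D_c^{\top} D_0 \big\|_F^2

    Here X_c collects the features of category c, D_c is its category-specific dictionary, D_0 is the shared dictionary, and A_c are the sparse codes; the last two terms penalize coherence between dictionaries, and a self-incoherence penalty of the form \| D^{\top} D - I \|_F^2 on each dictionary can be added in the same spirit.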

  5. GRAPES—Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain

    Science.gov (United States)

    Martin, Alex

    2016-01-01

    In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed. PMID:25968087

  6. GRAPES-Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain.

    Science.gov (United States)

    Martin, Alex

    2016-08-01

    In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed.

  7. The relation between crawling and 9-month-old infants' visual prediction abilities in spatial object processing.

    Science.gov (United States)

    Kubicek, Claudia; Jovanovic, Bianca; Schwarzer, Gudrun

    2017-06-01

    We examined whether 9-month-old infants' visual prediction abilities in the context of spatial object processing are related to their crawling ability. A total of 33 9-month-olds were tested; half of them had crawled for 7.6 weeks on average. A new visual prediction paradigm was developed during which a three-dimensional three-object array was presented in a live setting. During familiarization, the object array rotated back and forth along the vertical axis. While the array was moving, two target objects of it were briefly occluded from view and uncovered again as the array changed its direction of motion. During the test phase, the entire array was rotated by 90° and then rotated back and forth along the horizontal axis. The targets remained at the same position or were moved to a modified placement. We recorded infants' eye movements directed at the dynamically covered and uncovered target locations and analyzed infants' prediction rates. All infants showed higher prediction rates at test and when the targets' placement was modified. Most importantly, the results demonstrated that crawlers had higher prediction rates during test trials as compared with non-crawlers. Our study supports the assumption that crawling experience might enhance 9-month-old infants' ability to correctly predict complex object movement. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. An interactive visualization tool for the analysis of multi-objective embedded systems design space exploration

    NARCIS (Netherlands)

    Taghavi, T.; Pimentel, A.D.

    2011-01-01

    The design of today’s embedded systems involves a complex Design Space Exploration (DSE) process. Typically, multiple and conflicting criteria (objectives) should be optimized simultaneously such as performance, power, cost, etc. Usually, Multi-Objective Evolutionary Algorithms (MOEAs) are used to

  9. Object Manipulation and Motion Perception: Evidence of an Influence of Action Planning on Visual Processing

    Science.gov (United States)

    Lindemann, Oliver; Bekkering, Harold

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction. Action execution had to be delayed until the…

  10. Object manipulation and motion perception: Evidence of an influence of action planning on visual processing

    NARCIS (Netherlands)

    Lindemann, O.; Bekkering, H.

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction.

  11. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  12. A bilateral advantage for maintaining objects in visual short term memory.

    Science.gov (United States)

    Holt, Jessica L; Delvenne, Jean-François

    2015-01-01

    Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment

    Science.gov (United States)

    2015-12-01

    … movie clips for presentations, etc. For … will allow us in future to analyze videos of the eye/pupil, recorded with different brands of … indicate focal areas of better retinal sensitivity or be influenced by optimal placement of the central scotoma to maximize visual function.

  14. The Nigerian national blindness and visual impairment survey: Rationale, objectives and detailed methodology.

    Science.gov (United States)

    Dineen, Brendan; Gilbert, Clare E; Rabiu, Mansur; Kyari, Fatima; Mahdi, Abdull M; Abubakar, Tafida; Ezelum, Christian C; Gabriel, Entekume; Elhassan, Elizabeth; Abiose, Adenike; Faal, Hannah; Jiya, Jonathan Y; Ozemela, Chinenyem P; Lee, Pak Sang; Gudlavalleti, Murthy V S

    2008-09-22

    Despite having the largest population in Africa, Nigeria has no accurate population-based data to plan and evaluate eye care services. A national survey was undertaken to estimate the prevalence and determine the major causes of blindness and low vision. This paper presents the detailed methodology used during the survey. A nationally representative sample of persons aged 40 years and above was selected. Children aged 10-15 years and individuals aged … The methodology used was adequate to provide estimates on the prevalence and causes of blindness in Nigeria. The survey would also provide information on barriers to accessing services, quality of life of visually impaired individuals and also provide normative data for Nigerian eyes.

  15. Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment

    Science.gov (United States)

    2012-10-01

    … currently developing a lightweight, wearable, portable pupillometer that can deliver red, blue or white light … sports at both the school and professional level. Traumatic causes of visual damage can also be …

  16. Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography.

    Science.gov (United States)

    Kim, Kyoohyun; Kim, Kyung Sang; Park, Hyunjoo; Ye, Jong Chul; Park, Yongkeun

    2013-12-30

    3-D refractive index (RI) distribution is an intrinsic bio-marker for the chemical and structural information about biological cells. Here we develop an optical diffraction tomography technique for the real-time reconstruction of 3-D RI distribution, employing sparse angle illumination and a graphic processing unit (GPU) implementation. The execution time for the tomographic reconstruction is 0.21 s for 96³ voxels, which is 17 times faster than that of a conventional approach. We demonstrated the real-time visualization capability with imaging the dynamics of Brownian motion of an anisotropic colloidal dimer and the dynamic shape change in a red blood cell upon shear flow.

  17. Glucose improves object-location binding in visual-spatial working memory.

    Science.gov (United States)

    Stollery, Brian; Christian, Leonie

    2016-02-01

    There is evidence that glucose temporarily enhances cognition and that processes dependent on the hippocampus may be particularly sensitive. As the hippocampus plays a key role in binding processes, we examined the influence of glucose on memory for object-location bindings. This study aims to study how glucose modifies performance on an object-location memory task, a task that draws heavily on hippocampal function. Thirty-one participants received 30 g glucose or placebo in a single 1-h session. After seeing between 3 and 10 objects (words or shapes) at different locations in a 9 × 9 matrix, participants attempted to immediately reproduce the display on a blank 9 × 9 matrix. Blood glucose was measured before drink ingestion, mid-way through the session, and at the end of the session. Glucose significantly improves object-location binding (d = 1.08) and location memory (d = 0.83), but not object memory (d = 0.51). Increasing working memory load impairs object memory and object-location binding, and word-location binding is more successful than shape-location binding, but the glucose improvement is robust across all difficulty manipulations. Within the glucose group, higher levels of circulating glucose are correlated with better binding memory and remembering the locations of successfully recalled objects. The glucose improvements identified are consistent with a facilitative impact on hippocampal function. The findings are discussed in the context of the relationship between cognitive processes, hippocampal function, and the implications for glucose's mode of action.

  18. Using binocular rivalry to tag foreground sounds: Towards an objective visual measure for auditory multistability.

    Science.gov (United States)

    Einhäuser, Wolfgang; Thomassen, Sabine; Bendixen, Alexandra

    2017-01-01

    In binocular rivalry, paradigms have been proposed for unobtrusive moment-by-moment readout of observers' perceptual experience ("no-report paradigms"). Here, we take a first step to extend this concept to auditory multistability. Observers continuously reported which of two concurrent tone sequences they perceived in the foreground: high-pitch (1008 Hz) or low-pitch (400 Hz) tones. Interstimulus intervals were either fixed per sequence (Experiments 1 and 2) or random with tones alternating (Experiment 3). A horizontally drifting grating was presented to each eye; to induce binocular rivalry, gratings had distinct colors and motion directions. To associate each grating with one tone sequence, a pattern on the grating jumped vertically whenever the respective tone occurred. We found that the direction of the optokinetic nystagmus (OKN)-induced by the visually dominant grating-could be used to decode the tone (high/low) that was perceived in the foreground well above chance. This OKN-based readout improved after observers had gained experience with the auditory task (Experiments 1 and 2) and for simpler auditory tasks (Experiment 3). We found no evidence that the visual stimulus affected auditory multistability. Although decoding performance is still far from perfect, our paradigm may eventually provide a continuous estimate of the currently dominant percept in auditory multistability.

  19. The role of hemifield sector analysis in multifocal visual evoked potential objective perimetry in the early detection of glaucomatous visual field defects

    Directory of Open Access Journals (Sweden)

    Mousa MF

    2013-05-01

    Full Text Available Mohammad F Mousa,1 Robert P Cubbidge,2 Fatima Al-Mansouri,1 Abdulbari Bener3,4; 1Department of Ophthalmology, Hamad Medical Corporation, Doha, Qatar; 2School of Life and Health Sciences, Aston University, Birmingham, UK; 3Department of Medical Statistics and Epidemiology, Hamad Medical Corporation, and Department of Public Health, Weill Cornell Medical College, Doha, Qatar; 4Evidence for Population Health Unit, School of Epidemiology and Health Sciences, University of Manchester, Manchester, UK. Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study; normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests: one with the Humphrey Field Analyzer and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal-to-noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P < 0.001), with a 95% confidence interval of 2.82, 2.89 for the normal group; 2.25, 2.29 for the glaucoma suspect group; and 1.67, 1.73 for the glaucoma group. The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patients group (t-test, P < 0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P < 0.01), and only 1/11 pair was statistically significant in the normal group (t-test, P < 0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma was 97% and 86% …

  20. Comparing dogs and great apes in their ability to visually track object transpositions.

    Science.gov (United States)

    Rooijakkers, Eveline F; Kaminski, Juliane; Call, Josep

    2009-11-01

    Knowing that objects continue to exist after disappearing from sight and tracking invisible object displacements are two basic elements of spatial cognition. The current study compares dogs and apes in an invisible transposition task. Food was hidden under one of two cups in full view of the subject. After that both cups were displaced, systematically varying two main factors, whether cups were crossed during displacement and whether the cups were substituted by the other cup or instead cups were moved to new locations. While the apes were successful in all conditions, the dogs had a strong preference to approach the location where they last saw the reward, especially if this location remained filled. In addition, dogs seem to have special difficulties to track the reward when both containers crossed their path during displacement. These results confirm the substantial difference that exists between great apes and dogs with regard to mental representation abilities required to track the invisible displacements of objects.

  1. Motion-seeded object-based attention for dynamic visual imagery

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Kim, Kyungnam

    2017-05-01

    This paper describes a novel system that finds and segments "objects of interest" from dynamic imagery (video) that (1) processes each frame using an advanced motion algorithm that pulls out regions that exhibit anomalous motion, and (2) extracts the boundary of each object of interest using a biologically-inspired segmentation algorithm based on feature contours. The system uses a series of modular, parallel algorithms, which allows many complicated operations to be carried out by the system in a very short time, and can be used as a front-end to a larger system that includes object recognition and scene understanding modules. Using this method, we show 90% accuracy with fewer than 0.1 false positives per frame of video, which represents a significant improvement over detection using a baseline attention algorithm.
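
    The two-stage pipeline described above (anomalous-motion detection followed by boundary extraction) can be approximated, very loosely, with off-the-shelf OpenCV building blocks. The sketch below uses MOG2 background subtraction and contour extraction in place of the authors' advanced motion and feature-contour algorithms; the file name 'video.mp4', the thresholds and the minimum blob area are arbitrary assumptions.

    # Hedged sketch: generic motion-then-contour front end (not the authors' algorithms).
    import cv2

    cap = cv2.VideoCapture("video.mp4")
    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32, detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Step 1: regions whose motion deviates from the learned background model.
        motion_mask = bg.apply(frame)
        motion_mask = cv2.morphologyEx(motion_mask, cv2.MORPH_OPEN,
                                       cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
        # Step 2: extract the boundary of each candidate "object of interest".
        contours, _ = cv2.findContours(motion_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 100:      # discard tiny motion blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("objects of interest", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()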

  2. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent

    2014-01-01

    Object detection, recognition and pose estimation in 3D images have gained momentum due to availability of 3D sensors (RGB-D) and increase of large scale 3D data, such as city maps. The most popular approach is to extract and match 3D shape descriptors that encode local scene structure, but omits … For recognition, we propose a fast and effective correspondence matching using random sampling. For quantitative evaluation we construct a semi-synthetic benchmark dataset using a public 3D model dataset of 119 kitchen objects and another benchmark of challenging street-view images from 4 different cities. … In the experiments, our method utilises only a stereo view for training. As the result, with the kitchen objects dataset our method achieved an almost perfect recognition rate for a 10° camera view point change and nearly 90% for 20°, and for the street-view benchmarks it achieved 75% accuracy for 160 street-view images …

  3. Sleep deprivation impairs object-selective attention: a view from the ventral visual cortex.

    Directory of Open Access Journals (Sweden)

    Julian Lim

    Full Text Available BACKGROUND: Most prior studies on selective attention in the setting of total sleep deprivation (SD) have focused on behavior or activation within fronto-parietal cognitive control areas. Here, we evaluated the effects of SD on the top-down biasing of activation of ventral visual cortex and on functional connectivity between cognitive control and other brain regions. METHODOLOGY/PRINCIPAL FINDINGS: Twenty-three healthy young adult volunteers underwent fMRI after a normal night of sleep (RW) and after sleep deprivation in a counterbalanced manner while performing a selective attention task. During this task, pictures of houses or faces were randomly interleaved among scrambled images. Across different blocks, volunteers responded to house but not face pictures, face but not house pictures, or passively viewed pictures without responding. The appearance of task-relevant pictures was unpredictable in this paradigm. SD resulted in less accurate detection of target pictures without affecting the mean false alarm rate or response time. In addition to a reduction of fronto-parietal activation, attending to houses strongly modulated parahippocampal place area (PPA) activation during RW, but this attention-driven biasing of PPA activation was abolished following SD. Additionally, SD resulted in a significant decrement in functional connectivity between the PPA and two cognitive control areas, the left intraparietal sulcus and the left inferior frontal lobe. CONCLUSIONS/SIGNIFICANCE: SD impairs selective attention as evidenced by reduced selectivity in PPA activation. Further, reduction in fronto-parietal and ventral visual task-related activation suggests that it also affects sustained attention. Reductions in functional connectivity may be an important additional imaging parameter to consider in characterizing the effects of sleep deprivation on cognition.

  4. Sleep Deprivation Impairs Object-Selective Attention: A View from the Ventral Visual Cortex

    Science.gov (United States)

    Lim, Julian; Tan, Jiat Chow; Parimal, Sarayu; Dinges, David F.; Chee, Michael W. L.

    2010-01-01

    Background Most prior studies on selective attention in the setting of total sleep deprivation (SD) have focused on behavior or activation within fronto-parietal cognitive control areas. Here, we evaluated the effects of SD on the top-down biasing of activation of ventral visual cortex and on functional connectivity between cognitive control and other brain regions. Methodology/Principal Findings Twenty-three healthy young adult volunteers underwent fMRI after a normal night of sleep (RW) and after sleep deprivation in a counterbalanced manner while performing a selective attention task. During this task, pictures of houses or faces were randomly interleaved among scrambled images. Across different blocks, volunteers responded to house but not face pictures, face but not house pictures, or passively viewed pictures without responding. The appearance of task-relevant pictures was unpredictable in this paradigm. SD resulted in less accurate detection of target pictures without affecting the mean false alarm rate or response time. In addition to a reduction of fronto-parietal activation, attending to houses strongly modulated parahippocampal place area (PPA) activation during RW, but this attention-driven biasing of PPA activation was abolished following SD. Additionally, SD resulted in a significant decrement in functional connectivity between the PPA and two cognitive control areas, the left intraparietal sulcus and the left inferior frontal lobe. Conclusions/Significance SD impairs selective attention as evidenced by reduced selectivity in PPA activation. Further, reduction in fronto-parietal and ventral visual task-related activation suggests that it also affects sustained attention. Reductions in functional connectivity may be an important additional imaging parameter to consider in characterizing the effects of sleep deprivation on cognition. PMID:20140099

  5. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and where strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  6. A role for the CAMKK pathway in visual object recognition memory.

    Science.gov (United States)

    Tinsley, Chris J; Narduzzo, Katherine E; Brown, Malcolm W; Warburton, E Clea

    2012-03-01

    The role of the CAMKK pathway in object recognition memory was investigated. Rats' performance in a preferential object recognition test was examined after local infusion into the perirhinal cortex of the CAMKK inhibitor STO-609. STO-609 infused either before or immediately after acquisition impaired memory tested after a 24 h but not a 20-min delay. Memory was not impaired when STO-609 was infused 20 min after acquisition. The expression of a downstream reaction product of CAMKK was measured by immunohistochemical staining for phospho-CAMKI(Thr177) at 10, 40, 70, and 100 min following the viewing of novel and familiar images of objects. Processing familiar images resulted in more pCAMKI stained neurons in the perirhinal cortex than processing novel images at the 10- and 40-min delays. Prior infusion of STO-609 caused a reduction in pCAMKI stained neurons in response to viewing either novel or familiar images, consistent with its role as an inhibitor of CAMKK. The results establish that the CAMKK pathway within the perirhinal cortex is important for the consolidation of object recognition memory. The activation of pCAMKI after acquisition is earlier than previously reported for pCAMKII. Copyright © 2011 Wiley Periodicals, Inc.

  7. Reach on sound: a key to object permanence in visually impaired children.

    Science.gov (United States)

    Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto

    2011-04-01

    The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and gives information about his/her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem"--i.e., they acquired the ability to attribute an external object with identity and substance even when it manifested its presence through sound only--and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities presented poor performances on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem". Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Object Selection Costs in Visual Working Memory: A Diffusion Model Analysis of the Focus of Attention

    Science.gov (United States)

    Sewell, David K.; Lilburn, Simon D.; Smith, Philip L.

    2016-01-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can…

  9. Young infants' visual fixation patterns in addition and subtraction tasks support an object tracking account.

    Science.gov (United States)

    Bremner, J Gavin; Slater, Alan M; Hayes, Rachel A; Mason, Uschi C; Murphy, Caroline; Spring, Jo; Draper, Lucinda; Gaskell, David; Johnson, Scott P

    2017-10-01

    Investigating infants' numerical ability is crucial to identifying the developmental origins of numeracy. Wynn (1992) claimed that 5-month-old infants understand addition and subtraction as indicated by longer looking at outcomes that violate numerical operations (i.e., 1+1=1 and 2-1=2). However, Wynn's claim was contentious, with others suggesting that her results might reflect a familiarity preference for the initial array or that they could be explained in terms of object tracking. To cast light on this controversy, Wynn's conditions were replicated with conventional looking time supplemented with eye-tracker data. In the incorrect outcome of 2 in a subtraction event (2-1=2), infants looked selectively at the incorrectly present object, a finding that is not predicted by an initial array preference account or a symbolic numerical account but that is consistent with a perceptual object tracking account. It appears that young infants can track at least one object over occlusion, and this may form the precursor of numerical ability. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  10. Object Processing in Visual Perception and Action in Children and Adults

    Science.gov (United States)

    Schum, Nina; Franz, Volker H.; Jovanovic, Bianca; Schwarzer, Gudrun

    2012-01-01

    We investigated whether 6- and 7-year-olds and 9- and 10-year-olds, as well as adults, process object dimensions independent of or in interaction with one another in a perception and action task by adapting Ganel and Goodale's method for testing adults ("Nature", 2003, Vol. 426, pp. 664-667). In addition, we aimed to confirm Ganel and Goodale's…

  11. Joint attention without gaze following: human infants and their parents coordinate visual attention to objects through eye-hand coordination.

    Directory of Open Access Journals (Sweden)

    Chen Yu

    Full Text Available The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner.

  12. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response.

    Science.gov (United States)

    Ales, Justin M; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M

    2012-09-29

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying ("sweeping") the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay.

  13. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    Science.gov (United States)

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  14. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  15. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  16. The Nigerian national blindness and visual impairment survey: Rationale, objectives and detailed methodology

    Directory of Open Access Journals (Sweden)

    Abiose Adenike

    2008-09-01

    Full Text Available Abstract Background: Despite having the largest population in Africa, Nigeria has no accurate population-based data to plan and evaluate eye care services. A national survey was undertaken to estimate the prevalence and determine the major causes of blindness and low vision. This paper presents the detailed methodology used during the survey. Methods: A nationally representative sample of persons aged 40 years and above was selected. Children aged 10–15 years and individuals aged … Discussion: The field work for the study was completed in 30 months over the period 2005–2007 and covered 305 clusters across the entire country. Concurrently, persons 40+ years were examined to form a normative database. Analysis of the data is currently underway. Conclusion: The methodology used was robust and adequate to provide estimates on the prevalence and causes of blindness in Nigeria. The survey would also provide information on barriers to accessing services, quality of life of visually impaired individuals and also provide normative data for Nigerian eyes.

  17. Real-Time Propagation Measurement System and Scattering Object Identification by 3D Visualization by Using VRML for ETC System

    Directory of Open Access Journals (Sweden)

    Ando Tetsuo

    2009-01-01

    Full Text Available In the early deployment of the electric toll collecting (ETC) system, multipath interference has caused the malfunction of the system. Therefore, radio absorbers are installed in the toll gate to suppress the scattering effects. This paper presents a novel radio propagation measurement system using beamforming with an 8-element antenna array to examine the power intensity distribution of the ETC gate in real time without closing the toll gates that are already open for traffic. In addition, an identification method of the individual scattering objects with 3D visualization by using virtual reality modeling language will be proposed, and the validity is also demonstrated by applying it to the measurement data.
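
    As background for the beamforming step, the following Python sketch scans a delay-and-sum (Bartlett) power profile over angle for an 8-element half-wavelength uniform linear array, using synthetic snapshots that contain a direct and a scattered path. The 5.8 GHz carrier, the array geometry and the simulated scene are assumptions; the actual real-time measurement system is not reproduced here.

    # Hedged sketch: delay-and-sum beamforming power-versus-angle scan for an 8-element ULA.
    import numpy as np

    c = 3e8
    f = 5.8e9                      # assumed carrier in the 5.8 GHz DSRC band
    lam = c / f
    M = 8                          # number of array elements
    d = lam / 2                    # half-wavelength spacing
    pos = np.arange(M) * d

    def steering(theta_deg):
        theta = np.deg2rad(theta_deg)
        return np.exp(-1j * 2 * np.pi * pos * np.sin(theta) / lam)

    # Synthetic snapshots: a direct path at -10 deg and a weaker scattered path at +25 deg.
    rng = np.random.default_rng(1)
    n_snap = 256
    s1 = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
    s2 = 0.4 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
    noise = 0.1 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
    X = np.outer(steering(-10), s1) + np.outer(steering(25), s2) + noise

    R = X @ X.conj().T / n_snap    # spatial covariance estimate

    angles = np.arange(-90, 91)
    power = np.array([np.real(steering(a).conj() @ R @ steering(a)) for a in angles])
    for a in (-10, 0, 25):
        print(f"{a:+4d} deg : {10 * np.log10(power[a + 90] / power.max()):6.1f} dB")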

  18. VRP09 Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment

    Science.gov (United States)

    2014-10-01

    … 505. doi: 10.1097/ICU.0b013e328359045e. Review. PubMed PMID: 23047167. Tasks 2 and 3: In normal eyes, as … function, leading to rehabilitation and treatment when appropriate. The availability of the objective tests …

  19. To bind or not to bind, that's the wrong question: Features and objects coexist in visual short-term memory.

    Science.gov (United States)

    Geigerman, Shriradha; Verhaeghen, Paul; Cerella, John

    2016-06-01

    In three experiments, we investigated whether features and whole-objects can be represented simultaneously in visual short-term memory (VSTM). Participants were presented with a memory set of colored shapes; we probed either for the constituent features or for the whole object, and analyzed retrieval dynamics (cumulative response time distributions). In our first experiment, we used whole-object probes that recombined features from the memory display; we found that subjects' data conformed to a kitchen-line model, showing that they used whole-object representations for the matching process. In the second experiment, we encouraged independent-feature representations by using probes that used features not present in the memory display; subjects' data conformed to the race-model inequality, showing that they used independent-feature representations for the matching process. In a final experiment, we used both types of probes; subjects now used both types of representations, depending on the nature of the probe. Combined, our three experiments suggest that both feature and whole-object representations can coexist in VSTM. Copyright © 2016 Elsevier B.V. All rights reserved.
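
    The retrieval-dynamics analysis rests on comparing cumulative response-time distributions. As a rough illustration, the sketch below computes empirical CDFs for two single-feature conditions and a whole-object condition and checks the race-model inequality F_AB(t) <= F_A(t) + F_B(t). The simulated lognormal response times and the evaluation grid are invented for the example and do not reproduce the experiments.

    # Hedged sketch: race-model inequality check on simulated response times.
    import numpy as np

    rng = np.random.default_rng(2)
    rt_feature_a = rng.lognormal(6.4, 0.25, 500)   # e.g., colour-only probe (ms), simulated
    rt_feature_b = rng.lognormal(6.5, 0.25, 500)   # e.g., shape-only probe (ms), simulated
    rt_object    = rng.lognormal(6.3, 0.25, 500)   # whole-object (redundant) probe (ms), simulated

    def ecdf(samples, t):
        """Empirical cumulative distribution evaluated at times t."""
        return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

    t = np.linspace(300, 1200, 10)                 # evaluation points in ms
    violation = ecdf(rt_object, t) - (ecdf(rt_feature_a, t) + ecdf(rt_feature_b, t))

    for ti, vi in zip(t, violation):
        flag = "violates race model" if vi > 0 else "consistent with race model"
        print(f"t = {ti:6.0f} ms : F_AB - (F_A + F_B) = {vi:+.3f}  -> {flag}")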

  20. Steady-state multifocal visual evoked potential (ssmfVEP) using dartboard stimulation as a possible tool for objective visual field assessment.

    Science.gov (United States)

    Horn, Folkert K; Selle, Franziska; Hohberger, Bettina; Kremers, Jan

    2016-02-01

    To investigate whether a conventional, monitor-based multifocal visual evoked potential (mfVEP) system can be used to record steady-state mfVEP (ssmfVEP) in healthy subjects and to study the effects of temporal frequency, electrode configuration and alpha waves. Multifocal pattern reversal VEP measurements were performed at 58 dartboard fields using VEP recording equipment. The responses were measured using m-sequences with four pattern reversals per m-step. Temporal frequencies were varied between 6 and 15 Hz. Recordings were obtained from nine normal subjects with a cross-shaped, four-electrode device (two additional channels were derived). Spectral analyses were performed on the responses at all locations. The signal to noise ratio (SNR) was computed for each response using the signal amplitude at the reversal frequency and the noise at the neighbouring frequencies. Most responses in the ssmfVEP were significantly above noise. The SNR was largest for an 8.6-Hz reversal frequency. The individual alpha electroencephalogram (EEG) did not strongly influence the results. The percentage of the records in which each of the 6 channels had the largest SNR was between 10.0 and 25.2 %. Our results in normal subjects indicate that reliable mfVEP responses can be achieved by steady-state stimulation using a conventional dartboard stimulator and multi-channel electrode device. The ssmfVEP may be useful for objective visual field assessment as spectrum analysis can be used for automated evaluation of responses. The optimal reversal frequency is 8.6 Hz. Alpha waves have only a minor influence on the analysis. Future studies must include comparisons with conventional mfVEP and psychophysical visual field tests.
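
    The SNR computation described above (signal amplitude at the reversal frequency against the noise at neighbouring frequencies) can be sketched as follows; the synthetic EEG trace, the 600 Hz sampling rate and the exact choice of neighbouring bins are assumptions made only so the example runs.

    # Hedged sketch: SNR of a steady-state response at the pattern-reversal frequency.
    import numpy as np

    fs = 600.0                      # assumed sampling rate (Hz)
    f_rev = 8.6                     # reversal frequency reported as optimal (Hz)
    t = np.arange(0, 10, 1 / fs)    # 10 s of data

    rng = np.random.default_rng(3)
    eeg = 0.5 * np.sin(2 * np.pi * f_rev * t) + rng.standard_normal(t.size)  # signal + noise

    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    sig_bin = np.argmin(np.abs(freqs - f_rev))
    neighbours = np.r_[sig_bin - 5:sig_bin - 1, sig_bin + 2:sig_bin + 6]  # skip adjacent bins
    snr = spectrum[sig_bin] / spectrum[neighbours].mean()
    print(f"response at {freqs[sig_bin]:.2f} Hz, SNR = {snr:.1f}")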

  1. Category of A_infinity-categories

    OpenAIRE

    Lyubashenko, Volodymyr

    2002-01-01

    We define natural A_infinity-transformations and construct A_infinity-category of A_infinity-functors. The notion of non-strict units in an A_infinity-category is introduced. The 2-category of (unital) A_infinity-categories, (unital) functors and transformations is described.

  2. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    Science.gov (United States)

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.

  3. How Fast Do Objects Fall in Visual Memory? Uncovering the Temporal and Spatial Features of Representational Gravity.

    Science.gov (United States)

    De Sá Teixeira, Nuno

    2016-01-01

    Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object's offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth's gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects' location.
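
    The reported rate (about 3 pixels of downward drift per doubling of the time until response) can be summarised, as a rough reading of the abstract rather than the authors' fitted model, by a logarithmic drift with an arbitrary reference time $t_0$:

    \Delta y(t) \;\approx\; 3\,\text{px} \times \log_2\!\left(\frac{t}{t_0}\right)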

  4. The impact of the lateral geniculate nucleus and corticogeniculate interactions on efficient coding and higher-order visual object processing.

    Science.gov (United States)

    Zabbah, Sajjad; Rajaei, Karim; Mirzaei, Amin; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-08-01

    Principles of efficient coding suggest that the peripheral units of any sensory processing system are designed for efficient coding. The function of the lateral geniculate nucleus (LGN) as an early stage in the visual system is not well understood. Some findings indicate that similar to the retina that decorrelates input signals spatially, the LGN tends to perform a temporal decorrelation. There is evidence suggesting that corticogeniculate connections may account for this decorrelation in the LGN. In this study, we propose a computational model based on biological evidence reported by Wang et al. (2006), who demonstrated that the influence pattern of V1 feedback is phase-reversed. The output of our model shows how corticogeniculate connections decorrelate LGN responses and make an efficient representation. We evaluated our model using criteria that have previously been tested on LGN neurons through cell recording experiments, including sparseness, entropy, power spectra, and information transfer. We also considered the role of the LGN in higher-order visual object processing, comparing the categorization performance of human subjects with a cortical object recognition model in the presence and absence of our LGN input-stage model. Our results show that the new model that considers the role of the LGN, more closely follows the categorization performance of human subjects. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. How Fast Do Objects Fall in Visual Memory? Uncovering the Temporal and Spatial Features of Representational Gravity.

    Directory of Open Access Journals (Sweden)

    Nuno De Sá Teixeira

    Full Text Available Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object's offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth's gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects' location.

  6. To call a cloud ‘cirrus’: sound symbolism in names for categories or items

    Science.gov (United States)

    Sučević, Jelena; Styles, Suzy J.

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make-up of two categories of unfamiliar objects (‘alien life forms’), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgments of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing. PMID:28674648

  7. Comparison of Objective Measures for Predicting Perceptual Balance and Visual Aesthetic Preference

    Science.gov (United States)

    Hübner, Ronald; Fillinger, Martin G.

    2016-01-01

    The aesthetic appreciation of a picture largely depends on the perceptual balance of its elements. The underlying mental mechanisms of this relation, however, are still poorly understood. For investigating these mechanisms, objective measures of balance have been constructed, such as the Assessment of Preference for Balance (APB) score of Wilson and Chatterjee (2005). In the present study we examined the APB measure and compared it to an alternative measure (DCM; Deviation of the Center of “Mass”) that represents the center of perceptual “mass” in a picture and its deviation from the geometric center. Additionally, we applied measures of homogeneity and of mirror symmetry. In a first experiment participants had to rate the balance and symmetry of simple pictures, whereas in a second experiment different participants rated their preference (liking) for these pictures. In a third experiment participants rated the balance as well as the preference of new pictures. Altogether, the results show that DCM scores accounted better for balance ratings than APB scores, whereas the opposite held with respect to preference. Detailed analyses revealed that these results were due to the fact that aesthetic preference does not only depend on balance but also on homogeneity, and that the APB measure takes this feature into account. PMID:27014143

  8. Category superiority effects in young and elderly adults.

    Science.gov (United States)

    Sharps, M J

    1997-06-01

    Recent research indicates that some elderly persons experience an age-related visual processing deficit, for which they may attempt to compensate through the use of relational information. This hypothesis was tested, using the category superiority effect as a model system. In studies of young adults, the category superiority effect has been shown to be confined to relatively abstract stimulus materials such as verbal items, and to be absent for more concrete representations such as photographs of actual objects. However, it was predicted that, contrary to the data from young adults, a category superiority effect would be present in elderly adults for both verbal and pictorial stimuli, because elderly people would be expected to use category information to compensate for imageric deficits. This prediction was confirmed, consistent with the hypothesis.

  9. The effects of object height and visual information on the control of obstacle crossing during locomotion in healthy older adults.

    Science.gov (United States)

    Kunimune, Sho; Okada, Shuichi

    2017-06-01

    In order to safely avoid obstacles, humans must rely on visual information regarding the position and shape of the object obtained in advance. The present study aimed to reveal the duration of obstacle visibility necessary for appropriate visuomotor control during obstacle avoidance in healthy older adults. Participants included 13 healthy young women (mean age: 21.5 ± 1.4 years) and 15 healthy older women (mean age: 68.5 ± 3.5 years) who were instructed to cross over an obstacle along a pressure-sensitive pathway at a self-selected pace while wearing liquid crystal shutter goggles. Participants were evaluated during three visual occlusion conditions: (i) full visibility, (ii) occlusion at T-1 step (T: time of obstacle crossing), and (iii) occlusion at T-2 steps. Toe clearances of both the lead and trail limb (LTC and TTC) were calculated. LTC in the occlusion at T-2 steps condition was significantly greater than that in other conditions. Furthermore, a significant correlation was observed between LTC and TTC in both groups, regardless of the condition or obstacle height. In the older adult group alone, step width in the occlusion at T-2 steps condition increased relative to that in full visibility conditions. The results of the present study suggest that there is no difference in the characteristics of visuomotor control for appropriate obstacle crossing based on age. However, older adults may exhibit increased dependence on visual information for postural stability; they may also need an increased step width when lacking information regarding their positional relationship to obstacles. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Acquiring auditory and phonetic categories

    NARCIS (Netherlands)

    Goudbeek, M.B.; Smits, R.; Swingley, D.; Cutler, A.

    2005-01-01

    Infants' first steps in language acquisition involve learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Categorization research in the visual modality has shown that adults

  11. Structural synaptic remodeling in the perirhinal cortex of adult and old rats following object-recognition visual training.

    Science.gov (United States)

    Platano, D; Bertoni-Freddari, C; Fattoretti, P; Giorgetti, B; Grossi, Y; Balietti, M; Casoli, T; Di Stefano, G; Aicardi, G

    2006-01-01

    The ultrastructural features of layer II synapses in the perirhinal cortex of adult (4- to 6-month-old) and old (25- to 27-month-old) rats exposed to a six-session object recognition visual training were investigated by morphometric methods. The comparative analysis showed a higher synaptic numeric density, a lower synaptic average area, and a lower percentage of megasynapses (S > 0.5 microm2) in old trained rats versus controls, and a higher percentage of small (S < 0.15 microm2) junctions in adult trained rats versus controls. The more marked synaptic remodeling underlying memory consolidation in the perirhinal cortex of old rats might reflect a pre-existing lower dynamic status.

  12. Recurrent processing during object recognition

    Directory of Open Access Journals (Sweden)

    Randall C. O'Reilly

    2013-04-01

    Full Text Available How does the brain learn to recognize objects visually, and perform this difficult feat robustly in the face of many sources of ambiguity and variability? We present a computational model based on the biology of the relevant visual pathways that learns to reliably recognize 100 different object categories in the face of naturally occurring variability in location, rotation, size, and lighting. The model exhibits robustness to highly ambiguous, partially occluded inputs. Both the unified, biologically plausible learning mechanism and the robustness to occlusion derive from the role that recurrent connectivity and recurrent processing mechanisms play in the model. Furthermore, this interaction of recurrent connectivity and learning predicts that high-level visual representations should be shaped by error signals from nearby, associated brain areas over the course of visual learning. Consistent with this prediction, we show how semantic knowledge about object categories changes the nature of their learned visual representations, as well as how this representational shift supports the mapping between perceptual and conceptual knowledge. Altogether, these findings support the potential importance of ongoing recurrent processing throughout the brain's visual system and suggest ways in which object recognition can be understood in terms of interactions within and between processes over time.
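
    As a toy illustration of the general claim that recurrent (attractor-like) processing can clean up partially occluded inputs, the sketch below settles a Hopfield-style network onto a stored pattern from an incomplete probe. It is not the authors' model; the patterns, network size, and number of iterations are arbitrary.

      import numpy as np

      rng = np.random.default_rng(1)
      patterns = rng.choice([-1, 1], size=(3, 64))        # three stored "object" codes
      W = (patterns.T @ patterns) / patterns.shape[1]     # Hebbian outer-product weights
      np.fill_diagonal(W, 0.0)

      probe = patterns[0].astype(float)
      probe[:24] = 0.0                                    # "occlude" part of the input

      state = probe.copy()
      for _ in range(10):                                 # recurrent settling
          state = np.sign(W @ state)
          state[state == 0] = 1.0

      overlaps = patterns @ state / patterns.shape[1]     # close to 1.0 for the recovered pattern
      print("overlap with each stored pattern:", np.round(overlaps, 2))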

  13. Visual cognition in social insects.

    Science.gov (United States)

    Avarguès-Weber, Aurore; Deisig, Nina; Giurfa, Martin

    2011-01-01

    Visual learning admits different levels of complexity, from the formation of a simple associative link between a visual stimulus and its outcome, to more sophisticated performances, such as object categorization or rules learning, that allow flexible responses beyond simple forms of learning. Not surprisingly, higher-order forms of visual learning have been studied primarily in vertebrates with larger brains, while simple visual learning has been the focus in animals with small brains such as insects. This dichotomy has recently changed as studies on visual learning in social insects have shown that these animals can master extremely sophisticated tasks. Here we review a spectrum of visual learning forms in social insects, from color and pattern learning, visual attention, and top-down image recognition, to interindividual recognition, conditional discrimination, category learning, and rule extraction. We analyze the necessity and sufficiency of simple associations to account for complex visual learning in Hymenoptera and discuss possible neural mechanisms underlying these visual performances.

  14. Open-ended category learning for language acquisition

    Science.gov (United States)

    Seabra Lopes, Luis; Chauhan, Aneesh

    2008-12-01

    Motivated by the need to support language-based communication between robots and their human users, as well as grounded symbolic reasoning, this paper presents a learning architecture that can be used by robotic agents for long-term and open-ended category acquisition. To be more adaptive and to improve learning performance as well as memory usage, this learning architecture includes a metacognitive processing component. Multiple object representations and multiple classifiers and classifier combinations are used. At the object level, the main similarity measure is based on a multi-resolution matching algorithm. Categories are represented as sets of known instances. In this instance-based approach, storing and forgetting rules optimise memory usage. Classifier combinations are based on majority voting and the Dempster-Shafer evidence theory. All learning computations are carried out during the normal execution of the agent, which allows continuous monitoring of the performance of the different classifiers. The measured classification successes of the individual classifiers support an attentional selection mechanism, through which classifier combinations are dynamically reconfigured and a specific classifier is chosen to predict the category of a new unseen object. A simple physical agent, incorporating these learning capabilities, is used to test the approach. A long-term experiment was carried out having in mind the open-ended nature of category learning. With the help of a human mediator, the agent incrementally learned 68 categories of real-world objects visually perceivable through an inexpensive camera. Various aspects of the approach are evaluated through systematic experiments.
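
    To give a concrete, generic flavour of instance-based categorisation with several classifiers combined by majority voting (only one of the combination schemes mentioned above; the Dempster-Shafer variant and the storing/forgetting rules are omitted), here is a small sketch with invented feature "views" and category names.

      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(2)
      categories = ["cup", "ball", "book"]
      labels = np.repeat(categories, 5)                   # five stored instances per category

      def make_view(offset):
          """One hypothetical feature view: category-dependent cluster centres plus noise."""
          centres = {c: i * offset for i, c in enumerate(categories)}
          return np.array([centres[l] + rng.normal(size=4) for l in labels])

      memory = [(make_view(3.0), labels), (make_view(5.0), labels)]   # two views = two classifiers

      def nearest_instance_label(query, instances, instance_labels):
          """The nearest stored instance decides (instance-based categorisation)."""
          return instance_labels[int(np.argmin(np.linalg.norm(instances - query, axis=1)))]

      def predict(query_views):
          votes = [nearest_instance_label(q, inst, lab) for q, (inst, lab) in zip(query_views, memory)]
          return Counter(votes).most_common(1)[0][0], votes

      # An unseen object whose features fall near the "ball" clusters in both views.
      query = [np.full(4, 3.0) + 0.2 * rng.normal(size=4), np.full(4, 5.0) + 0.2 * rng.normal(size=4)]
      print(predict(query))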

  15. The Ungraded Derived Category

    OpenAIRE

    Stai, Torkil Utvik

    2012-01-01

    By means of the ungraded derived category we prove that the orbit category of the bounded derived category of an iterated tilted algebra with respect to translation is triangulated in such a way that the canonical functor from the bounded derived category to the orbit category becomes a triangle functor.

  16. Accuracy of Dolphin visual treatment objective (VTO) prediction software on class III patients treated with maxillary advancement and mandibular setback.

    Science.gov (United States)

    Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M

    2016-12-01

    Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of +/- 2 mm in the X-axis at most landmarks. The lower lip predictions were most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be used for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.

  17. Visual discrimination of rotated 3D objects in Malawi cichlids (Pseudotropheus sp.): a first indication for form constancy in fishes.

    Science.gov (United States)

    Schluessel, V; Kraniotakes, H; Bleckmann, H

    2014-03-01

    Fish move in a three-dimensional environment in which it is important to discriminate between stimuli varying in colour, size, and shape. It is also advantageous to be able to recognize the same structures or individuals when presented from different angles, such as back to front or front to side. This study assessed visual discrimination abilities of rotated three-dimensional objects in eight individuals of Pseudotropheus sp. using various plastic animal models. All models were displayed in two choice experiments. After successful training, fish were presented in a range of transfer tests with objects rotated in the same plane and in space by 45° and 90° to the side or to the front. In one experiment, models were additionally rotated by 180°, i.e., shown back to front. Fish showed quick associative learning and with only one exception successfully solved and finished all experimental tasks. These results provide first evidence for form constancy in this species and in fish in general. Furthermore, Pseudotropheus seemed to be able to categorize stimuli; a range of turtle and frog models were recognized independently of colour and minor shape variations. Form constancy and categorization abilities may be important for behaviours such as foraging, recognition of predators, and conspecifics as well as for orienting within habitats or territories.

  18. Structural similarity and category-specificity: a refined account.

    Science.gov (United States)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationship between blood flow and structural similarity in areas involved in visual object recognition. Contrary to this expectation, we report a negative relationship in that identification of articles of clothing causes more extensive activation than identification of vegetables/fruit and animals even though items from the categories of animals and vegetables/fruit are rated as more structurally similar than items from the category of articles of clothing. Given that this pattern cannot be explained in terms of a tradeoff between activation and accuracy, we interpret these findings within a model where the matching of visual forms to memory incorporates two operations: (i) the integration of stored object features into whole object representations (integral units), and (ii) the competition between activated integral units for selection (i.e. identification). In addition, we suggest that these operations are differentially affected by structural similarity in that high structural similarity may be beneficial for the integration of stored features into integral units, thus explaining the greater activation found with articles of clothing, whereas it may be harmful for the selection process proper because a greater range of candidate integral units will be activated and compete for selection, thus explaining the higher error rate associated with animals. We evaluate the model based on previous evidence from both normal subjects and patients with category-specific disorders and argue that this model can help reconcile otherwise conflicting data.
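
    One way to picture the proposed competition between activated integral units is a simple evidence race among stored templates, in which closely matched competitors (high structural similarity) make selection errors more likely. The sketch below is only a schematic of that verbal idea, not the model tested in the study; rates, noise, and thresholds are made up.

      import numpy as np

      def race_to_threshold(match_strengths, threshold=1.0, rate=0.05, noise=0.02, seed=0):
          """Accumulate noisy evidence for each stored representation until one crosses threshold.
          Returns (index of the winning representation, number of steps taken)."""
          rng = np.random.default_rng(seed)
          evidence = np.zeros(len(match_strengths))
          step = 0
          while evidence.max() < threshold and step < 10_000:
              evidence += rate * np.asarray(match_strengths) + rng.normal(0.0, noise, evidence.size)
              step += 1
          return int(np.argmax(evidence)), step

      # Distinct competitors (low structural similarity) vs. close competitors (high similarity).
      print("distinct competitors:", race_to_threshold([0.9, 0.3, 0.2]))
      print("similar competitors :", race_to_threshold([0.9, 0.85, 0.8]))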

  19. Comparison of visual and objective quantification of elbow and shoulder movement in children with obstetric brachial plexus palsy

    Directory of Open Access Journals (Sweden)

    Galea Mary

    2006-12-01

    Full Text Available Abstract Background The Active Movement Scale is a frequently used outcome measure for children with obstetric brachial plexus palsy (OBPP). Clinicians observe upper limb movements while the child is playing and quantify them on an 8-point scale. This scale has acceptable reliability; however, it is not known whether it accurately depicts the movements observed. In this study, therapist-rated Active Movement Scale grades were compared with objectively-quantified range of elbow flexion and extension and shoulder abduction and flexion in children with OBPP. These movements were chosen as they primarily assess the C5, C6 and C7 nerve roots, the most frequently involved in OBPP. Objective quantification of elbow and shoulder movements was undertaken by two-dimensional motion analysis, using the v-scope. Methods Young children diagnosed with OBPP were recruited from the Royal Children's Hospital (Melbourne, Australia) Brachial Plexus registry. They participated in one measurement session where an experienced paediatric physiotherapist facilitated maximal elbow flexion and extension, shoulder abduction and extension through play, and quantified them on the Active Movement Scale. Two-dimensional motion analysis captured the same movements in degrees, which were then converted into Active Movement Score grades using normative reference data. The agreement between the objectively-quantified and therapist-rated grades was determined using percentage agreement and Kappa statistics. Results Thirty children with OBPP participated in the study. All were able to perform elbow and shoulder movements against gravity. Active Movement Score grades ranged from 5 to 7. Two-dimensional motion analysis revealed that full range of movement at the elbow and shoulder was rarely achieved. There was moderate percentage agreement between the objectively-quantified and therapist-rated methods of movement assessment; however, the therapist frequently over-estimated the range of
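
    For readers unfamiliar with the agreement statistics used here, the snippet below shows how percentage agreement and Cohen's kappa are typically computed for two raters assigning ordinal grades. The 3x3 table of counts is made up and does not reproduce the study's data.

      import numpy as np

      def percent_agreement_and_kappa(confusion):
          """confusion[i, j] = number of children given grade i by rater A and grade j by rater B."""
          confusion = np.asarray(confusion, dtype=float)
          n = confusion.sum()
          observed = np.trace(confusion) / n
          expected = np.sum(confusion.sum(axis=1) * confusion.sum(axis=0)) / n ** 2
          kappa = (observed - expected) / (1.0 - expected)
          return observed, kappa

      # Hypothetical counts for grades 5-7 (therapist ratings in rows, motion-analysis grades in columns).
      table = [[6, 3, 0],
               [2, 8, 4],
               [0, 2, 5]]
      agreement, kappa = percent_agreement_and_kappa(table)
      print(f"percentage agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")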

  20. Accuracy of Dolphin visual treatment objective (VTO) prediction software on class III patients treated with maxillary advancement and mandibular setback

    Directory of Open Access Journals (Sweden)

    Robert J. Peterman

    2016-06-01

    Full Text Available Abstract Background Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. Methods This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Results Dolphin Imaging's software was determined to be accurate within an error range of +/− 2 mm in the X-axis at most landmarks. The lower lip predictions were most inaccurate. Conclusions Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be used for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.

  1. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    Science.gov (United States)

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  2. The representation of material categories in the brain

    Science.gov (United States)

    Jacobs, Richard H. A. H.; Baumgartner, Elisabeth; Gegenfurtner, Karl R.

    2014-01-01

    Using textures mapped onto virtual nonsense objects, it has recently been shown that early visual cortex plays an important role in processing material properties. Here, we examined brain activation to photographs of materials, consisting of wood, stone, metal and fabric surfaces. These photographs were close-ups in the sense that the materials filled the image. In the first experiment, observers categorized the material in each image (i.e., wood, stone, metal, or fabric), while in an fMRI-scanner. We predicted the assigned material category using the obtained voxel patterns using a linear classifier. Region-of-interest and whole-brain analyses demonstrated material coding in the early visual regions, with lower accuracies for more anterior regions. There was little evidence for material coding in other brain regions. In the second experiment, we used an adaptation paradigm to reveal additional brain areas involved in the perception of material categories. Participants viewed images of wood, stone, metal, and fabric, presented in blocks with images of either different material categories (no adaptation) or images of different samples from the same material category (material adaptation). To measure baseline activation, blocks with the same material sample were presented (baseline adaptation). Material adaptation effects were found mainly in the parahippocampal gyrus, in agreement with fMRI-studies of texture perception. Our findings suggest that the parahippocampal gyrus, early visual cortex, and possibly the supramarginal gyrus are involved in the perception of material categories, but in different ways. The different outcomes from the two studies are likely due to inherent differences between the two paradigms. A third experiment suggested, based on anatomical overlap between activations, that spatial frequency information is important for within-category material discrimination. PMID:24659972
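
    The decoding step described here, predicting the material category from voxel patterns with a linear classifier, is commonly set up along the lines of the sketch below, shown with simulated data and scikit-learn. It is not the authors' pipeline; the numbers of trials and voxels, and the noise model, are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_trials_per_class, n_voxels = 40, 200
      materials = ["wood", "stone", "metal", "fabric"]

      # Simulated ROI patterns: each material gets a weak, distinct mean pattern plus trial noise.
      X = np.vstack([rng.normal(loc=rng.normal(0.0, 0.3, n_voxels), scale=1.0,
                                size=(n_trials_per_class, n_voxels)) for _ in materials])
      y = np.repeat(materials, n_trials_per_class)

      clf = LogisticRegression(max_iter=1000)             # linear classifier on voxel patterns
      scores = cross_val_score(clf, X, y, cv=5)           # 5-fold cross-validated decoding accuracy
      print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / len(materials):.2f})")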

  3. The representation of material categories in the brain

    Directory of Open Access Journals (Sweden)

    Richard Henrikus Augustinus Hubertus Jacobs

    2014-03-01

    Full Text Available Using textures mapped onto virtual nonsense objects, it has recently been shown that early visual cortex plays an important role in processing material properties. Here, we examined brain activation to photographs of materials, consisting of wood, stone, metal and fabric surfaces. These photographs were close-ups in the sense that the materials filled the image. In the first experiment, observers categorized the material in each image (i.e., wood, stone, metal, or fabric), while in an fMRI-scanner. We predicted the assigned material category using the obtained voxel patterns using a linear classifier. Region-of-interest and whole-brain analyses demonstrated material coding in the early visual regions, with lower accuracies for more anterior regions. There was little evidence for material coding in other brain regions. In the second experiment, we used an adaptation paradigm to reveal additional brain areas involved in the perception of material categories. Participants viewed images of wood, stone, metal, and fabric, presented in blocks with images of either different material categories (no adaptation) or images of different samples from the same material category (material adaptation). To measure baseline activation, blocks with the same material sample were presented (baseline adaptation). Material adaptation effects were found mainly in the parahippocampal gyrus, in agreement with fMRI-studies of texture perception. Our findings suggest that the parahippocampal gyrus, early visual cortex, and possibly the supramarginal gyrus are involved in the perception of material categories, but in different ways. The different outcomes from the two studies are likely due to inherent differences between the two paradigms. A third experiment suggested, based on anatomical overlap between activations, that spatial frequency information is important for within-category material discrimination.

  4. Temporal integration of 3D coherent motion cues defining visual objects of unknown orientation is impaired in amnestic mild cognitive impairment and Alzheimer's disease.

    Science.gov (United States)

    Lemos, Raquel; Figueiredo, Patrícia; Santana, Isabel; Simões, Mário R; Castelo-Branco, Miguel

    2012-01-01

    The nature of visual impairments in Alzheimer's disease (AD) and their relation with other cognitive deficits remains highly debated. We asked whether independent visual deficits are present in AD and amnestic forms of mild cognitive impairment (MCI) in the absence of other comorbidities by performing a hierarchical analysis of low-level and high-level visual function in MCI and AD. Since parietal structures are a frequent pathophysiological target in AD and subserve 3D vision driven by motion cues, we hypothesized that the parietal visual dorsal stream function is predominantly affected in these conditions. We used a novel 3D task combining three critical variables to challenge parietal function: 3D motion coherence of objects of unknown orientation, with constrained temporal integration of these cues. Groups of amnestic MCI (n = 20), AD (n = 19), and matched controls (n = 20) were studied. Low-level visual function was assessed using psychophysical contrast sensitivity tests probing the magnocellular, parvocellular, and koniocellular pathways. We probed visual ventral stream function using the Benton Face Recognition task. We have found hierarchical visual impairment in AD, independently of neuropsychological deficits, in particular in the novel parietal 3D task, which was selectively affected in MCI. Integration of local motion cues into 3D objects was specifically and most strongly impaired in AD and MCI, especially when 3D motion was unpredictable, with variable orientation and short-lived in space and time. In sum, specific early dorsal stream visual impairment occurs independently of ventral stream, low-level visual and neuropsychological deficits, in amnestic types of MCI and AD.

  5. Visual agnosia and focal brain injury.

    Science.gov (United States)

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
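
    Voxel-based lesion-symptom mapping (VLSM), mentioned above, is essentially a mass-univariate comparison of behavioural scores between patients with and without damage at each voxel. The toy sketch below illustrates that logic on random data; real analyses add permutation-based correction for multiple comparisons, lesion-volume covariates, and proper image handling, all of which are omitted here.

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(4)
      n_patients, n_voxels = 30, 500
      lesions = rng.random((n_patients, n_voxels)) < 0.2      # binary lesion maps (True = damaged)
      scores = rng.normal(size=n_patients)                    # behavioural scores (e.g., recognition accuracy)
      scores -= 1.5 * lesions[:, 123]                         # damage at voxel 123 lowers the score

      t_map = np.full(n_voxels, np.nan)
      for v in range(n_voxels):
          damaged, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
          if damaged.size >= 3 and spared.size >= 3:          # skip voxels with too few patients per group
              t_map[v] = ttest_ind(spared, damaged, equal_var=False).statistic

      print("voxel most associated with the deficit:", int(np.nanargmax(t_map)))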

  6. Subtraction of unidirectionally encoded images for suppression of heavily isotropic objects (SUSHI) for selective visualization of peripheral nerves

    Energy Technology Data Exchange (ETDEWEB)

    Takahara, Taro; Kwee, Thomas C.; Hendrikse, Jeroen; Niwa, Tetsu; Mali, Willem P.T.M.; Luijten, Peter R. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Van Cauteren, Marc [Philips Healthcare, Asia Pacific, Tokyo (Japan); Koh, Dow-Mu [Royal Marsden Hospital, Department of Radiology, Sutton (United Kingdom)

    2011-02-15

    The aim of this study was to introduce and assess a new magnetic resonance (MR) technique for selective peripheral nerve imaging, called "subtraction of unidirectionally encoded images for suppression of heavily isotropic objects" (SUSHI). Six volunteers underwent diffusion-weighted MR neurography (DW-MRN) of the brachial plexus, and seven volunteers underwent DW-MRN of the sciatic, common peroneal, and tibial nerves at the level of the knee, at 1.5 T. DW-MRN images with SUSHI (DW-MRN-SUSHI) and conventional DW-MRN images (DW-MRN-AP) were displayed using a coronal maximum intensity projection and evaluated by two independent observers regarding signal suppression of lymph nodes, bone marrow, veins, and articular fluids and regarding signal intensity of nerves and ganglia, using five-point grading scales. Scores of DW-MRN-SUSHI were compared to those of DW-MRN-AP using Wilcoxon tests. Suppression of lymph nodes around the brachial plexus and suppression of articular fluids at the level of the knee at DW-MRN-SUSHI was significantly better than that at DW-MRN-AP (P < 0.05). However, overall signal intensity of brachial plexus nerves and ganglia at DW-MRN-SUSHI was significantly lower than that at DW-MRN-AP (P < 0.05). On the other hand, signal intensity of the sciatic, common peroneal, and tibial nerves at the level of the knee at DW-MRN-SUSHI was judged as significantly better than that at DW-MRN-AP (P < 0.05). The SUSHI technique allows more selective visualization of the sciatic, common peroneal, and tibial nerves at the level of the knee but is less useful for brachial plexus imaging because signal intensity of the brachial plexus nerves and ganglia can considerably be decreased. (orig.)
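
    At its core, the SUSHI idea is an image subtraction: structures whose diffusion-weighted signal is similar under two different unidirectional motion-probing gradients (isotropic tissue, fluid, lymph nodes) largely cancel, whereas anisotropic nerves do not. The numpy sketch below is only a schematic of that arithmetic, with invented array names and none of the acquisition, registration, or display steps of the actual technique.

      import numpy as np

      def sushi_subtraction(dwi_dir1, dwi_dir2):
          """Voxelwise subtraction of two unidirectionally encoded DWI volumes (schematic only).
          Isotropic structures give similar signal under both encodings and are suppressed;
          anisotropic structures such as nerves retain signal in the difference image."""
          return dwi_dir1.astype(float) - dwi_dir2.astype(float)

      def mip(volume, axis=1):
          """Maximum intensity projection along one axis, as used for display."""
          return volume.max(axis=axis)

      # Invented, already co-registered volumes standing in for the two acquisitions.
      vol_a = np.random.rand(64, 64, 32)
      vol_b = np.random.rand(64, 64, 32)
      print(mip(np.abs(sushi_subtraction(vol_a, vol_b))).shape)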

  7. Modulating the granularity of category formation by global cortical states

    Directory of Open Access Journals (Sweden)

    Yihwa Kim

    2008-06-01

    Full Text Available The unsupervised categorization of sensory stimuli is typically attributed to feedforward processing in a hierarchy of cortical areas. This purely sensory-driven view of cortical processing, however, ignores any internal modulation, e.g., by top-down attentional signals or neuromodulator release. To isolate the role of internal signaling on category formation, we consider an unbroken continuum of stimuli without intrinsic category boundaries. We show that a competitive network, shaped by recurrent inhibition and endowed with Hebbian and homeostatic synaptic plasticity, can enforce stimulus categorization. The degree of competition is internally controlled by the neuronal gain and the strength of inhibition. Strong competition leads to the formation of many attracting network states, each being evoked by a distinct subset of stimuli and representing a category. Weak competition allows more neurons to be co-active, resulting in fewer but larger categories. We conclude that the granularity of cortical category formation, i.e., the number and size of emerging categories, is not simply determined by the richness of the stimulus environment, but rather by some global internal signal modulating the network dynamics. The model also explains the salient non-additivity of visual object representation observed in the monkey inferotemporal (IT) cortex. Furthermore, it offers an explanation of a previously observed, demand-dependent modulation of IT activity on a stimulus categorization task and of categorization-related cognitive deficits in schizophrenic patients.
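
    A stripped-down way to see how the strength of competition can control how many categories emerge from an unbroken stimulus continuum is soft competitive learning with a gain parameter standing in for the global internal signal. This is only a caricature of the recurrent-inhibition network described above; the number of units, the gain values, and the cluster-counting heuristic are all invented.

      import numpy as np

      def competitive_learning(stimuli, n_units=12, gain=5.0, lr=0.05, epochs=50, seed=0):
          """Soft competitive learning on a 1-D stimulus continuum.
          Higher gain -> sharper competition -> units keep distinct preferred stimuli (finer categories);
          lower gain -> units are pulled toward a common value (fewer, coarser categories)."""
          rng = np.random.default_rng(seed)
          w = rng.uniform(stimuli.min(), stimuli.max(), n_units)   # preferred stimulus of each unit
          for _ in range(epochs):
              for s in rng.permutation(stimuli):
                  act = np.exp(-gain * (w - s) ** 2)
                  act /= act.sum()                                 # normalised, softmax-like competition
                  w += lr * act * (s - w)                          # activity-gated, Hebbian-style update
          return np.sort(w)

      stimuli = np.linspace(0.0, 1.0, 200)                         # continuum without built-in boundaries
      for gain in (0.5, 5.0, 50.0):
          w = competitive_learning(stimuli, gain=gain)
          n_clusters = 1 + int(np.sum(np.diff(w) > 0.05))          # crude count of distinct attractors
          print(f"gain = {gain:>5}: about {n_clusters} distinct preferred values")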

  8. Valuation, Categories and Attributes

    OpenAIRE

    Galperin, Inna; Sorenson, Olav

    2014-01-01

    Existing research on categories has only examined indirectly the value associated with being a member of a category relative to the value of the set of attributes that determine membership in that category. This study uses survey data to analyze consumers' preferences for the "organic" label versus for the attributes underlying that label. We found that consumers generally preferred products with the category label to those with the attributes required for the organic label but without the la...

  9. Developing a visualized patient-centered, flow-based and objective-oriented care path of cardiac catheterization examination.

    Science.gov (United States)

    Kuo, Ming Chuan; Chang, Polun

    2009-01-01

    Visualization is known to provide a user-preferred and more meaningful interface for information systems. To reduce the anxiety and uncertainty of patients, we transformed the sophisticated process of cardiac catheterization into visualized information. Microsoft Visio 2003 and Excel 2003 with the VBA automation tool were used to design a process flow of cardiac catheterization. The results show the technical feasibility of the approach and its potential to help patients understand the nursing process of cardiac catheterization.

  10. Formalizing Restriction Categories

    Directory of Open Access Journals (Sweden)

    James Chapman

    2017-03-01

    Full Text Available Restriction categories are an abstract axiomatic framework by Cockett and Lack for reasoning about (generalizations of) the idea of partiality of functions. In a restriction category, every map defines an endomap on its domain, the corresponding partial identity map. Restriction categories cover a number of examples of different flavors and are sound and complete with respect to the more synthetic and concrete partial map categories. A partial map category is based on a given category (of total maps) and a map in it is a map from a subobject of the domain. In this paper, we report on an Agda formalization of the first chapters of the theory of restriction categories, including the challenging completeness result. We explain the mathematics formalized, comment on the design decisions we made for the formalization, and illustrate them at work.

  11. The Precategorical Nature of Visual Short-Term Memory

    Science.gov (United States)

    Quinlan, Philip T.; Cohen, Dale J.

    2016-01-01

    We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…

  12. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  13. Valuation, categories and attributes.

    Directory of Open Access Journals (Sweden)

    Inna Galperin

    Full Text Available Existing research on categories has only examined indirectly the value associated with being a member of a category relative to the value of the set of attributes that determine membership in that category. This study uses survey data to analyze consumers' preferences for the "organic" label versus for the attributes underlying that label. We found that consumers generally preferred products with the category label to those with the attributes required for the organic label but without the label. We also found that the value accorded to the organic label increased with the number of attributes that an individual associated with the category. Category membership nevertheless still had greater value than even that of the sum of the attributes associated with it.

  14. Valuation, categories and attributes.

    Science.gov (United States)

    Galperin, Inna; Sorenson, Olav

    2014-01-01

    Existing research on categories has only examined indirectly the value associated with being a member of a category relative to the value of the set of attributes that determine membership in that category. This study uses survey data to analyze consumers' preferences for the "organic" label versus for the attributes underlying that label. We found that consumers generally preferred products with the category label to those with the attributes required for the organic label but without the label. We also found that the value accorded to the organic label increased with the number of attributes that an individual associated with the category. Category membership nevertheless still had greater value than even that of the sum of the attributes associated with it.

  15. Does the semantic content of verbal categories influence categorical perception? An ERP study.

    Science.gov (United States)

    Maier, Martin; Glage, Philipp; Hohlfeld, Annette; Abdel Rahman, Rasha

    2014-11-01

    Accumulating evidence suggests that visual perception and, in particular, visual discrimination, can be influenced by verbal category boundaries. One issue that still awaits systematic investigation is the specific influence of semantic contents of verbal categories on categorical perception (CP). We tackled this issue with a learning paradigm in which initially unfamiliar, yet realistic objects were associated with either bare labels lacking explicit semantic content or labels that were accompanied by enriched semantic information about the specific meaning of the label. Two to three days after learning, the EEG was recorded while participants performed a lateralized oddball task. Newly acquired verbal category boundaries modulated low-level aspects of visual perception as early as 100-150 ms after stimulus onset, suggesting a genuine influence of language on perception. Importantly, this effect was not further influenced by enriched semantic category information, suggesting that bare labels and the associated minimal and predominantly perceptual information are sufficient for CP. Distinct effects of semantic knowledge independent of category boundaries were found subsequently, starting at about 200 ms, possibly reflecting selective attention to semantically meaningful visual features. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Category Learning Research in the Interactive Online Environment Second Life

    Science.gov (United States)

    Andrews, Jan; Livingston, Ken; Sturm, Joshua; Bliss, Daniel; Hawthorne, Daniel

    2011-01-01

    The interactive online environment Second Life allows users to create novel three-dimensional stimuli that can be manipulated in a meaningful yet controlled environment. These features suggest Second Life's utility as a powerful tool for investigating how people learn concepts for unfamiliar objects. The first of two studies was designed to establish that cognitive processes elicited in this virtual world are comparable to those tapped in conventional settings by attempting to replicate the established finding that category learning systematically influences perceived similarity. From the perspective of an avatar, participants navigated a course of unfamiliar three-dimensional stimuli and were trained to classify them into two labeled categories based on two visual features. Participants then gave similarity ratings for pairs of stimuli and their responses were compared to those of control participants who did not learn the categories. Results indicated significant compression, whereby objects classified together were judged to be more similar by learning participants than by control participants, thus supporting the validity of using Second Life as a laboratory for studying human cognition. A second study used Second Life to test the novel hypothesis that effects of learning on perceived similarity do not depend on the presence of verbal labels for categories. We presented the same stimuli but participants classified them by selecting between two complex visual patterns designed to be extremely difficult to label. While learning was more challenging in this condition, those who did learn without labels showed a compression effect identical to that found in the first study using verbal labels. Together these studies establish that at least some forms of human learning in Second Life parallel learning in the actual world and thus open the door to future studies that will make greater use of the enriched variety of objects and interactions possible in simulated environments.

  17. D-brane categories

    OpenAIRE

    Lazaroiu, C. I.

    2003-01-01

    This is an exposition of recent progress in the categorical approach to D-brane physics. I discuss the physical underpinnings of the appearance of homotopy categories and triangulated categories of D-branes from a string field theoretic perspective, and with a focus on applications to homological mirror symmetry.

  18. MOOCs Definition & Categories

    OpenAIRE

    Hernández López, Arantxa; Gil Rodríguez, Eva Patrícia; Peña López, Ismael

    2013-01-01

    Infographics about MOOC definitions and categories, by the Learning Technologies Office (the same description is also provided in Spanish and Catalan).

  19. Evaluation of hemifield sector analysis protocol in multifocal visual evoked potential objective perimetry for the diagnosis and early detection of glaucomatous field defects.

    Science.gov (United States)

    Mousa, Mohammad F; Cubbidge, Robert P; Al-Mansouri, Fatima; Bener, Abdulbari

    2014-02-01

    Multifocal visual evoked potential (mfVEP) is a newly introduced method used for objective visual field assessment. Several analysis protocols have been tested to identify early visual field losses in glaucoma patients using the mfVEP technique; some were successful in detecting field defects, which were comparable to the standard automated perimetry (SAP) visual field assessment, and others were not very informative and needed more adjustment and research work. In this study we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for early detection of visual field defects in glaucoma. Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey field analyzer (HFA) 24-2 tests and a single mfVEP test in one session. Analysis of the mfVEP results was done using the new analysis protocol, the hemifield sector analysis (HSA) protocol. Analysis of the HFA was done using the standard grading system. Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (ANOVA test), and the protocol was able to detect early field defects in both glaucoma and glaucoma suspect patients. Using this protocol can provide information about focal visual field differences across the horizontal midline, which can be utilized to differentiate between glaucoma and normal subjects. Sensitivity and specificity of the mfVEP test showed very promising results and correlated with other anatomical changes in glaucoma field loss.
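
    The sensitivity and specificity referred to above follow from standard 2x2 diagnostic-test arithmetic; the snippet below shows that computation on an invented confusion table, not on the study's data.

      def diagnostic_summary(tp, fn, fp, tn):
          """Sensitivity, specificity and overall accuracy from a 2x2 table
          (rows: disease present / absent; columns: test positive / negative)."""
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          accuracy = (tp + tn) / (tp + fn + fp + tn)
          return sensitivity, specificity, accuracy

      # Hypothetical counts: 36 glaucomatous eyes and 38 normal eyes classified by an mfVEP criterion.
      sens, spec, acc = diagnostic_summary(tp=32, fn=4, fp=3, tn=35)
      print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}, accuracy = {acc:.2f}")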

  20. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories--the "well generated triangulated categories"--and studies their properties. This

  1. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  2. Variation within categories.

    NARCIS (Netherlands)

    Das-Smaal, E.A.; Swart, de J.H.

    1984-01-01

    Two aspects of variation within categories, relating to different models of categorization, were investigated - frequency of dimensional values and typicality differences within values. The influence of range of typicality experienced during learning and of informational value of feedback was also

  3. Analysis of rare categories

    CERN Document Server

    He, Jingrui

    2012-01-01

    This book focuses on rare category analysis where the majority classes have smooth distributions and the minority classes exhibit the compactness property. It focuses on challenging cases where the support regions of the majority and minority classes overlap.

  4. Categories without structures

    OpenAIRE

    Rodin, Andrei

    2009-01-01

    The popular view according to which Category theory provides a support for Mathematical Structuralism is erroneous. Category-theoretic foundations of mathematics require a different philosophy of mathematics. While structural mathematics studies invariant forms (Awodey), categorical mathematics studies covariant transformations which, generally, don't have any invariants. In this paper I develop a non-structuralist interpretation of categorical mathematics and show its consequences for history...

  5. A Study of the Development of Students' Visualizations of Program State during an Elementary Object-Oriented Programming Course

    Science.gov (United States)

    Sajaniemi, Jorma; Kuittinen, Marja; Tikansalo, Taina

    2008-01-01

    Students' understanding of object-oriented (OO) program execution was studied by asking students to draw a picture of a program state at a specific moment. Students were given minimal instructions on what to include in their drawings in order to see what they considered to be central concepts and relationships in program execution. Three drawing…

  6. ACCURACY EVALUATION OF THE OBJECT LOCATION VISUALIZATION FOR GEO-INFORMATION AND DISPLAY SYSTEMS OF MANNED AIRCRAFTS NAVIGATION COMPLEXES

    Directory of Open Access Journals (Sweden)

    M. O. Kostishin

    2014-01-01

    Full Text Available The paper deals with the issue of accuracy estimation for object location display in the geographic information systems and display systems of manned aircraft navigation complexes. Application features of liquid crystal screens with different numbers of vertical and horizontal pixels are considered for displaying geographic information data at different scales. Navigation parameter values are displayed on board the aircraft in two ways: a numeric value is shown directly on the screen of a multi-color indicator, and a silhouette of the object is formed on the screen against a substrate background, which is a graphical representation of the area map in the flight zone. Various scales of digital area map display currently used in the aviation industry are considered. Calculation results are given for the one-pixel scale interval, depending on the specifications of the liquid crystal screen and the zoom level of the map display area on the multifunction digital display. The paper contains experimental results of the accuracy evaluation for the area position display of the aircraft, based on data from the satellite navigation system and the inertial navigation system obtained during a flight program run of the real object. On the basis of these calculations, a family of graphs was created for the position display error of the object reference point using onboard indicators with liquid crystal screens of different resolutions (6"×8", 7.2"×9.6", 9"×12") for two map display scales (0.25 km and 1-2 km). These dependency graphs can be used both to assess the error value of object area position display in existing navigation systems and to calculate the error value when upgrading facilities.
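
    The "one-pixel scale interval" discussed above is simply the ground distance represented by a single pixel, which follows from the ground span shown across the display and the display's horizontal resolution. The snippet below works through that arithmetic; the pixel counts and the interpretation of the two map scales as ground spans are assumptions, not values from the paper.

      def metres_per_pixel(map_span_km, horizontal_pixels):
          """Ground distance represented by one screen pixel, given the total ground span
          shown across the display width and the display's horizontal pixel count."""
          return map_span_km * 1000.0 / horizontal_pixels

      # Assumed horizontal resolutions for the three display formats mentioned above.
      displays = {'6"x8"': 480, '7.2"x9.6"': 600, '9"x12"': 768}
      for name, pixels in displays.items():
          for span_km in (0.25, 2.0):      # assumed kilometres of terrain across the screen
              print(f"{name:>10}, {span_km:4.2f} km span: {metres_per_pixel(span_km, pixels):6.2f} m per pixel")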

  7. Visual long-term memory and change blindness: Different effects of pre- and post-change information on one-shot change detection using meaningless geometric objects.

    Science.gov (United States)

    Nishiyama, Megumi; Kawaguchi, Jun

    2014-11-01

    To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Perceptual advantage for category-relevant perceptual dimensions: The case of shape and motion

    Directory of Open Access Journals (Sweden)

    Jonathan R Folstein

    2014-12-01

    Full Text Available Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These later two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain, shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.

  9. Converging modalities ground abstract categories: the case of politics.

    Science.gov (United States)

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  10. A life-long learning vector quantization approach for interactive learning of multiple categories.

    Science.gov (United States)

    Kirstein, Stephan; Wersing, Heiko; Gross, Horst-Michael; Körner, Edgar

    2012-04-01

    We present a new method capable of learning multiple categories in an interactive and life-long learning fashion to approach the "stability-plasticity dilemma". The problem of incremental learning of multiple categories is still largely unsolved. This is especially true for the domain of cognitive robotics, requiring real-time and interactive learning. To achieve the life-long learning ability for a cognitive system, we propose a new learning vector quantization approach combined with a category-specific feature selection method to allow several metrical "views" on the representation space of each individual vector quantization node. These category-specific features are incrementally collected during the learning process, so that a balance between the correction of wrong representations and the stability of acquired knowledge is achieved. We demonstrate our approach for a difficult visual categorization task, where the learning is applied for several complex-shaped objects rotated in depth. Copyright © 2011 Elsevier Ltd. All rights reserved.
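
    The record above describes the approach only at a high level; the following is a minimal sketch of a standard LVQ1-style update extended with per-prototype (category-specific) feature weights, intended only to illustrate the general family of methods. All names, data, and parameters are hypothetical and not taken from the paper.

        import numpy as np

        def lvq_update(prototypes, labels, feature_weights, x, y, lr=0.05):
            """One LVQ1-style step: attract the winning prototype if its label
            matches the sample's label, otherwise repel it. Distances use the
            winner's own (category-specific) feature weights."""
            dists = [np.sum(w * (x - p) ** 2) for p, w in zip(prototypes, feature_weights)]
            k = int(np.argmin(dists))                   # winning prototype
            sign = 1.0 if labels[k] == y else -1.0      # attract on match, repel on mismatch
            prototypes[k] += sign * lr * (x - prototypes[k])
            return k

        # toy usage: two prototypes in a 3-dimensional feature space
        rng = np.random.default_rng(0)
        prototypes = [rng.normal(size=3), rng.normal(size=3)]
        labels = ["cup", "can"]
        feature_weights = [np.ones(3), np.ones(3)]      # uniform weights to start
        lvq_update(prototypes, labels, feature_weights, x=rng.normal(size=3), y="cup")

    In an incremental, life-long setting one would additionally insert new prototypes for poorly represented samples and adapt the feature weights per category, which is the part the paper addresses; the sketch omits that.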

  11. Models as Relational Categories

    Science.gov (United States)

    Kokkonen, Tommi

    2017-10-01

    Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes regarding learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of the common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, the ways in which the expert-like knowledge develops and how best to support it are in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done.

  12. Personalized visual aesthetics

    Science.gov (United States)

    Vessel, Edward A.; Stahl, Jonathan; Maurer, Natalia; Denker, Alexander; Starr, G. G.

    2014-02-01

    How is visual information linked to aesthetic experience, and what factors determine whether an individual finds a particular visual experience pleasing? We have previously shown that individuals' aesthetic responses are not determined by objective image features but are instead a function of internal, subjective factors that are shaped by a viewer's personal experience. Yet for many classes of stimuli, culturally shared semantic associations give rise to similar aesthetic taste across people. In this paper, we investigated factors that govern whether a set of observers will agree in which images are preferred, or will instead exhibit more "personalized" aesthetic preferences. In a series of experiments, observers were asked to make aesthetic judgments for different categories of visual stimuli that are commonly evaluated in an aesthetic manner (faces, natural landscapes, architecture or artwork). By measuring agreement across observers, this method was able to reveal instances of highly individualistic preferences. We found that observers showed high agreement on their preferences for images of faces and landscapes, but much lower agreement for images of artwork and architecture. In addition, we found higher agreement for heterosexual males making judgments of beautiful female faces than of beautiful male faces. These results suggest that preferences for stimulus categories that carry evolutionary significance (landscapes and faces) come to rely on similar information across individuals, whereas preferences for artifacts of human culture such as architecture and artwork, which have fewer basic-level category distinctions and reduced behavioral relevance, rely on a more personalized set of attributes.
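
    The record does not specify how across-observer agreement was computed; one common stand-in is the mean pairwise correlation of observers' ratings, sketched below on made-up data (function names and numbers are hypothetical):

        import numpy as np

        def mean_pairwise_agreement(ratings):
            """ratings: observers x images array of aesthetic ratings.
            Returns the average Pearson correlation over all observer pairs."""
            n = ratings.shape[0]
            r = np.corrcoef(ratings)                    # observer-by-observer correlation matrix
            return float(np.mean([r[i, j] for i in range(n) for j in range(i + 1, n)]))

        rng = np.random.default_rng(1)
        face_ratings = rng.normal(size=(20, 40))        # 20 hypothetical observers, 40 images
        art_ratings = rng.normal(size=(20, 40))
        print(mean_pairwise_agreement(face_ratings), mean_pairwise_agreement(art_ratings))

    Higher values of such a measure would correspond to the shared taste reported for faces and landscapes, lower values to the more personalized preferences reported for artwork and architecture.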

  13. A 2-categories companion

    OpenAIRE

    Lack, Stephen

    2007-01-01

    This paper is a rather informal guide to some of the basic theory of 2-categories and bicategories, including notions of limit and colimit, 2-dimensional universal algebra, formal category theory, and nerves of bicategories. As is the way of these things, the choice of topics is somewhat personal. No attempt is made at either rigour or completeness. Nor is it completely introductory: you will not find a definition of bicategory; but then nor will you really need one to read it. In keeping wit...

  14. Evidence from auditory and visual event-related potential (ERP) studies of deviance detection (MMN and vMMN) linking predictive coding theories and perceptual object representations.

    Science.gov (United States)

    Winkler, István; Czigler, István

    2012-02-01

    Predictive coding theories posit that the perceptual system is structured as a hierarchically organized set of generative models with increasingly general models at higher levels. The difference between model predictions and the actual input (prediction error) drives model selection and adaptation processes minimizing the prediction error. Event-related brain potentials elicited by sensory deviance are thought to reflect the processing of prediction error at an intermediate level in the hierarchy. We review evidence from auditory and visual studies of deviance detection suggesting that the memory representations inferred from these studies meet the criteria set for perceptual object representations. Based on this evidence we then argue that these perceptual object representations are closely related to the generative models assumed by predictive coding theories. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. What Makes an Object Memorable?

    KAUST Repository

    Dubey, Rachit

    2016-02-19

    Recent studies on image memorability have shed light on what distinguishes the memorability of different images and the intrinsic and extrinsic properties that make those images memorable. However, a clear understanding of the memorability of specific objects inside an image remains elusive. In this paper, we provide the first attempt to answer the question: what exactly is remembered about an image? We augment both the images and object segmentations from the PASCAL-S dataset with ground truth memorability scores and shed light on the various factors and properties that make an object memorable (or forgettable) to humans. We analyze various visual factors that may influence object memorability (e.g. color, visual saliency, and object categories). We also study the correlation between object and image memorability and find that image memorability is greatly affected by the memorability of its most memorable object. Lastly, we explore the effectiveness of deep learning and other computational approaches in predicting object memorability in images. Our efforts offer a deeper understanding of memorability in general thereby opening up avenues for a wide variety of applications. © 2015 IEEE.

  16. Homological algebra in n-abelian categories

    Indian Academy of Sciences (India)

    Deren Luo

    2017-08-16

    He developed the classical abelian category and exact category theory to higher-dimensional n-abelian category and n-exact category theory [13]. He also proved that n-cluster tilting subcategories are n-abelian categories ...

  17. Involuntary top-down control by search-irrelevant features: Visual working memory biases attention in an object-based manner.

    Science.gov (United States)

    Foerster, Rebecca M; Schneider, Werner X

    2018-03-01

    Many everyday tasks involve successive visual-search episodes with changing targets. Converging evidence suggests that these targets are retained in visual working memory (VWM) and bias attention from there. It is unknown whether all or only search-relevant features of a VWM template bias attention during search. Bias signals might be configured exclusively to task-relevant features so that only search-relevant features bias attention. Alternatively, VWM might maintain objects in the form of bound features. Then, all template features will bias attention in an object-based manner, so that biasing effects are ranked by feature relevance. Here, we investigated whether search-irrelevant VWM template features bias attention. Participants had to saccade to a target opposite a distractor. A colored cue depicted the target prior to each search trial. The target was predefined only by its identity, while its color was irrelevant. When target and cue matched not only in identity (search-relevant) but also in color (search-irrelevant), saccades went more often and faster directly to the target than without any color match (Experiment 1). When introducing a cue-distractor color match (Experiment 2), direct target saccades were most likely when target and cue matched in the search-irrelevant color and least likely in case of a cue-distractor color match. When cue and target were never colored the same (Experiment 3), cue-colored distractors still captured the eyes more often than different-colored distractors despite color being search-irrelevant. As participants were informed about the misleading color, the result argues against a strategical and voluntary usage of color. Instead, search-irrelevant features biased attention obligatorily arguing for involuntary top-down control by object-based VWM templates. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Category Label Effects on Chinese Children's Inductive Inferences: Modulation by Perceptual Detail and Category Specificity

    Science.gov (United States)

    Long, Changquan; Lu, Xiaoying; Zhang, Li; Li, Hong; Deak, Gedeon O.

    2012-01-01

    Inductive generalization of novel properties to same-category or similar-looking objects was studied in Chinese preschool children. The effects of category labels on generalizations were investigated by comparing basic-level labels, superordinate-level labels, and a control phrase applied to three kinds of stimulus materials: colored photographs…

  19. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    Directory of Open Access Journals (Sweden)

    Raju P Sapkota

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised 2 or 4 real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented 2 or 4 memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
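
    The two error types defined above can be illustrated with a small classification sketch; the function and the example display are hypothetical and only restate the definitions given in the abstract:

        def classify_response(response, target, displayed_items):
            """Label a recall response relative to the memory display.
            'non-target': a displayed object reported at the wrong (cued) location.
            'non-memory': an object that never appeared in the display."""
            if response == target:
                return "correct"
            if response in displayed_items:
                return "non-target"
            return "non-memory"

        displayed = ["cup", "shoe", "lamp", "key"]      # hypothetical memory display
        print(classify_response("shoe", target="cup", displayed_items=displayed))   # non-target
        print(classify_response("fork", target="cup", displayed_items=displayed))   # non-memory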

  20. Scientific Objectives and Design Study of an Adaptive Optics Visual Echelle Spectrograph and Imager Coronograph (AVES-IMCO) for the NAOS Visitor Focus at the VLT

    Science.gov (United States)

    Pallavicini, Roberto; Zerbi, Filippo; Beuzit, Jean-Luc; Bonanno, Giovanni; Bonifacio, Piercarlo; Comari, Maurizio; Conconi, Paolo; Delabre, Bernard; Franchini, Mariagrazia; di Marcantonio, Paolo; Lagrange, Anne-Marie; Mazzoleni, Ruben; Molaro, Paolo; Pasquini, Luca; Santin, Paolo

    We present the scientific case for an Adaptive Optics Visual Echelle Spectrograph and Imager Coronograph (AVES-IMCO) that we propose as a visitor instrument for the secondary port of NAOS at the VLT. We show that such an instrument would be ideal for intermediate resolution (R=16,000) spectroscopy of faint sky-limited objects down to a magnitude of V=24.0 and will complement very effectively the near-IR imaging capabilities of CONICA. We present examples of science programmes that could be carried out with such an instrument and which cannot be addressed with existing VLT instruments. We also report on the result of a two-year design study of the instrument, with specific reference to its use as parallel instrument of NAOS.

  1. Efficient light scattering through thin semi-transparent objects

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    This paper concerns real-time rendering of thin semi-transparent objects. An object in this category could be a piece of cloth, eg. a curtain. Semi-transparent objects are visualized most correctly using volume rendering techniques. In general such techniques are, however, intractable for real...... and almost opaque. To capture such visual effects in the standard rendering pipeline, Blinn [1982] proposed an efficient local illumination model based on radiative transfer theory. He assumed media of low density, hence, his equations can render media such as clouds, smoke, and dusty surfaces. Our...

  2. [Symptoms and lesion localization in visual agnosia].

    Science.gov (United States)

    Suzuki, Kyoko

    2004-11-01

    There are two cortical visual processing streams, the ventral and dorsal stream. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream could result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, are different from others in that patients could recognize a face as a face and buildings as buildings, but could not identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition was confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Enlarged lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, which is in agreement with the results of neuroimaging studies that revealed activation of the bilateral occipito-temporal cortex during object recognition tasks.

  3. Visual agnosia.

    Science.gov (United States)

    Álvarez, R; Masjuan, J

    2016-03-01

    Visual agnosia is defined as an impairment of object recognition, in the absence of a visual acuity deficit or cognitive dysfunction that would explain this impairment. This condition is caused by lesions in the visual association cortex, sparing primary visual cortex. There are 2 main pathways that process visual information: the ventral stream, tasked with object recognition, and the dorsal stream, in charge of locating objects in space. Visual agnosia can therefore be divided into 2 major groups depending on which of the two streams is damaged. The aim of this article is to conduct a narrative review of the various visual agnosia syndromes, including recent developments in a number of these syndromes. Copyright © 2015 Elsevier España, S.L.U. y Sociedad Española de Medicina Interna (SEMI). All rights reserved.

  4. What explains health in persons with visual impairment?

    Science.gov (United States)

    Leissner, Juliane; Coenen, Michaela; Froehlich, Stephan; Loyola, Danny; Cieza, Alarcos

    2014-05-03

    Visual impairment is associated with important limitations in functioning. The International Classification of Functioning, Disability and Health (ICF) adopted by the World Health Organisation (WHO) relies on a globally accepted framework for classifying problems in functioning and the influence of contextual factors. Its comprehensive perspective, including biological, individual and social aspects of health, enables the ICF to describe the whole health experience of persons with visual impairment. The objectives of this study are (1) to analyze whether the ICF can be used to comprehensively describe the problems in functioning of persons with visual impairment and the environmental factors that influence their lives and (2) to select the ICF categories that best capture self-perceived health of persons with visual impairment. Data from 105 persons with visual impairment were collected, including socio-demographic data, vision-related data, the Extended ICF Checklist and the visual analogue scale of the EuroQoL-5D, to assess self-perceived health. Descriptive statistics and a Group Lasso regression were performed. The main outcome measures were functioning defined as impairments in Body functions and Body structures, limitations in Activities and restrictions in Participation, influencing Environmental factors and self-perceived health. In total, 120 ICF categories covering a broad range of Body functions, Body structures, aspects of Activities and Participation and Environmental factors were identified. Thirteen ICF categories that best capture self-perceived health were selected based on the Group Lasso regression. While Activities-and-Participation categories were selected most frequently, the greatest impact on self-perceived health was found in Body-functions categories. The ICF can be used as a framework to comprehensively describe the problems of persons with visual impairment and the Environmental factors which influence their lives. There are plenty of

  5. Oxytocin can impair memory for social and non-social visual objects: a within-subject investigation of oxytocin's effects on human memory.

    Science.gov (United States)

    Herzmann, Grit; Young, Brent; Bird, Christopher W; Curran, Tim

    2012-04-27

    Oxytocin is important to social behavior and emotion regulation in humans. Oxytocin's role derives in part from its effect on memory performance. More specifically, previous research suggests that oxytocin facilitates recognition of social (e.g., faces), but not of non-social stimuli (e.g., words, visual objects). We conducted the first within-subject study to test this hypothesis in a double-blind, placebo-controlled design. We administered oxytocin (24 IU) and placebo (saline) in two separate sessions and in randomized order to healthy men. To obtain a baseline measure for session-dependent memory effects, which are caused by proactive interference, an additional group of male subjects in each session received placebo unbeknownst to them and the experimenter. After administration, participants studied faces and houses. Exactly one day after each study session, participants were asked to make memory judgments of new and old items. In the first study-test session, participants administered with oxytocin showed reduced recollection of previously studied faces and houses. Oxytocin also interacted with proactive-interference effects. By impeding memory in the first session, it reduced proactive interference in the second. But oxytocin contributed additionally to the memory-reducing effect of proactive interference when administered in the second session. These results demonstrate that oxytocin can have a memory-impairing effect on both social and non-social visual objects. The present study also emphasizes the necessity of including a non-treated, baseline group in within-subject designs when investigating oxytocin's effects on human memory. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. The style of a stranger: Identification expertise generalizes to coarser level categories.

    Science.gov (United States)

    Searston, Rachel A; Tangen, Jason M

    2017-08-01

    Experience identifying visual objects and categories improves generalization within the same class (e.g., discriminating bird species improves transfer to new bird species), but does such perceptual expertise transfer to coarser category judgments? We tested whether fingerprint experts, who spend their days comparing pairs of prints and judging whether they were left by the same finger or two different fingers, can generalize their finger discrimination expertise to people more broadly. That is, can these experts identify prints from Jones's right thumb and prints from Jones's right index finger as instances of the same "Jones" category? Novices and experts were both sensitive to the style of a stranger's prints; despite lower levels of confidence, experts were significantly more sensitive to this style than novices. This expert advantage persisted even when we reduced the number of exemplars provided. Our results demonstrate that perceptual expertise can be flexible to upwards shifts in the level of specificity, suggesting a dynamic memory retrieval process.

  7. Beyond the Categories.

    Science.gov (United States)

    Weeks, Jeffrey

    2015-07-01

    Shushu is a Turkish Cypriot drag performance artist and the article begins with a discussion of a short film about him by a Greek Cypriot playwright, film maker, and gay activist. The film is interesting in its own right as a documentary about a complex personality, but it is also relevant to wider discussion of sexual and gender identity and categorization in a country divided by history, religion, politics, and military occupation. Shushu rejects easy identification as gay or transgender, or anything else. He is his own self. But refusing a recognized and recognizable identity brings problems, and I detected a pervasive mood of melancholy in his portrayal. The article builds from this starting point to explore the problematic nature of identities and categorizations in the contemporary world. The analysis opens with the power of words and language in defining and classifying sexuality. The early sexologists set in motion a whole catalogue of categories which continue to shape sexual thinking, believing that they were providing a scientific basis for a more humane treatment of sexual variations. This logic continues in DSM-5. The historical effect, however, has been more complex. Categorizations have often fixed individuals into a narrow band of definitions and identities that marginalize and pathologize. The emergence of radical sexual-social movements from the late 1960s offered new forms of grassroots knowledge in opposition to the sexological tradition, but at first these movements worked to affirm rather than challenge the significance of identity categories. Increasingly, however, identities have been problematized and challenged for limiting sexual and gender possibilities, leading to the apparently paradoxical situation where sexual identities are seen as both necessary and impossible. There are emotional costs both in affirming a fixed identity and in rejecting one. Shushu is caught in this dilemma, leading to the pervasive sense of loss that shapes the

  8. Words can slow down category learning.

    Science.gov (United States)

    Brojde, Chandra L; Porter, Chelsea; Colunga, Eliana

    2011-08-01

    Words have been shown to influence many cognitive tasks, including category learning. Most demonstrations of these effects have focused on instances in which words facilitate performance. One possibility is that words augment representations, predicting an across-the-board benefit of words during category learning. We propose that words shift attention to dimensions that have been historically predictive in similar contexts. Under this account, there should be cases in which words are detrimental to performance. The results from two experiments show that words impair learning of object categories under some conditions. Experiment 1 shows that words hurt performance when learning to categorize by texture. Experiment 2 shows that words also hurt when learning to categorize by brightness, leading to selectively attending to shape when both shape and hue could be used to correctly categorize stimuli. We suggest that both the positive and negative effects of words have developmental origins in the history of word usage while learning categories.

  9. When complex is easy on the mind: internal repetition of visual information in complex objects is a source of perceptual fluency

    NARCIS (Netherlands)

    Ayça Berfu Ünal; Linda Steg; Roos Pals; Yannick Joye

    2015-01-01

    Across 3 studies, we investigated whether visual complexity deriving from internally repeating visual information over many scale levels is a source of perceptual fluency. Such continuous repetition of visual information is formalized in fractal geometry and is a key property of natural structures.

  10. Black light visualized solar lentigines on the shoulders and upper back are associated with objectively measured UVR exposure and cutaneous malignant melanoma

    DEFF Research Database (Denmark)

    Idorn, Luise Winkel; Datta, Pameli; Heydenreich, Jakob

    2015-01-01

    and graded into 3 categories using black light photographs to show sun damage. Current UVR exposure in healthy controls was assessed by personal electronic UVR dosimeters that measured time-related UVR and by corresponding exposure diaries during a summer season. Sunburn history was assessed by interviews....... Among controls, the number of solar lentigines was positively associated with daily hours spent outdoors between noon and 3 pm on holidays (P = 0.027), days at the beach (P = 0.048) and reported number of life sunburns (P ... lentigines (P = 0.044). There was a positive association between CMM and higher solar lentigines grade; Category III versus Category I (P = 0.002) and Category II versus Category I (P = 0.014). Our findings indicate that solar lentigines in healthy individuals are associated with number of life sunburns...

  11. Fashion Objects

    DEFF Research Database (Denmark)

    Andersen, Bjørn Schiermer

    2009-01-01

    This article attempts to create a framework for understanding modern fashion phenomena on the basis of Durkheim's sociology of religion. It focuses on Durkheim's conception of the relation between the cult and the sacred object, on his notion of 'exteriorisation', and on his theory of the social...... symbol in an attempt to describe the peculiar attraction of the fashion object and its social constitution. However, Durkheim's notions of cult and ritual must undergo profound changes if they are to be used in an analysis of fashion. The article tries to expand the Durkheimian cult, radically enlarging...... it without totally dispersing it; depicting it as held together exclusively by the sheer 'force' of the sacred object. Firstly, the article introduces the themes and problems surrounding Durkheim's conception of the sacred. Next, it briefly sketches an outline of fashion phenomena in Durkheimian categories...

  12. Why some colors appear more memorable than others: A model combining categories and particulars in color working memory.

    Science.gov (United States)

    Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Flombaum, Jonathan I

    2015-08-01

    Categorization with basic color terms is an intuitive and universal aspect of color perception. Yet research on visual working memory capacity has largely assumed that only continuous estimates within color space are relevant to memory. As a result, the influence of color categories on working memory remains unknown. We propose a dual content model of color representation in which color matches to objects that are either present (perception) or absent (memory) integrate category representations along with estimates of specific values on a continuous scale ("particulars"). We develop and test the model through 4 experiments. In a first experiment pair, participants reproduce a color target, both with and without a delay, using a recently influential estimation paradigm. In a second experiment pair, we use standard methods in color perception to identify boundary and focal colors in the stimulus set. The main results are that responses drawn from working memory are significantly biased away from category boundaries and toward category centers. Importantly, the same pattern of results is present without a memory delay. The proposed dual content model parsimoniously explains these results, and it should replace prevailing single content models in studies of visual working memory. More broadly, the model and the results demonstrate how the main consequence of visual working memory maintenance is the amplification of category related biases and stimulus-specific variability that originate in perception. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
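
    As a rough illustration of the dual content idea (not the authors' actual model or parameters), a reported hue can be written as a weighted mixture of the stimulus-specific estimate and the centre of its colour category, with a larger category weight producing a larger bias toward the category centre; the delay-related increase in the weight below is an assumption for illustration only:

        def dual_content_estimate(stimulus_hue, category_center, category_weight):
            """Blend a fine-grained hue estimate with its category center (hue angles in degrees).
            Larger category_weight -> stronger bias toward the category center."""
            return (1 - category_weight) * stimulus_hue + category_weight * category_center

        # hypothetical 'green' category center at 140 degrees, stimulus at 155 degrees
        perception = dual_content_estimate(155.0, 140.0, category_weight=0.1)   # 153.5, small bias
        memory = dual_content_estimate(155.0, 140.0, category_weight=0.3)       # 150.5, larger bias
        print(perception, memory)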

  13. Tailor-made micro-object optical sensor based on mesoporous pellets for visual monitoring and removal of toxic metal ions from aqueous media.

    Science.gov (United States)

    El-Safty, Sherif A; Shenashen, M A; Shahat, A

    2013-07-08

    Methods for the continuous monitoring and removal of ultra-trace levels of toxic inorganic species (e.g., mercury, copper, and cadmium ions) from aqueous media such as drinking water and biological fluids are essential. In this paper, the design and engineering of a simple, pH-dependent, micro-object optical sensor is described based on mesoporous aluminosilica pellets with an adsorbed dressing receptor (a porphyrinic chelating ligand). This tailor-made optical sensor permits ultra-fast (≤ 60 s), specific, pH-dependent visualization and removal of Cu(2+), Cd(2+), and Hg(2+) at sub-picomolar concentrations (∼10(-11) mol dm(-3)) from aqueous media, including drinking water and a suspension of red blood cells. The active acid sites of the pellets consist of heteroatoms arranged around uniformly shaped pores in 3D nanoscale gyroidal mesostructures densely coated with the chelating ligand. The sensor can be used in batch mode, as well as in a flow-through system in which sampling, target ion recognition and removal, and analysis are integrated in a highly automated and efficient manner. Because the pellets exhibit long-term stability, reproducibility, and versatility over a number of analysis/regeneration cycles, they can be expected to be useful for the fabrication of inexpensive sensor devices for naked-eye detection of toxic pollutants. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Constraints on Colour Category Formation

    NARCIS (Netherlands)

    Jraissati, Yasmina; Wakui, Elley; Decock, Lieven; Douven, Igor

    2012-01-01

    This article addresses two questions related to colour categorization, to wit, the question what a colour category is, and the question how we identify colour categories. We reject both the relativist and universalist answers to these questions. Instead, we suggest that colour categories can be

  15. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Paulson, Olaf B.

    2006-01-01

    in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...

  16. A category-specific top-down attentional set can affect the neural responses outside the current focus of attention.

    Science.gov (United States)

    Jiang, Yunpeng; Wu, Xia; Gao, Xiaorong

    2017-10-17

    A top-down set can guide attention to enhance the processing of task-relevant objects. Many studies have found that the top-down set can be tuned to a category level. However, it is unclear whether the category-specific top-down set involving a central search task can exist outside the current area of attentional focus. To directly probe the neural responses inside and outside the current focus of attention, we recorded continuous EEG to measure the contralateral ERP components for central targets and the steady-state visual evoked potential (SSVEP) oscillations associated with a flickering checkerboard placed on the visual periphery. The relationship of color categories between targets and non-targets was manipulated to investigate the effect of category-specific top-down set. Results showed that when the color categories of targets and non-targets in the central search arrays were the same, larger SSVEP oscillations were evoked by target color peripheral checkerboards relative to the non-target color ones outside the current attentional focus. However, when the color categories of targets and non-targets were different, the peripheral checkerboards in two different colors of the same category evoked similar SSVEP oscillations, indicating the effects of category-specific top-down set. These results firstly demonstrate that the category-specific top-down set can affect the neural responses of peripheral distractors. The results could support the idea of a global selection account and challenge the attentional window account in selective attention. Copyright © 2017. Published by Elsevier B.V.
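
    For context, the SSVEP response to a flickering peripheral checkerboard is typically quantified as the amplitude of the EEG spectrum at the flicker (tagging) frequency; the following is a minimal, generic sketch with simulated data, not the authors' analysis pipeline (the 10 Hz flicker and 500 Hz sampling rate are assumptions):

        import numpy as np

        def ssvep_amplitude(eeg, srate, flicker_hz):
            """Amplitude of the EEG spectrum at the flicker (tagging) frequency."""
            spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
            freqs = np.fft.rfftfreq(len(eeg), d=1.0 / srate)
            return spectrum[np.argmin(np.abs(freqs - flicker_hz))]

        srate, flicker = 500, 10.0
        t = np.arange(0, 2.0, 1.0 / srate)
        rng = np.random.default_rng(2)
        eeg = 2.0 * np.sin(2 * np.pi * flicker * t) + rng.normal(size=t.size)   # simulated signal + noise
        print(ssvep_amplitude(eeg, srate, flicker))   # about 1.0 (half the sine amplitude under this scaling)

    Comparing this amplitude for target-coloured versus non-target-coloured peripheral checkerboards is the kind of contrast the study reports.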

  17. Citation analysis of scientific categories

    Directory of Open Access Journals (Sweden)

    Gregory S. Patience

    2017-05-01

    Databases catalogue the corpus of research literature into scientific categories and report classes of bibliometric data such as the number of citations to articles, the number of authors, journals, funding agencies, institutes, references, etc. The number of articles and citations in a category are gauges of productivity and scientific impact but a quantitative basis to compare researchers between categories is limited. Here, we compile a list of bibliometric indicators for 236 science categories and citation rates of the 500 most cited articles of each category. The number of citations per paper varies by several orders of magnitude and is highest in multidisciplinary sciences, general internal medicine, and biochemistry and lowest in literature, poetry, and dance. A regression model demonstrates that citation rates to the top articles in each category increase with the square root of the number of articles in a category and decrease proportionately with the age of the references: articles in categories that cite recent research are also cited more frequently. The citation rate correlates positively with the number of funding agencies that finance the research. The category h-index correlates with the average number of cites to the top 500 ranked articles of each category (R2=0.997). Furthermore, only a few journals publish the top 500 cited articles in each category: four journals publish 60% (σ=±20%) of these and ten publish 81% (σ=±15%).
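
    The verbal description of the regression can be written schematically as follows (a reading of the sentence above, not the fitted equation or coefficients reported in the paper):

        c_{\text{top}} \;\propto\; \frac{\sqrt{N_{\text{articles}}}}{\text{age of cited references}}
        \quad\Longleftrightarrow\quad
        \log c_{\text{top}} = \beta_0 + \tfrac{1}{2}\log N_{\text{articles}} - \log(\text{reference age}) + \varepsilon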

  18. Citation analysis of scientific categories.

    Science.gov (United States)

    Patience, Gregory S; Patience, Christian A; Blais, Bruno; Bertrand, Francois

    2017-05-01

    Databases catalogue the corpus of research literature into scientific categories and report classes of bibliometric data such as the number of citations to articles, the number of authors, journals, funding agencies, institutes, references, etc. The number of articles and citations in a category are gauges of productivity and scientific impact but a quantitative basis to compare researchers between categories is limited. Here, we compile a list of bibliometric indicators for 236 science categories and citation rates of the 500 most cited articles of each category. The number of citations per paper varies by several orders of magnitude and is highest in multidisciplinary sciences, general internal medicine, and biochemistry and lowest in literature, poetry, and dance. A regression model demonstrates that citation rates to the top articles in each category increase with the square root of the number of articles in a category and decrease proportionately with the age of the references: articles in categories that cite recent research are also cited more frequently. The citation rate correlates positively with the number of funding agencies that finance the research. The category h-index correlates with the average number of cites to the top 500 ranked articles of each category (R2=0.997). Furthermore, only a few journals publish the top 500 cited articles in each category: four journals publish 60% (σ=±20%) of these and ten publish 81% (σ=±15%).

  19. Procedural learning of unstructured categories.

    Science.gov (United States)

    Crossley, Matthew J; Madsen, Nils R; Ashby, F Gregory

    2012-12-01

    Unstructured categories are those in which the stimuli are assigned to each contrasting category randomly, and thus there is no rule- or similarity-based strategy for determining category membership. Intuition suggests that unstructured categories are likely to be learned via explicit memorization that is under the control of declarative memory. In contrast to this prediction, neuroimaging studies of unstructured-category learning have reported task-related activation in the striatum, but typically not in the hippocampus--results that seem more consistent with procedural learning than with a declarative-memory strategy. This article reports the first known behavioral test of whether unstructured-category learning is mediated by explicit strategies or by procedural learning. Our results suggest that the feedback-based learning of unstructured categories is mediated by procedural memory.

  20. How Do Observer's Responses Affect Visual Long-Term Memory?

    Science.gov (United States)

    Makovski, Tal; Jiang, Yuhong V.; Swallow, Khena M.

    2013-01-01

    How does responding to an object affect explicit memory for visual information? The close theoretical relationship between action and perception suggests that items that require a response should be better remembered than items that require no response. However, conclusive evidence for this claim is lacking, as semantic coherence, category size,…

  1. Category label effects on Chinese children's inductive inferences: Modulation by perceptual detail and category specificity

    OpenAIRE

    Long, C.; Lu, X; Zhang, L.; Li, H.; Deák, GO

    2012-01-01

    Inductive generalization of novel properties to same-category or similar-looking objects was studied in Chinese preschool children. The effects of category labels on generalizations were investigated by comparing basic-level labels, superordinate-level labels, and a control phrase applied to three kinds of stimulus materials: colored photographs (Experiment 1), realistic line drawings (Experiment 2), and cartoon-like line drawings (Experiment 3). No significant labeling effects were found for...

  2. Graph comprehension in science and mathematics education: Objects and categories

    DEFF Research Database (Denmark)

    Voetmann Christiansen, Frederik; May, Michael

    The first part of the paper presents a taxonomy of representational forms inspired by Peircian semiotics and Duval's description of learning in mathematics. The typology highlights the possibility that students may conceive of e.g. a graph as another type of representational form. Specifically, the typological mistake of considering graphs as images is discussed in relation to the literature, and two examples from engineering education are given. The educational implications for science and engineering are discussed, with emphasis on the need for students to work explicitly with conversions between different types of registers. In the second part of the paper, we consider how diagrams in science are often composites of iconic and indexical elements, and how this fact may lead to confusion for students. In the discussion, the utility of the Peircian semiotic framework for educational studies ...

  3. Attentional accounting: Voluntary spatial attention increases budget category prioritization.

    Science.gov (United States)

    Mrkva, Kellen; Van Boven, Leaf

    2017-09-01

    Too often, people fail to prioritize the most important activities, life domains, and budget categories. One reason for misplaced priorities, we argue, is that activities and categories people have frequently or recently attended to seem higher priority than other activities and categories. In Experiment 1, participants were cued to direct voluntary spatial attention toward 1 side of a screen while images depicting different budget categories were presented: 1 category on the cued side and 1 on the noncued side of the screen. Participants rated cued budget categories as higher priority than noncued budget categories. Cued attention also increased perceived distinctiveness, and a mediation model was consistent with the hypothesis that distinctiveness mediates the effect of cued attention on prioritization. Experiment 2 orthogonally manipulated 2 components of a spatial cuing manipulation (heightened visual attention and heightened mental attention) to examine how each influences prioritization. Visual attention and mental attention additively increased prioritization. In Experiment 3, attention increased prioritization even when prioritization decisions were incentivized, and even when heightened attention was isolated from primacy and recency. Across experiments, cued categories were prioritized more than noncued categories even though measures were taken to disguise the purpose of the experiments and manipulate attention incidentally (i.e., as a by-product of an unrelated task). (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. The color "fruit": object memories defined by color.

    Science.gov (United States)

    Lewis, David E; Pearson, Joel; Khuu, Sieu K

    2013-01-01

    Most fruits and other highly color-diagnostic objects have color as a central aspect of their identity, which can facilitate detection and visual recognition. It has been theorized that there may be a large amount of overlap between the neural representations of these objects and processing involved in color perception. In accordance with this theory we sought to determine if the recognition of highly color diagnostic fruit objects could be facilitated by the visual presentation of their known color associates. In two experiments we show that color associate priming is possible, but contingent upon multiple factors. Color priming was found to be maximally effective for the most highly color diagnostic fruits, when low spatial-frequency information was present in the image, and when determination of the object's specific identity, not merely its category, was required. These data illustrate the importance of color for determining the identity of certain objects, and support the theory that object knowledge involves sensory specific systems.

  5. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    Science.gov (United States)

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Functional categories in comparative linguistics

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    Functional categories in comparative linguistics Even after many decades of typological research, the biggest methodological problem still concerns the fundamental question: how can we be sure that we identify and compare the same linguistic form, structure, meaning etc. across languages? Very few...... linguistic categories, if any, appear to be ‘universal’ in the sense that they are attested in each and every language (Evans and Levinson 2009). The language-specific nature of form-based (structural, morphosyntactic) categories is well known, which is why typologists usually resort to ‘Greenbergian......’, meaning-based categories. The use of meaning-based or semantic categories, however, does not necessarily result in the identification of cross-linguistically comparable data either, as was already shown by Greenberg (1966: 88) himself. Whereas formal categories are too narrow in that they do not cover all...

  7. How do Category Managers Manage?

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft; Sigurbjornsson, Tomas

    2013-01-01

    The aim of this research is to explore the managerial role of category managers in purchasing. A network management perspective is adopted. A case based research methodology is applied, and three category managers managing a diverse set of component and service categories in a global production firm are observed while providing accounts of their progress and results in meetings. We conclude that the network management classification scheme originally developed by Harland and Knight (2001) and Knight and Harland (2005) is a valuable and fertile theoretical framework for the analysis of the role of the category manager in purchasing....

  8. Two Categories of Dirac Manifolds

    OpenAIRE

    Milburn, Brett

    2007-01-01

    We define two categories of Dirac manifolds, i.e. manifolds with complex Dirac structures. The first notion of maps I call Dirac maps, and the category of Dirac manifolds is seen to contain the categories of Poisson and complex manifolds as full subcategories. The second notion, dual-Dirac maps, defines a dual-Dirac category which contains presymplectic and complex manifolds as full subcategories. The dual-Dirac maps are stable under B-transformations. In particular we ge...

  9. The role of independent motion in object segmentation in the ventral visual stream: Learning to recognise the separate parts of the body.

    Science.gov (United States)

    Higgins, I V; Stringer, S M

    2011-03-25

    This paper investigates how the visual areas of the brain may learn to segment the bodies of humans and other animals into separate parts. A neural network model of the ventral visual pathway, VisNet, was used to study this problem. In particular, the current work investigates whether independent motion of body parts can be sufficient to enable the visual system to learn separate representations of them even when the body parts are never seen in isolation. The network was shown to be able to separate out the independently moving body parts because the independent motion created statistical decoupling between them. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Influence of emotionally charged information on category-based induction.

    Directory of Open Access Journals (Sweden)

    Jennifer Zhu

    Categories help us make predictions, or inductions, about new objects. However, we cannot always be certain that a novel object belongs to the category we are using to make predictions. In such cases, people should use multiple categories to make inductions. Past research finds that people often use only the most likely category to make inductions, even if it is not certain. In two experiments, subjects read stories and answered questions about items whose categorization was uncertain. In Experiment 1, the less likely category was either emotionally neutral or dangerous (emotionally charged or likely to pose a threat). Subjects used multiple categories in induction when one of the categories was dangerous but not when they were all neutral. In Experiment 2, the most likely category was dangerous. Here, people used multiple categories, but there was also an effect of avoidance, in which people denied that dangerous categories were the most likely. The attention-grabbing power of dangerous categories may be balanced by a higher-level strategy to reject them.

  11. Face perception is category-specific: evidence from normal body perception in acquired prosopagnosia.

    Science.gov (United States)

    Susilo, Tirta; Yovel, Galit; Barton, Jason J S; Duchaine, Bradley

    2013-10-01

    Does the human visual system contain perceptual mechanisms specialized for particular object categories such as faces? This question lies at the heart of a long-running debate in face perception. The face-specific hypothesis posits that face perception relies on mechanisms dedicated to faces, while the expertise hypothesis proposes that faces are processed by more generic mechanisms that operate on objects we have extended experience with. Previous studies that have addressed this question using acquired prosopagnosia are inconclusive because the non-face categories tested (e.g., cars) were not well-matched to faces in terms of visual exposure and perceptual experience. Here we compare perception of faces and bodies in four acquired prosopagnosics. Critically, we used face and body tasks that generate comparable inversion effects in controls, which indicates that our tasks engage orientation-specific perceptual mechanisms for faces and bodies to a similar extent. Three prosopagnosics were able to discriminate bodies normally despite their impairment in face perception. Moreover, they exhibited normal inversion effects for bodies, suggesting their body perception was carried out by the same mechanisms used by controls. Our findings indicate that the human visual system contains processes specialized for faces. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Perceptual and category processing of the Uncanny Valley hypothesis' dimension of human likeness: some methodological issues.

    Science.gov (United States)

    Cheetham, Marcus; Jancke, Lutz

    2013-06-03

    Mori's Uncanny Valley Hypothesis(1,2) proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings (3, 4, 5, 6). One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) (7). Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
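
    For reference, a morph continuum of the kind used to represent the DHL is typically generated by linear interpolation between two endpoint stimuli (schematic form only; the exact morphing software and step count used in the protocol are not given in this record):

        S(t) = (1 - t)\,S_{\text{artificial}} + t\,S_{\text{human}}, \qquad t \in \left\{0, \tfrac{1}{n-1}, \dots, 1\right\}

    Categorical perception then shows up as sharper discrimination for pairs of morphs that straddle the category boundary than for equally spaced pairs within a category.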

  13. How categories come to matter

    DEFF Research Database (Denmark)

    Cohn, Marisa

    2013-01-01

    In a study of users' interactions with Siri, the iPhone personal assistant application, we noticed the emergence of overlaps and blurrings between explanatory categories such as "human" and "machine". We found that users work to purify these categories, thus resolving the tensions related...

  14. Categories of theories and interpretations

    NARCIS (Netherlands)

    Visser, A.

    In this paper we study categories of theories and interpretations. In these categories, notions of sameness of theories, like synonymy, bi-interpretability and mutual interpretability, take the form of isomorphism. We study the usual notions like monomorphism and product in the various theories.

  15. Affective and contextual values modulate spatial frequency use in object recognition

    Directory of Open Access Journals (Sweden)

    Laurent eCaplette

    2014-05-01

    Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (SF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorised as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3 to 4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system.
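
    The two spatial-frequency units quoted above are related by the object's angular size; the numbers in the abstract imply stimuli subtending roughly six degrees of visual angle (the 6° figure is inferred here, not stated in the record):

        \text{cycles per degree} = \frac{\text{cycles per object}}{\text{object size in degrees}}, \qquad \frac{14}{6^\circ} \approx 2.3\ \text{cpd}, \quad \frac{24}{6^\circ} = 4\ \text{cpd}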

  16. Bilateral impairment of somesthetically mediated object recognition in humans.

    Science.gov (United States)

    Caselli, R J

    1991-04-01

    Thirty adult patients (six in each of five groups--neurologically normal, lacunar infarct-related hemiparesis, unilateral thalamic lacunar infarction, right cortical infarction with mild left hemineglect, and extensive right cortical infarction with severe left hemineglect) were asked to perform various tasks that encompassed basic and intermediate somatosensory functions and tactile and visual object recognition. Patients with thalamic and cortical infarctions had severe impairment of contralateral hand-mediated somatosensory functions in all three categories of somesthetic tasks, although patients with cortical infarction were more impaired on the object recognition task than were patients with thalamic infarction. Patients with extensive damage to the right hemisphere and severe left hemineglect also had impairment of somesthetically mediated object recognition in the ipsilateral hand despite normal basic and intermediate somatosensory function and visually mediated object recognition, a pattern analogous to unilateral tactile agnosia. All other groups had normal ipsilateral tactile object recognition.

  17. Categories for the working mathematician

    CERN Document Server

    MacLane, Saunders

    1971-01-01

    Category Theory has developed rapidly. This book aims to present those ideas and methods which can now be effectively used by Mathematicians working in a variety of other fields of Mathematical research. This occurs at several levels. On the first level, categories provide a convenient conceptual language, based on the notions of category, functor, natural transformation, contravariance, and functor category. These notions are presented, with appropriate examples, in Chapters I and II. Next comes the fundamental idea of an adjoint pair of functors. This appears in many substantially equivalent forms: that of universal construction, that of direct and inverse limit, and that of pairs of functors with a natural isomorphism between corresponding sets of arrows. All these forms, with their interrelations, are examined in Chapters III to V. The slogan is "Adjoint functors arise everywhere". Alternatively, the fundamental notion of category theory is that of a monoid -- a set with a binary operation of multiplicati...
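    The "natural isomorphism between corresponding sets of arrows" mentioned above is the standard hom-set formulation of an adjunction; stated generically for illustration, in LaTeX:

      % For functors F : C -> D and G : D -> C, F is left adjoint to G when,
      % naturally in X and Y,
      \[
        \mathrm{Hom}_{\mathcal{D}}(F X,\, Y) \;\cong\; \mathrm{Hom}_{\mathcal{C}}(X,\, G Y).
      \]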

  18. An introduction to the language of category theory

    CERN Document Server

    Roman, Steven

    2017-01-01

    This textbook provides an introduction to elementary category theory, with the aim of making what can be a confusing and sometimes overwhelming subject more accessible. In writing about this challenging subject, the author has brought to bear all of the experience he has gained in authoring over 30 books in university-level mathematics. The goal of this book is to present the five major ideas of category theory: categories, functors, natural transformations, universality, and adjoints in as friendly and relaxed a manner as possible while at the same time not sacrificing rigor. These topics are developed in a straightforward, step-by-step manner and are accompanied by numerous examples and exercises, most of which are drawn from abstract algebra. The first chapter of the book introduces the definitions of category and functor and discusses diagrams, duality, initial and terminal objects, special types of morphisms, and some special types of categories, particularly comma categories and hom-set categories. Chap...

  19. Grammatical Gender and Mental Representation of Object: The Case of Musical Instruments

    Science.gov (United States)

    Vuksanovic, Jasmina; Bjekic, Jovana; Radivojevic, Natalija

    2015-01-01

    A body of research shows that grammatical gender, although an arbitrary category, is viewed as the system with its own meaning. However, the question remains to what extent does grammatical gender influence shaping our notions about objects when both verbal and visual information are available. Two experiments were conducted. The results obtained…

  20. Searching for something familiar or novel: ERP correlates of top-down attentional selection for specific items or categories

    Science.gov (United States)

    Wu, Rachel; Scerif, Gaia; Aslin, Richard N.; Smith, Tim J.; Nako, Rebecca; Eimer, Martin

    2013-01-01

    Visual search is often guided by top-down attentional templates that specify target-defining features. But search can also occur at the level of object categories. We measured the N2pc component, a marker of attentional target selection, in two visual search experiments where targets were defined either categorically (e.g., any letter), or at the item level (e.g., the letter C) by a prime stimulus. In both experiments, an N2pc was elicited during category search, in both familiar and novel contexts (Experiment 1) and with symbolic primes (Experiment 2), indicating that even when targets are only defined at the category level, they are selected at early sensory-perceptual stages. However, the N2pc emerged earlier and was larger during item-based search compared to category-based search, demonstrating the superiority of attentional guidance by item-specific templates. We discuss the implications of these findings for attentional control and category learning. PMID:23281777
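    As background, the N2pc is conventionally computed as the contralateral-minus-ipsilateral difference at lateral posterior electrodes in a post-stimulus window; a minimal sketch follows, where the electrode names PO7/PO8, the 200-300 ms window, and the data layout are assumptions rather than the study's exact pipeline:

      import numpy as np

      def n2pc_amplitude(erp_left_targets, erp_right_targets, times,
                         window=(0.200, 0.300)):
          # erp_*_targets: dicts of averaged waveforms (1-D arrays, volts),
          # keyed by electrode, for targets in the left or right hemifield.
          contra = (erp_left_targets['PO8'] + erp_right_targets['PO7']) / 2.0
          ipsi = (erp_left_targets['PO7'] + erp_right_targets['PO8']) / 2.0
          mask = (times >= window[0]) & (times <= window[1])
          return np.mean((contra - ipsi)[mask])   # negative values index the N2pc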

  1. Neural correlates of body and face perception following bilateral destruction of the primary visual cortices.

    Science.gov (United States)

    Van den Stock, Jan; Tamietto, Marco; Zhan, Minye; Heinecke, Armin; Hervais-Adelman, Alexis; Legrand, Lore B; Pegna, Alan J; de Gelder, Beatrice

    2014-01-01

    Non-conscious visual processing of different object categories was investigated in a rare patient with bilateral destruction of the visual cortex (V1) and clinical blindness over the entire visual field. Images of biological and non-biological object categories were presented, consisting of human bodies, faces, butterflies, cars, and scrambled images. Behaviorally, only the body shape induced higher perceptual sensitivity, as revealed by signal detection analysis. Passive exposure to bodies and faces activated amygdala and superior temporal sulcus. In addition, bodies also activated the extrastriate body area, insula, orbitofrontal cortex (OFC) and cerebellum. The results show that following bilateral damage to the primary visual cortex and ensuing complete cortical blindness, the human visual system is able to process categorical properties of human body shapes. This residual vision may be based on V1-independent input to body-selective areas along the ventral stream, in concert with areas involved in the representation of bodily states, like insula, OFC, and cerebellum.
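    The "higher perceptual sensitivity revealed by signal detection analysis" corresponds to a d' computation; a generic sketch, where the trial counts and the log-linear correction are illustrative assumptions, not the patient's data:

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          # d' = Z(hit rate) - Z(false-alarm rate), with a log-linear
          # correction so rates of exactly 0 or 1 remain finite.
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical body-detection counts in the blind field:
      print(d_prime(hits=34, misses=14, false_alarms=18, correct_rejections=30))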

  2. Grounding grammatical categories: attention bias in hand space influences grammatical congruency judgment of Chinese nominal classifiers.

    Science.gov (United States)

    Lobben, Marit; D'Ascenzo, Stefania

    2015-01-01

    Embodied cognitive theories predict that linguistic conceptual representations are grounded and continually represented in real world, sensorimotor experiences. However, there is an on-going debate on whether this also holds for abstract concepts. Grammar is the archetype of abstract knowledge, and therefore constitutes a test case against embodied theories of language representation. Former studies have largely focussed on lexical-level embodied representations. In the present study we take the grounding-by-modality idea a step further by using reaction time (RT) data from the linguistic processing of nominal classifiers in Chinese. We take advantage of an independent body of research, which shows that attention in hand space is biased. Specifically, objects near the hand consistently yield shorter RTs as a function of readiness for action on graspable objects within reaching space, and the same biased attention inhibits attentional disengagement. We predicted that this attention bias would equally apply to the graspable object classifier but not to the big object classifier. Chinese speakers (N = 22) judged grammatical congruency of classifier-noun combinations in two conditions: graspable object classifier and big object classifier. We found that RTs for the graspable object classifier were significantly faster in congruent combinations, and significantly slower in incongruent combinations, than the big object classifier. There was no main effect on grammatical violations, but rather an interaction effect of classifier type. Thus, we demonstrate here grammatical category-specific effects pertaining to the semantic content of these categories and, by extension, to the visual and tactile modalities through which they were acquired. We conclude that abstract grammatical categories are subject to the same mechanisms as general cognitive and neurophysiological processes and may therefore be grounded.

  3. Grounding grammatical categories: attention bias in hand space influences grammatical congruency judgment of Chinese nominal classifiers

    Directory of Open Access Journals (Sweden)

    Marit Lobben

    2015-08-01

    Embodied cognitive theories predict that linguistic conceptual representations are grounded and continually represented in real world, sensorimotor experiences. However, there is an on-going debate on whether this also holds for abstract concepts. Grammar is the archetype of abstract knowledge, and therefore constitutes a test case against embodied theories of language representation. Former studies have largely focussed on lexical-level embodied representations. In the present study we take the grounding-by-modality idea a step further by using reaction time (RT) data from the linguistic processing of nominal classifiers in Chinese. We take advantage of an independent body of research, which shows that attention in hand space is biased. Specifically, objects near the hand consistently yield shorter reaction times as a function of readiness for action on graspable objects within reaching space, and the same biased attention inhibits attentional disengagement. We predicted that this attention bias would equally apply to the graspable object classifier but not to the big object classifier. Chinese speakers (N = 21) judged grammatical congruency of classifier-noun combinations in two conditions: graspable object classifier and big object classifier. We found that RTs for the graspable object classifier were significantly faster in congruent combinations, and significantly slower in incongruent combinations, than the big object classifier. There was no main effect on grammatical violations, but rather an interaction effect of classifier type. Thus, we demonstrate here grammatical category-specific effects pertaining to the semantic content of these categories and, by extension, to the visual and tactile modalities through which they were acquired. We conclude that abstract grammatical categories are subject to the same mechanisms as general cognitive and neurophysiological processes and may therefore be grounded.

  4. Individual differences shape the content of visual representations.

    Science.gov (United States)

    Reeder, Reshanne R

    2017-12-01

    Visually perceiving a stimulus activates a pictorial representation of that item in the brain, but how pictorial is the representation of a stimulus in the absence of visual stimulation? Here I address this question with a review of the literatures on visual imagery (VI), visual working memory (VWM), and visual preparatory templates, all of which require activating visual information in the absence of sensory stimulation. These processes have historically been studied separately, but I propose that they can provide complementary evidence for the pictorial nature of their contents. One major challenge in studying the contents of visual representations is the discrepancy in findings concerning the extent of overlap (both cortical and behavioral) between externally and internally sourced visual representations. I argue that these discrepancies may in large part be due to individual differences in VI vividness and precision, the specific representative abilities required to perform a task, appropriateness of visual preparatory strategies, visual cortex anatomy, and level of expertise with a particular object category. Individual differences in visual representative abilities greatly impact task performance and may influence the likelihood of experiences such as intrusive VI and hallucinations, but research still predominantly focuses on uniformities in visual experience across individuals. In this paper I review the evidence for the pictorial content of visual representations activated for VI, VWM, and preparatory templates, and highlight the importance of accounting for various individual differences in conducting research on this topic. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B.

    2006-01-01

    We examined the neural correlates of visual shape configuration, the binding of local shape characteristics into wholistic object descriptions, by comparing the regional cerebral blood flow associated with recognition of outline drawings and fragmented drawings. We found no areas that responded m...

  6. Learning of role-governed and thematic categories.

    Science.gov (United States)

    Goldwater, Micah B; Bainbridge, Rebecca; Murphy, Gregory L

    2016-02-01

    Natural categories are often based on intrinsic characteristics, such as shared features, but they can also be based on extrinsic relationships to items outside the categories. Examples of relational categories include items that share a thematic relation or items that share a common role. Five experiments used an artificial category learning paradigm to investigate whether people can learn role-governed and thematic categories without explicit instruction or linguistic support. Participants viewed film clips in which objects were engaged in similar actions and then were asked to group together objects that they believed were in the same category. Experiments 1 and 2 demonstrated that while people spontaneously grouped items using both role-governed and thematic relations, when forced to choose between the two, most preferred role-governed categories. In Experiment 3, category labels increased this preference. Experiment 4 found that people failed to group items based on more abstract role relations when the specific relations differed (e.g., objects that prevented different actions). However, Experiment 5 showed that people could identify them with the aid of comparison. We concluded that people can form role-governed categories even with minimal perceptual and linguistic cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. A Query Driven Computer Vision System: A Paradigm for Hierarchical Control Strategies during the Recognition Process of Three Dimensional Visually Perceived Objects.

    Science.gov (United States)

    1983-04-01

    modelling is top-down, reflecting the user’s interest in the scene. The Scene Model represents both the objects in the image and primitive spatial relations between these objects. Keywords: Computer vision, Computer architecture. (Author)

  8. When Seeing Depends on Knowing: Adults with Autism Spectrum Conditions Show Diminished Top-Down Processes in the Visual Perception of Degraded Faces but Not Degraded Objects

    Science.gov (United States)

    Loth, Eva; Gomez, Juan Carlos; Happe, Francesca

    2010-01-01

    Behavioural, neuroimaging and neurophysiological approaches emphasise the active and constructive nature of visual perception, determined not solely by the environmental input, but modulated top-down by prior knowledge. For example, degraded images, which at first appear as meaningless "blobs", can easily be recognized as, say, a face, after…

  9. Relics in medieval altarpieces? Combining X-ray tomographic, laminographic and phase-contrast imaging to visualize thin organic objects in paintings

    NARCIS (Netherlands)

    Krug, K.; Porra, L.; Coan, P.; Wallert, A.; Dik, J.; Coerdt, A.; Bravin, A.; Elyyan, M.; Reischig, P.; Helfen, L.; Baumbach, T.

    2007-01-01

    X-ray radiography is a common tool in the study of old master paintings. Transmission imaging can visualize hidden paint layers as well as the structure of the panel or canvas. In some medieval altarpieces, relics seem to have been imbedded in the wooden carrier of paintings. These are most probably

  10. Photometric activity of UX Ori stars and related objects in the near infrared and visual. BF Ori, CQ Tau, WW Vul, and SV Cep

    Science.gov (United States)

    Shenavrin, V. I.; Grinin, V. P.; Rostopchina-Shakhovskaja, A. N.; Demidova, T. V.; Shakhovskoi, D. N.

    2012-05-01

    We have analyzed the activity of four UX Ori stars in the near-IR ( JHKL) and visual ( V) using the results of long-term photometric observations. For comparison, we also obtained IR ( JHKLM) photometric observations of two visually quiet young stars of close spectral types (AB Aur and HD 190073). For the photometrically most active UX Ori stars BF Ori, CQ Tau, and WW Vul, the Algol-like declines of brightness in the visual, which are due to sporadic enhancements of the circumstellar extinction, are also observed (with decreasing amplitude) in the IR bands. A strict correlation between the V and J brightness variations is observed for all the stars except for SV Cep. For some of the UX Ori stars, a strong correlation between the visual and IR activity is observed up to L, where the main contribution to the emission is made by circumstellar dust. In the case of SV Cep, the visual variability is not correlated with the variability of the IR fluxes. On one occasion, a clear anti-correlation was even observed: a shallow, but prolonged decrease of the visual brightness was accompanied by an increase in the IR fluxes. This indicates that circumstellar clouds themselves can become powerful sources of IR emission. Our results provide evidence that the photometric activity of UX Ori stars is a consequence of instability of the deepest layers of their gas-dust accretion disks. In some cases (SV Cep), fluctuations of the density in this region are global, in the sense that they occur along a significant part of the circle marking the inner boundary of the dust disk. It is interesting that AB Aur, which is the quietest in the visual, appeared to be the most active in the IR. In contrast to UX Ori stars, the amplitude of its brightness variations increases from the J to the M band. It follows from analysis of the IR colors of this star that their variability cannot be described by models in which the variable IR emission has a temperature close to the sublimation temperature of

  11. From groups to categorial algebra introduction to protomodular and mal’tsev categories

    CERN Document Server

    Bourn, Dominique

    2017-01-01

    This book gives a thorough and entirely self-contained, in-depth introduction to a specific approach to group theory, in a large sense of that word. The focus lies on the relationships which a group may have with other groups, via “universal properties”, a view on that group “from the outside”. This method of categorical algebra is actually not limited to the study of groups alone, but applies equally well to other similar categories of algebraic objects. By introducing protomodular categories and Mal’tsev categories, which form a larger class, the book shows how the structural properties of the category Gp of groups emerge from four very basic observations about the algebraic literal calculus and how, studied for themselves at the conceptual categorical level, they lead to the main striking features of the category Gp of groups. Hardly any previous knowledge of category theory is assumed, and just a little experience with standard algebraic structures such as groups and monoids. Examples and exercises...

  12. Constraint-Based Categorial Grammar

    CERN Document Server

    Bouma, G; Bouma, Gosse; Noord, Gertjan van

    1994-01-01

    We propose a generalization of Categorial Grammar in which lexical categories are defined by means of recursive constraints. In particular, the introduction of relational constraints allows one to capture the effects of (recursive) lexical rules in a computationally attractive manner. We illustrate the linguistic merits of the new approach by showing how it accounts for the syntax of Dutch cross-serial dependencies and the position and scope of adjuncts in such constructions. Delayed evaluation is used to process grammars containing recursive constraints.
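    For orientation, classical (non-constraint-based) categorial grammar reduces adjacent categories by forward and backward application; the sketch below shows only that baseline machinery, not the recursive-constraint extension the paper proposes, and the lexicon and greedy reduction strategy are assumptions for illustration:

      # Categories: atoms ('np', 's') or functor tuples ('/', result, argument)
      # for forward-looking and ('\\', result, argument) for backward-looking.
      NP, S = 'np', 's'
      IV = ('\\', S, NP)        # s\np        (intransitive verb)
      TV = ('/', IV, NP)        # (s\np)/np   (transitive verb)

      def combine(left, right):
          # Forward application:  X/Y  Y   =>  X
          if isinstance(left, tuple) and left[0] == '/' and left[2] == right:
              return left[1]
          # Backward application: Y  X\Y  =>  X
          if isinstance(right, tuple) and right[0] == '\\' and right[2] == left:
              return right[1]
          return None

      lexicon = {'mary': NP, 'john': NP, 'sleeps': IV, 'sees': TV}

      def parse(words):
          # Greedy right-to-left reduction; enough for simple SV/SVO strings.
          cats = [lexicon[w] for w in words]
          while len(cats) > 1:
              for i in range(len(cats) - 1, 0, -1):
                  reduced = combine(cats[i - 1], cats[i])
                  if reduced is not None:
                      cats[i - 1:i + 1] = [reduced]
                      break
              else:
                  return None
          return cats[0]

      print(parse(['mary', 'sleeps']))        # -> 's'
      print(parse(['john', 'sees', 'mary']))  # -> 's'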

  13. Data categories for marine planning

    Science.gov (United States)

    Lightsom, Frances L.; Cicchetti, Giancarlo; Wahle, Charles M.

    2015-01-01

    The U.S. National Ocean Policy calls for a science- and ecosystem-based approach to comprehensive planning and management of human activities and their impacts on America’s oceans. The Ocean Community in Data.gov is an outcome of 2010–2011 work by an interagency working group charged with designing a national information management system to support ocean planning. Within the working group, a smaller team developed a list of the data categories specifically relevant to marine planning. This set of categories is an important consensus statement of the breadth of information types required for ocean planning from a national, multidisciplinary perspective. Although the categories were described in a working document in 2011, they have not yet been fully implemented explicitly in online services or geospatial metadata, in part because authoritative definitions were not created formally. This document describes the purpose of the data categories, provides definitions, and identifies relations among the categories and between the categories and external standards. It is intended to be used by ocean data providers, managers, and users in order to provide a transparent and consistent framework for organizing and describing complex information about marine ecosystems and their connections to humans.

  14. Conceptual grounding of language in action and perception: a neurocomputational model of the emergence of category specificity and semantic hubs.

    Science.gov (United States)

    Garagnani, Max; Pulvermüller, Friedemann

    2016-03-01

    Current neurobiological accounts of language and cognition offer diverging views on the questions of 'where' and 'how' semantic information is stored and processed in the human brain. Neuroimaging data showing consistent activation of different multi-modal areas during word and sentence comprehension suggest that all meanings are processed indistinctively, by a set of general semantic centres or 'hubs'. However, words belonging to specific semantic categories selectively activate modality-preferential areas; for example, action-related words spark activity in dorsal motor cortex, whereas object-related ones activate ventral visual areas. The evidence for category-specific and category-general semantic areas begs for a unifying explanation, able to integrate the emergence of both. Here, a neurobiological model offering such an explanation is described. Using a neural architecture replicating anatomical and neurophysiological features of frontal, occipital and temporal cortices, basic aspects of word learning and semantic grounding in action and perception were simulated. As the network underwent training, distributed lexico-semantic circuits spontaneously emerged. These circuits exhibited different cortical distributions that reached into dorsal-motor or ventral-visual areas, reflecting the correlated category-specific sensorimotor patterns that co-occurred during action- or object-related semantic grounding, respectively. Crucially, substantial numbers of neurons of both types of distributed circuits emerged in areas interfacing between modality-preferential regions, i.e. in multimodal connection hubs, which therefore became loci of general semantic binding. By relating neuroanatomical structure and cellular-level learning mechanisms with system-level cognitive function, this model offers a neurobiological account of category-general and category-specific semantic areas based on the different cortical distributions of the underlying semantic circuits. © 2015 The
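    A heavily simplified sketch of the mechanism described above: Hebbian co-activation learning lets "hub" units form distinct assemblies for action-grounded versus object-grounded patterns while the hub area itself serves both circuits. Layer sizes, sparseness, and the learning rate are assumptions, and the actual model is a multi-area cortical architecture, not this two-projection toy:

      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_hub, lr = 50, 100, 0.05

      # Modality-preferential inputs projecting to a multimodal "hub" area.
      W_motor = rng.normal(0.0, 0.01, (n_hub, n_in))
      W_visual = rng.normal(0.0, 0.01, (n_hub, n_in))

      def sparse_pattern(n, k):
          p = np.zeros(n)
          p[rng.choice(n, k, replace=False)] = 1.0
          return p

      action_motor = sparse_pattern(n_in, 5)    # grounding of an action word
      object_visual = sparse_pattern(n_in, 5)   # grounding of an object word

      for _ in range(200):
          if rng.random() < 0.5:
              motor, visual = action_motor, np.zeros(n_in)
          else:
              motor, visual = np.zeros(n_in), object_visual
          drive = W_motor @ motor + W_visual @ visual
          hub = (drive > np.percentile(drive, 90)).astype(float)  # sparse winners
          W_motor += lr * np.outer(hub, motor)    # Hebbian: co-active units bind
          W_visual += lr * np.outer(hub, visual)

      # After training, largely non-overlapping hub assemblies respond to the two
      # groundings, while the hub area as a whole participates in both circuits.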

  15. Range management visual impacts

    Science.gov (United States)

    Bruce R. Brown; David Kissel

    1979-01-01

    Historical overgrazing of western public rangelands has resulted in the passage of the Public Rangeland Improvement Act of 1978. The main purpose of this Act is to improve unsatisfactory range conditions. Adverse visual impacts are a contributing factor to unfavorable range conditions. These visual impacts can be identified in three categories of range management: range...

  16. Human Object-Similarity Judgments Reflect and Transcend the Primate-IT Object Representation.

    Science.gov (United States)

    Mur, Marieke; Meys, Mirjam; Bodurka, Jerzy; Goebel, Rainer; Bandettini, Peter A; Kriegeskorte, Nikolaus

    2013-01-01

    Primate inferior temporal (IT) cortex is thought to contain a high-level representation of objects at the interface between vision and semantics. This suggests that the perceived similarity of real-world objects might be predicted from the IT representation. Here we show that objects that elicit similar activity patterns in human IT (hIT) tend to be judged as similar by humans. The IT representation explained the human judgments better than early visual cortex, other ventral-stream regions, and a range of computational models. Human similarity judgments exhibited category clusters that reflected several categorical divisions that are prevalent in the IT representation of both human and monkey, including the animate/inanimate and the face/body division. Human judgments also reflected the within-category representation of IT. However, the judgments transcended the IT representation in that they introduced additional categorical divisions. In particular, human judgments emphasized human-related additional divisions between human and non-human animals and between man-made and natural objects. hIT was more similar to monkey IT than to human judgments. One interpretation is that IT has evolved visual-feature detectors that distinguish between animates and inanimates and between faces and bodies because these divisions are fundamental to survival and reproduction for all primate species, and that other brain systems serve to more flexibly introduce species-dependent and evolutionarily more recent divisions.
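    The comparison between the hIT representation and similarity judgments is typically carried out with representational similarity analysis; a minimal sketch follows, using hypothetical response patterns and a judged-dissimilarity matrix (the condition counts, voxel counts, and distance measures are illustrative, not the study's):

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.stats import spearmanr

      n_conditions = 96
      patterns = np.random.rand(n_conditions, 300)          # conditions x voxels
      judged = np.random.rand(n_conditions, n_conditions)   # pairwise dissimilarity
      judged = (judged + judged.T) / 2.0                     # symmetrize

      brain_rdm = pdist(patterns, metric='correlation')      # 1 - Pearson r
      judgment_rdm = judged[np.triu_indices(n_conditions, k=1)]

      rho, p = spearmanr(brain_rdm, judgment_rdm)
      print(f"Brain-behavior RDM correlation (Spearman rho): {rho:.3f}, p = {p:.3g}")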

  17. Human object-similarity judgments reflect and transcend the primate-IT object representation

    Directory of Open Access Journals (Sweden)

    Marieke Mur

    2013-03-01

    Primate inferior temporal (IT) cortex is thought to contain a high-level representation of objects at the interface between vision and semantics. This suggests that the perceived similarity of real-world objects might be predicted from the IT representation. Here we show that objects that elicit similar activity patterns in human IT tend to be judged as similar by humans. The IT representation explained the human judgments better than early visual cortex, other ventral stream regions, and a range of computational models. Human similarity judgments exhibited category clusters that reflected several categorical divisions that are prevalent in the IT representation of both human and monkey, including the animate/inanimate and the face/body division. Human judgments also reflected the within-category representation of IT. However, the judgments transcended the IT representation in that they introduced additional categorical divisions. In particular, human judgments emphasized human-related additional divisions between human and nonhuman animals and between man-made and natural objects. Human IT was more similar to monkey IT than to human judgments. One interpretation is that IT has evolved visual feature detectors that distinguish between animates and inanimates and between faces and bodies because these divisions are fundamental to survival and reproduction for all primate species, and that other brain systems serve to more flexibly introduce species-dependent and evolutionarily more recent divisions.

  18. Eyetracking reveals multiple-category use in induction.

    Science.gov (United States)

    Chen, Stephanie Y; Ross, Brian H; Murphy, Gregory L

    2016-07-01

    Category information is used to predict properties of new category members. When categorization is uncertain, people often rely on only one, most likely category to make predictions. Yet studies of perception and action often conclude that people combine multiple sources of information near-optimally. We present a perception-action analog of category-based induction using eye movements as a measure of prediction. The categories were objects of different shapes that moved in various directions. Experiment 1 found that people integrated information across categories in predicting object motion. The results of Experiment 2 suggest that the integration of information found in Experiment 1 was not a result of explicit strategies. Experiment 3 tested the role of explicit categorization, finding that making a categorization judgment, even an uncertain one, stopped people from using multiple categories in our eye-movement task. Experiment 4 found that induction was indeed based on category-level predictions rather than associations between object properties and directions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
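    The contrast between single-category and multiple-category prediction is usually formalized as follows (a standard formulation from rational models of category-based induction, given here for illustration rather than as the authors' exact model); in LaTeX:

      % Predicting property y for stimulus x under category uncertainty:
      % multiple-category use marginalizes over categories k, whereas
      % single-category use conditions on the single most likely category.
      \[
        P(y \mid x) = \sum_{k} P(y \mid k)\, P(k \mid x)
        \qquad\text{vs.}\qquad
        P_{\mathrm{single}}(y \mid x) = P(y \mid k^{*}),\quad
        k^{*} = \arg\max_{k} P(k \mid x).
      \]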

  19. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. PRECEDENCE AS A PSYCHOLINGUISTIC CATEGORY

    Directory of Open Access Journals (Sweden)

    Panarina Nadezhda Sergeevna

    2015-06-01

    The use of particular linguistic units by members of a linguacultural community as the most preferred verbal actions need not involve verbal operations on culturally specific knowledge. Analysis of the psychosocial mechanism by which such knowledge is generated and verbalized makes it possible to define precedence as a characteristic of meaning that is realized in a speech act. The development of precedent meaning necessarily involves not only the generation of the definitional component but also its entry into the structure of a culturological component of meaning. The culturological component reflects a relationship, meaningful to the person, between the subject-conceptual component of meaning and the other elements of the speech situation. This relationship is important because grasping its content presents the person with their social identity. Until a person understands the content of the relationship represented by the culturological component, the use of the corresponding linguistic units to nominate new objects of reality is a supraliminal appeal to precedent knowledge, that is, a speech act. For new acts of usage, however, what matters is the quality of the relationship as a characteristic of the stability of the cultural group, and the use of the linguistic unit acquires a new function. When the culturological component of the meaning is excluded from generalization as irrelevant, and the core of the meaning comes to consist of newer features more relevant to usage, the inner form of the precedent meaning can no longer be realized. The outer form remains relevant, since members of the linguaculture retain it as the preferred form of usage. In this case the linguistic unit is merely a tool unrelated to the verbal representation of a socially significant attitude, and its use is a speech operation, a way to perform different speech acts