WorldWideScience

Sample records for visual object categories

  1. Visual object recognition and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    This thesis is based on seven published papers. The majority of the papers address two topics in visual object recognition: (i) category-effects at pre-semantic stages, and (ii) the integration of visual elements into elaborate shape descriptions corresponding to whole objects or large object parts … (shape configuration). In the early writings these two topics were examined more or less independently. In later works, findings concerning category-effects and shape configuration merge into an integrated model, termed RACE, advanced to explain category-effects arising at pre-semantic stages in visual … in visual long-term memory. In the thesis it is described how this simple model can account for a wide range of findings on category-specificity in both patients with brain damage and normal subjects. Finally, two hypotheses regarding the neural substrates of the model's components - and how activation …

  2. Category-specificity in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been … demonstrated in neurologically intact subjects, but the findings are contradictory and there is no agreement as to why category-effects arise. This article presents a Pre-semantic Account of Category Effects (PACE) in visual object recognition. PACE assumes two processing stages: shape configuration (the … binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies …

  3. Category-specificity in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2009-01-01

    binding of shape elements into elaborate shape descriptions) and selection (among competing representations in visual long-term memory), which are held to be differentially affected by the structural similarity between objects. Drawing on evidence from clinical studies, experimental studies...

  4. Normal and abnormal category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not, as some brain injured patients are more impaired in recognizing natural objects than artefacts while others show the opposite impairment. In an attempt to explain category-sp...

  5. Object-graphs for context-aware visual category discovery.

    Science.gov (United States)

    Lee, Yong Jae; Grauman, Kristen

    2012-02-01

    How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
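
    A much-simplified version of the object-graph idea (describe an unknown region by the familiar-category detections around it, accumulated over nested spatial regions above and below it) is sketched below. The function name, the ring-based layout, and the toy detections are assumptions for illustration, not the authors' descriptor.

      import numpy as np

      def object_graph_descriptor(region_y, detections, n_categories, n_rings=3, ring_px=100):
          """Simplified 2D object-graph-style descriptor.

          region_y   : vertical image coordinate of the unfamiliar region's centre
          detections : list of (category_id, y_centre, confidence) for familiar objects
          Returns familiar-category evidence above/below the region, accumulated
          over nested spatial rings, flattened into one feature vector."""
          desc = np.zeros((n_rings, 2, n_categories))        # ring x {above, below} x category
          for cat, y, conf in detections:
              dist = abs(y - region_y)
              side = 0 if y < region_y else 1                # above vs. below the unknown region
              for r in range(n_rings):
                  if dist <= (r + 1) * ring_px:              # rings are nested, hence cumulative
                      desc[r, side, cat] += conf
          return desc.ravel()

      # Toy usage: two familiar-category detections around an unknown region centred at y = 250
      feats = object_graph_descriptor(250.0, [(0, 120.0, 0.9), (1, 400.0, 0.7)], n_categories=2)
      print(feats.shape)   # (3 rings * 2 sides * 2 categories,) = (12,)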

  6. Establishing Visual Category Boundaries between Objects: A PET Study

    Science.gov (United States)

    Saumier, Daniel; Chertkow, Howard; Arguin, Martin; Whatmough, Cristine

    2005-01-01

    Individuals with Alzheimer's disease (AD) often have problems in recognizing common objects. This visual agnosia may stem from difficulties in establishing appropriate visual boundaries between visually similar objects. In support of this hypothesis, Saumier, Arguin, Chertkow, and Renfrew (2001) showed that AD subjects have difficulties in…

  7. Visual Working Memory Capacity for Objects from Different Categories: A Face-Specific Maintenance Effect

    Science.gov (United States)

    Wong, Jason H.; Peterson, Matthew S.; Thompson, James C.

    2008-01-01

    The capacity of visual working memory was examined when complex objects from different categories were remembered. Previous studies have not examined how visual similarity affects object memory, though it has long been known that similar-sounding phonological information interferes with rehearsal in auditory working memory. Here, experiments…

  8. Decoding visual object categories from temporal correlations of ECoG signals.

    Science.gov (United States)

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
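
    The decoding approach described above can be illustrated with a short sketch: per-trial inter-electrode correlations are flattened into a feature vector and fed to a linear classifier. Everything below is a toy reconstruction on synthetic data; the array sizes, the use of linear discriminant analysis, and the two-category setup are assumptions, not the authors' exact pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_electrodes, n_samples = 100, 16, 200
      ecog = rng.standard_normal((n_trials, n_electrodes, n_samples))  # stand-in for real ECoG epochs
      labels = rng.integers(0, 2, n_trials)                            # two object categories

      def correlation_features(trial):
          """Upper-triangular inter-electrode Pearson correlations for one trial."""
          r = np.corrcoef(trial)                     # electrodes x electrodes
          iu = np.triu_indices_from(r, k=1)
          return r[iu]

      X = np.array([correlation_features(t) for t in ecog])
      acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
      print(round(acc, 2))   # ~0.5 here, since the synthetic data carry no category signal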

  9. The role of object categories in hybrid visual and memory search

    Science.gov (United States)

    Cunningham, Corbin A.; Wolfe, Jeremy M.

    2014-01-01

    In hybrid search, observers (Os) search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RT) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g. this apple in this pose). Typical real world tasks involve more broadly defined sets of stimuli (e.g. any “apple” or, perhaps, “fruit”). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, Os searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PMID:24661054

  10. The role of object categories in hybrid visual and memory search.

    Science.gov (United States)

    Cunningham, Corbin A; Wolfe, Jeremy M

    2014-08-01

    In hybrid search, observers search for any of several possible targets in a visual display containing distracting items and, perhaps, a target. Wolfe (2012) found that response times (RTs) in such tasks increased linearly with increases in the number of items in the display. However, RT increased linearly with the log of the number of items in the memory set. In earlier work, all items in the memory set were unique instances (e.g., this apple in this pose). Typical real-world tasks involve more broadly defined sets of stimuli (e.g., any "apple" or, perhaps, "fruit"). The present experiments show how sets or categories of targets are handled in joint visual and memory search. In Experiment 1, searching for a digit among letters was not like searching for targets from a 10-item memory set, though searching for targets from an N-item memory set of arbitrary alphanumeric characters was like searching for targets from an N-item memory set of arbitrary objects. In Experiment 2, observers searched for any instance of N sets or categories held in memory. This hybrid search was harder than search for specific objects. However, memory search remained logarithmic. Experiment 3 illustrates the interaction of visual guidance and memory search when a subset of visual stimuli are drawn from a target category. Furthermore, we outline a conceptual model, supported by our results, defining the core components that would be necessary to support such categorical hybrid searches. PsycINFO Database Record (c) 2014 APA, all rights reserved.
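
    The linear/logarithmic pattern reported in records 9 and 10 can be summarized with a simple descriptive equation, RT ≈ a + b·(display size) + c·log2(memory set size). The sketch below just evaluates that toy model; the parameter values are made up for illustration and are not estimates from the paper.

      import numpy as np

      def hybrid_search_rt(n_display, n_memory, a=400.0, b=40.0, c=120.0):
          """Toy hybrid-search RT model: linear in display size, logarithmic in
          memory set size (a, b, c are illustrative constants, not fitted values)."""
          return a + b * n_display + c * np.log2(n_memory)

      for mem in (1, 2, 4, 8, 16):
          print(mem, "memorized targets ->", round(hybrid_search_rt(n_display=8, n_memory=mem), 1), "ms")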

  11. Visual memory needs categories

    OpenAIRE

    Olsson, Henrik; Poom, Leo

    2005-01-01

    Capacity limitations in the way humans store and process information in working memory have been extensively studied, and several memory systems have been distinguished. In line with previous capacity estimates for verbal memory and memory for spatial information, recent studies suggest that it is possible to retain up to four objects in visual working memory. The objects used have typically been categorically different colors and shapes. Because knowledge about categories is stored in long-t...

  12. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects

    Science.gov (United States)

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    The categories of images containing visual objects can be successfully recognized using single-trial electroencephalography (EEG) measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and could direct us to select effective EEG features for classifying visual objects.
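
    A minimal sketch of the component-based classification idea is given below: mean amplitudes inside assumed component windows serve as features for a Fisher/linear discriminant classifier, first per component and then for the combined feature set. The data are synthetic, and the window latencies, sampling rate, and channel count are assumptions for illustration only.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      n_trials, n_channels, sfreq = 200, 32, 250                 # synthetic 1 s epochs at 250 Hz
      eeg = rng.standard_normal((n_trials, n_channels, sfreq))
      labels = rng.integers(0, 4, n_trials)                      # faces / buildings / cats / cars

      # Illustrative component windows in seconds (exact latencies are assumptions)
      windows = {"P1": (0.08, 0.13), "N1": (0.13, 0.20), "P2a": (0.20, 0.26), "P2b": (0.26, 0.32)}

      def component_features(epoch, names):
          """Mean amplitude per channel inside each requested component window."""
          feats = [epoch[:, int(t0 * sfreq):int(t1 * sfreq)].mean(axis=1)
                   for t0, t1 in (windows[n] for n in names)]
          return np.concatenate(feats)

      for combo in (("N1",), ("P1", "N1", "P2a", "P2b")):
          X = np.array([component_features(e, combo) for e in eeg])
          acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
          print(combo, round(acc, 3))   # chance is 0.25 on these random data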

  13. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    Science.gov (United States)

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  14. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
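
    The voxel-wise modeling procedure (fit an encoding model per voxel by linear regression, then score it by the variance it predicts in held-out data) can be sketched as below. The feature matrices, the 1000/386 train/test split, and the use of ridge regression are assumptions standing in for whatever regularized regression and cross-validation scheme the authors actually used.

      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(2)
      n_train, n_test, n_feat, n_voxels = 1000, 386, 50, 200     # arbitrary split of 1386 scenes
      X_train = rng.standard_normal((n_train, n_feat))           # e.g. Fourier-power features per scene
      X_test = rng.standard_normal((n_test, n_feat))
      true_w = rng.standard_normal((n_feat, n_voxels))           # synthetic "ground-truth" weights
      Y_train = X_train @ true_w + rng.standard_normal((n_train, n_voxels))
      Y_test = X_test @ true_w + rng.standard_normal((n_test, n_voxels))

      model = Ridge(alpha=10.0).fit(X_train, Y_train)            # one linear encoding model per voxel
      r2_per_voxel = r2_score(Y_test, model.predict(X_test), multioutput="raw_values")
      print(round(float(r2_per_voxel.mean()), 3))                # variance predicted in held-out data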

  15. The Role of Sensory-Motor Information in Object Recognition: Evidence from Category-Specific Visual Agnosia

    Science.gov (United States)

    Wolk, D.A.; Coslett, H.B.; Glosser, G.

    2005-01-01

    The role of sensory-motor representations in object recognition was investigated in experiments involving AD, a patient with mild visual agnosia who was impaired in the recognition of visually presented living as compared to non-living entities. AD named visually presented items for which sensory-motor information was available significantly more…

  16. Color descriptors for object category recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2008-01-01

    Category recognition is important to access visual information on the level of objects. A common approach is to compute image descriptors first and then to apply machine learning to achieve category recognition from annotated examples. As a consequence, the choice of image descriptors is of great …

  17. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions.

    Science.gov (United States)

    Schendan, Haline E; Ganis, Giorgio

    2015-01-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition.

  18. Top-down modulation of visual processing and knowledge after 250 ms supports object constancy of category decisions

    Directory of Open Access Journals (Sweden)

    Haline E. Schendan

    2015-09-01

    People categorize objects more slowly when visual input is highly impoverished instead of optimal. While bottom-up models may explain a decision with optimal input, perceptual hypothesis testing (PHT) theories implicate top-down processes with impoverished input. Brain mechanisms and the time course of PHT are largely unknown. This event-related potential study used a neuroimaging paradigm that implicated prefrontal cortex in top-down modulation of occipitotemporal cortex. Subjects categorized more impoverished and less impoverished real and pseudo objects. PHT theories predict larger impoverishment effects for real than pseudo objects because top-down processes modulate knowledge only for real objects, but different PHT variants predict different timing. Consistent with parietal-prefrontal PHT variants, around 250 ms, the earliest impoverished real object interaction started on an N3 complex, which reflects interactive cortical activity for object cognition. N3 impoverishment effects localized to both prefrontal and occipitotemporal cortex for real objects only. The N3 also showed knowledge effects by 230 ms that localized to occipitotemporal cortex. Later effects reflected (a) word meaning in temporal cortex during the N400, (b) internal evaluation of prior decision and memory processes and secondary higher-order memory involving anterotemporal parts of a default mode network during posterior positivity (P600), and (c) response related activity in posterior cingulate during an anterior slow wave (SW) after 700 ms. Finally, response activity in supplementary motor area during a posterior SW after 900 ms showed impoverishment effects that correlated with RTs. Convergent evidence from studies of vision, memory, and mental imagery which reflects purely top-down inputs, indicates that the N3 reflects the critical top-down processes of PHT. A hybrid multiple-state interactive, PHT and decision theory best explains the visual constancy of object cognition.

  19. Now you see it, now you don’t: The context dependent nature of category-effects in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Toft, Kristian Olesen

    2011-01-01

    In two experiments, we test predictions regarding processing advantages/disadvantages for natural objects and artefacts in visual object recognition. Varying three important parameters (degree of perceptual differentiation, stimulus format, and stimulus exposure duration), we show how different … category-effects are products of common operations which are differentially affected by the structural similarity among objects (with natural objects being more structurally similar than artefacts). The potentially most important aspect of the present study is the demonstration that category-effects are very context dependent …

  20. Category-Specific Visual Recognition and Aging from the PACE Theory Perspective: Evidence for a Presemantic Deficit in Aging Object Recognition

    DEFF Research Database (Denmark)

    Bordaberry, Pierre; Gerlach, Christian; Lenoble, Quentin

    2016-01-01

    Background/Study Context: The objective of this study was to investigate the object recognition deficit in aging. Age-related declines were examined from the presemantic account of category effects (PACE) theory perspective (Gerlach, 2009, Cognition, 111, 281–301). This view assumes that the structural similarity/dissimilarity inherent in living and nonliving objects, respectively, can account for a wide range of category-specific effects. Methods: In two experiments on object recognition, young (36 participants, 18–27 years) and older (36 participants, 53–69 years) adult participants … in the selection stage of the PACE theory (visual long-term memory matching) could be responsible for these impairments. Indeed, the older group showed a deficit when this stage was most relevant. This article emphasizes the critical need to take into account the structural component of the stimuli and the type …

  1. Two Types of Visual Objects

    Directory of Open Access Journals (Sweden)

    Skrzypulec Błażej

    2015-06-01

    While it is widely accepted that human vision represents objects, it is less clear which of the various philosophical notions of ‘object’ adequately characterizes visual objects. In this paper, I show that within contemporary cognitive psychology visual objects are characterized in two distinct, incompatible ways. On the one hand, models of visual organization describe visual objects in terms of combinations of features, in accordance with the philosophical bundle theories of objects. However, models of visual persistence apply a notion of visual objects that is more similar to that endorsed in philosophical substratum theories. Here I discuss arguments that might show either that only one of the above notions of visual objects is adequate in the context of human vision, or that the category of visual objects is not uniform and contains entities properly characterized by different philosophical conceptions.

  2. Mere exposure alters category learning of novel objects

    Directory of Open Access Journals (Sweden)

    Jonathan R Folstein

    2010-08-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  3. Mere exposure alters category learning of novel objects.

    Science.gov (United States)

    Folstein, Jonathan R; Gauthier, Isabel; Palmeri, Thomas J

    2010-01-01

    We investigated how mere exposure to complex objects with correlated or uncorrelated object features affects later category learning of new objects not seen during exposure. Correlations among pre-exposed object dimensions influenced later category learning. Unlike other published studies, the collection of pre-exposed objects provided no information regarding the categories to be learned, ruling out unsupervised or incidental category learning during pre-exposure. Instead, results are interpreted with respect to statistical learning mechanisms, providing one of the first demonstrations of how statistical learning can influence visual object learning.

  4. Perceptual differentiation and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Gade, A

    1999-01-01

    The purpose of the present PET study was (i) to investigate the neural correlates of object recognition, i.e. the matching of visual forms to memory, and (ii) to test the hypothesis that this process is more difficult for natural objects than for artefacts. This was done by using object decision … tasks where subjects decided whether pictures represented real objects or non-objects. The object decision tasks differed in their difficulty (the degree of perceptual differentiation needed to perform them) and in the category of the real objects used (natural objects versus artefacts). A clear effect … be the neural correlate of matching visual forms to memory, and the amount of activation in these regions may correspond to the degree of perceptual differentiation required for recognition to occur. With respect to behaviour, it took significantly longer to make object decisions on natural objects than …

  5. Object representations in visual memory: evidence from visual illusions.

    Science.gov (United States)

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  6. Incremental Visualizer for Visible Objects

    DEFF Research Database (Denmark)

    Bukauskas, Linas; Bøhlen, Michael Hanspeter

    This paper discusses the integration of a database back-end and a visualizer front-end into one tightly coupled system. The main aim, which we achieve, is to reduce the data pipeline from database to visualization by using incremental data extraction of visible objects in fly-through scenarios. We … also argue that passing only relevant data from the database will substantially reduce the overall load of the visualization system. We propose the system Incremental Visualizer for Visible Objects (IVVO), which considers visible objects and enables incremental visualization along the observer movement … path. IVVO is a novel solution which allows data to be visualized and loaded on the fly from the database and which regards visibilities of objects. We run a set of experiments to show that IVVO is feasible in terms of I/O operations and CPU load. We consider the example of data which uses …
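
    The core idea (query only the objects that become newly visible as the observer moves, and load them incrementally) can be sketched as follows. This is a toy stand-in: the range-based visibility test, the function names, and the synthetic object cloud are illustrative assumptions, not the IVVO algorithm itself.

      import numpy as np

      def visible_objects(observer_xyz, objects_xyz, max_range=200.0):
          """Toy visibility query: objects within viewing range of the observer
          (a real system would also handle occlusion and the view frustum)."""
          d = np.linalg.norm(objects_xyz - observer_xyz, axis=1)
          return set(np.where(d <= max_range)[0])

      def fly_through(path, objects_xyz, max_range=200.0):
          """Yield only the objects that become newly visible at each step of the path."""
          loaded = set()
          for pos in path:
              vis = visible_objects(np.asarray(pos, dtype=float), objects_xyz, max_range)
              yield sorted(vis - loaded)             # these would be fetched incrementally
              loaded |= vis

      rng = np.random.default_rng(6)
      objs = rng.uniform(0, 500, size=(200, 3))
      for step, new in enumerate(fly_through([(0, 0, 0), (150, 0, 0), (300, 0, 0)], objs)):
          print("step", step, "->", len(new), "newly visible objects")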

  7. Creating Objects and Object Categories for Studying Perception and Perceptual Learning

    Science.gov (United States)

    Hauffen, Karin; Bart, Eugene; Brady, Mark; Kersten, Daniel; Hegdé, Jay

    2012-01-01

    In order to quantitatively study object perception, be it perception by biological systems or by machines, one needs to create objects and object categories with precisely definable, preferably naturalistic, properties [1]. Furthermore, for studies on perceptual learning, it is useful to create novel objects and object categories (or object classes) with such properties [2]. Many innovative and useful methods currently exist for creating novel objects and object categories [3-6] (also see refs. [7,8]). However, generally speaking, the existing methods have three broad types of shortcomings. First, shape variations are generally imposed by the experimenter [5,9,10], and may therefore be different from the variability in natural categories, and optimized for a particular recognition algorithm. It would be desirable to have the variations arise independently of the externally imposed constraints. Second, the existing methods have difficulty capturing the shape complexity of natural objects [11-13]. If the goal is to study natural object perception, it is desirable for objects and object categories to be naturalistic, so as to avoid possible confounds and special cases. Third, it is generally hard to quantitatively measure the available information in the stimuli created by conventional methods. It would be desirable to create objects and object categories where the available information can be precisely measured and, where necessary, systematically manipulated (or 'tuned'). This allows one to formulate the underlying object recognition tasks in quantitative terms. Here we describe a set of algorithms, or methods, that meet all three of the above criteria. Virtual morphogenesis (VM) creates novel, naturalistic virtual 3-D objects called 'digital embryos' by simulating the biological process of embryogenesis [14]. Virtual phylogenesis (VP) creates novel, naturalistic object categories by simulating the evolutionary process of natural selection [9,12,13]. Objects and object categories created …

  8. Lifting to cluster-tilting objects in higher cluster categories

    OpenAIRE

    Liu, Pin

    2008-01-01

    In this note, we consider the $d$-cluster-tilted algebras, the endomorphism algebras of $d$-cluster-tilting objects in $d$-cluster categories. We show that a tilting module over such an algebra lifts to a $d$-cluster-tilting object in this $d$-cluster category.

  9. Category-based attentional guidance can operate in parallel for multiple target objects.

    Science.gov (United States)

    Jenkins, Michael; Grubert, Anna; Eimer, Martin

    2018-04-30

    The question whether the control of attention during visual search is always feature-based or can also be based on the category of objects remains unresolved. Here, we employed the N2pc component as an on-line marker for target selection processes to compare the efficiency of feature-based and category-based attentional guidance. Two successive displays containing pairs of real-world objects (line drawings of kitchen or clothing items) were separated by a 10 ms SOA. In Experiment 1, target objects were defined by their category. In Experiment 2, one specific visual object served as target (exemplar-based search). On different trials, targets appeared either in one or in both displays, and participants had to report the number of targets (one or two). Target N2pc components were larger and emerged earlier during exemplar-based search than during category-based search, demonstrating the superior efficiency of feature-based attentional guidance. On trials where target objects appeared in both displays, both targets elicited N2pc components that overlapped in time, suggesting that attention was allocated in parallel to these target objects. Critically, this was the case not only in the exemplar-based task, but also when targets were defined by their category. These results demonstrate that attention can be guided by object categories, and that this type of category-based attentional control can operate concurrently for multiple target objects. Copyright © 2018 Elsevier B.V. All rights reserved.
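
    For readers unfamiliar with the N2pc measure used in this and several other records, a rough sketch of how it is computed from epoched EEG is shown below. The electrode pair (PO7/PO8), the 200-300 ms window, and the array layout are common conventions but are assumptions here, not details taken from this particular study.

      import numpy as np

      def n2pc(po7, po8, target_side, sfreq=500, window=(0.20, 0.30)):
          """Rough N2pc estimate: mean contralateral-minus-ipsilateral voltage at
          PO7/PO8 within an assumed 200-300 ms post-stimulus window.

          po7, po8    : trials x samples arrays of epoched EEG (left / right electrode)
          target_side : array of 'L'/'R' giving the target hemifield on each trial
          """
          i0, i1 = int(window[0] * sfreq), int(window[1] * sfreq)
          left_target = (np.asarray(target_side)[:, None] == "L")
          contra = np.where(left_target, po8, po7)       # electrode opposite the target side
          ipsi = np.where(left_target, po7, po8)
          return float((contra[:, i0:i1] - ipsi[:, i0:i1]).mean())

      rng = np.random.default_rng(7)
      po7, po8 = rng.standard_normal((2, 120, 300))      # 120 trials, 0.6 s epochs at 500 Hz
      sides = rng.choice(["L", "R"], size=120)
      print(n2pc(po7, po8, sides))                       # ~0 on random data; negative for real targets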

  10. Understanding visualization: a formal approach using category theory and semiotics.

    Science.gov (United States)

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  11. Categorization and category effects in normal object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Gade, Anders

    2000-01-01

    … and that the categorization of artefacts, as opposed to the categorization of natural objects, is based, in part, on action knowledge mediated by the left premotor cortex. However, because artefacts and natural objects often caused activation in the same regions within tasks, processing of these categories is not totally …

  12. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers.

    Science.gov (United States)

    Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-08-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
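
    Cross-situational learning of the kind described above boils down to accumulating word-object co-occurrence counts across individually ambiguous trials. The sketch below shows that bookkeeping with made-up pseudo-words and object labels; the trial structure is a toy assumption, not the authors' training set.

      import numpy as np

      words = ["bosa", "gasser", "manu", "colat"]          # hypothetical pseudo-words
      objects = ["obj_A", "obj_B", "obj_C", "obj_D"]
      co = np.zeros((len(words), len(objects)))

      # Each trial presents several words and several objects without saying which goes with which
      trials = [({"bosa", "gasser"}, {"obj_A", "obj_B"}),
                ({"bosa", "manu"}, {"obj_A", "obj_C"}),
                ({"gasser", "colat"}, {"obj_B", "obj_D"}),
                ({"manu", "colat"}, {"obj_C", "obj_D"})]

      for heard, seen in trials:
          for w in heard:
              for o in seen:
                  co[words.index(w), objects.index(o)] += 1   # accumulate co-occurrence counts

      # Across trials the correct pairings dominate, so a learner can read them off the counts
      for i, w in enumerate(words):
          print(w, "->", objects[int(np.argmax(co[i]))])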

  13. Prior knowledge of category size impacts visual search.

    Science.gov (United States)

    Wu, Rachel; McGee, Brianna; Echiverri, Chelsea; Zinszer, Benjamin D

    2018-03-30

    Prior research has shown that category search can be similar to one-item search (as measured by the N2pc ERP marker of attentional selection) for highly familiar, smaller categories (e.g., letters and numbers) because the finite set of items in a category can be grouped into one unit to guide search. Other studies have shown that larger, more broadly defined categories (e.g., healthy food) also can elicit N2pc components during category search, but the amplitude of these components is typically attenuated. Two experiments investigated whether the perceived size of a familiar category impacts category and exemplar search. We presented participants with 16 familiar company logos: 8 from a smaller category (social media companies) and 8 from a larger category (entertainment/recreation manufacturing companies). The ERP results from Experiment 1 revealed that, in a two-item search array, search was more efficient for the smaller category of logos compared to the larger category. In a four-item search array (Experiment 2), where two of the four items were placeholders, search was largely similar between the category types, but there was more attentional capture by nontarget members from the same category as the target for smaller rather than larger categories. These results support a growing literature on how prior knowledge of categories affects attentional selection and capture during visual search. We discuss the implications of these findings in relation to assessing cognitive abilities across the lifespan, given that prior knowledge typically increases with age. © 2018 Society for Psychophysiological Research.

  14. Large-scale weakly supervised object localization via latent category learning.

    Science.gov (United States)

    Wang, Chong; Huang, Kaiqi; Ren, Weiqiang; Zhang, Junge; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose the online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the large-scale ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
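
    The first step described above (discovering latent categories from image-level data with latent semantic analysis) can be approximated with an SVD-based topic decomposition of a region-by-visual-word count matrix, as in the sketch below. The data are synthetic and the discrimination-based category selection step is only hinted at in a comment; none of this is the authors' actual implementation.

      import numpy as np
      from sklearn.decomposition import TruncatedSVD

      rng = np.random.default_rng(3)
      # Rows: candidate image regions; columns: visual-word counts (synthetic bag-of-words)
      counts = rng.poisson(1.0, size=(500, 300)).astype(float)

      # LSA-style decomposition: latent "categories" may correspond to objects, parts or background
      lsa = TruncatedSVD(n_components=20, random_state=0).fit(counts)
      loadings = lsa.transform(counts)                    # region x latent-category loadings

      # In the paper, the category containing the target object is then chosen by evaluating each
      # latent category's discrimination; here we only show which category each region loads on most.
      dominant = np.argmax(np.abs(loadings), axis=1)
      print(np.bincount(dominant, minlength=20))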

  15. Category Specificity in Normal Episodic Learning: Applications to Object Recognition and Category-Specific Agnosia

    Science.gov (United States)

    Bukach, Cindy M.; Bub, Daniel N.; Masson, Michael E. J.; Lindsay, D. Stephen

    2004-01-01

    Studies of patients with category-specific agnosia (CSA) have given rise to multiple theories of object recognition, most of which assume the existence of a stable, abstract semantic memory system. We applied an episodic view of memory to questions raised by CSA in a series of studies examining normal observers' recall of newly learned attributes…

  16. Basic level category structure emerges gradually across human ventral visual cortex.

    Science.gov (United States)

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
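
    The cohesion/distinctiveness analysis sketched in this abstract can be made concrete with a few lines of code: correlate condition-wise response patterns, then compare within-category and between-category similarity. The formulas below are a common simplification and the data are synthetic; they are not the authors' exact metrics.

      import numpy as np

      def cohesion_and_distinctiveness(patterns, labels):
          """Category cohesion = mean pattern correlation within a category;
          distinctiveness = 1 minus the mean correlation across categories."""
          r = np.corrcoef(patterns)                   # condition x condition similarity matrix
          labels = np.asarray(labels)
          same = labels[:, None] == labels[None, :]
          off_diag = ~np.eye(len(labels), dtype=bool)
          cohesion = r[same & off_diag].mean()
          distinctiveness = 1.0 - r[~same].mean()
          return cohesion, distinctiveness

      rng = np.random.default_rng(4)
      patterns = rng.standard_normal((12, 100))        # 12 conditions x 100 voxels (synthetic)
      labels = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]    # e.g. four basic-level categories
      print(cohesion_and_distinctiveness(patterns, labels))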

  17. Category-specific visual responses: an intracranial study comparing gamma, beta, alpha and ERP response selectivity

    Directory of Open Access Journals (Sweden)

    Juan R Vidal

    2010-11-01

    The specificity of neural responses to visual objects is a major topic in visual neuroscience. In humans, functional magnetic resonance imaging (fMRI) studies have identified several regions of the occipital and temporal lobe that appear specific to faces, letter-strings, scenes, or tools. Direct electrophysiological recordings in the visual cortical areas of epileptic patients have largely confirmed this modular organization, using either single-neuron peri-stimulus time-histograms or intracerebral event-related potentials (iERP). In parallel, a new research stream has emerged using high-frequency gamma-band activity (50-150 Hz) (GBR) and low-frequency alpha/beta activity (8-24 Hz) (ABR) to map functional networks in humans. An obvious question is now whether the functional organization of the visual cortex revealed by fMRI, ERP, GBR, and ABR coincide. We used direct intracerebral recordings in 18 epileptic patients to directly compare GBR, ABR, and ERP elicited by the presentation of seven major visual object categories (faces, scenes, houses, consonants, pseudowords, tools, and animals), in relation to previous fMRI studies. Remarkably, both GBR and iERP showed strong category-specificity that was in many cases sufficient to infer stimulus object category from the neural response at the single-trial level. However, we also found a strong discrepancy between the selectivity of GBR, ABR, and ERP, with less than 10% of spatial overlap between sites eliciting the same category-specificity. Overall, we found that selective neural responses to visual objects were broadly distributed in the brain with a prominent spatial cluster located in the posterior temporal cortex. Moreover, the different neural markers (GBR, ABR, and iERP) that elicit selectivity towards specific visual object categories present little spatial overlap, suggesting that the information content of each marker can uniquely characterize high-level visual information in the brain.

  18. Task-relevant perceptual features can define categories in visual memory too.

    Science.gov (United States)

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  19. Category-based guidance of spatial attention during visual search for feature conjunctions.

    Science.gov (United States)

    Nako, Rebecca; Grubert, Anna; Eimer, Martin

    2016-10-01

    The question whether alphanumerical category is involved in the control of attentional target selection during visual search remains a contentious issue. We tested whether category-based attentional mechanisms would guide the allocation of attention under conditions where targets were defined by a combination of alphanumerical category and a basic visual feature, and search displays could contain both targets and partially matching distractor objects. The N2pc component was used as an electrophysiological marker of attentional object selection in tasks where target objects were defined by a conjunction of color and category (Experiment 1) or shape and category (Experiment 2). Some search displays contained the target or a nontarget object that matched either the target color/shape or its category among 3 nonmatching distractors. In other displays, the target and a partially matching nontarget object appeared together. N2pc components were elicited not only by targets and by color- or shape-matching nontargets, but also by category-matching nontarget objects, even on trials where a target was present in the same display. On these trials, the summed N2pc components to the 2 types of partially matching nontargets were initially equal in size to the target N2pc, suggesting that attention was allocated simultaneously and independently to all objects with target-matching features during the early phase of attentional processing. Results demonstrate that alphanumerical category is a genuine guiding feature that can operate in parallel with color or shape information to control the deployment of attention during visual search. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. The effect of category learning on attentional modulation of visual cortex.

    Science.gov (United States)

    Folstein, Jonathan R; Fuller, Kelly; Howard, Dorothy; DePatie, Thomas

    2017-09-01

    Learning about visual object categories causes changes in the way we perceive those objects. One likely mechanism by which this occurs is the application of attention to potentially relevant objects. Here we test the hypothesis that category membership influences the allocation of attention, allowing attention to be applied not only to object features, but to entire categories. Participants briefly learned to categorize a set of novel cartoon animals after which EEG was recorded while participants distinguished between a target and non-target category. A second identical EEG session was conducted after two sessions of categorization practice. The category structure and task design allowed parametric manipulation of number of target features while holding feature frequency and category membership constant. We found no evidence that category membership influenced attentional selection: a postero-lateral negative component, labeled the selection negativity/N250, increased over time and was sensitive to number of target features, not target categories. In contrast, the right hemisphere N170 was not sensitive to target features. The P300 appeared sensitive to category in the first session, but showed a graded sensitivity to number of target features in the second session, possibly suggesting a transition from rule-based to similarity based categorization. Copyright © 2017. Published by Elsevier Ltd.

  1. Structural and effective connectivity reveals potential network-based influences on category-sensitive visual areas

    Directory of Open Access Journals (Sweden)

    Nicholas eFurl

    2015-05-01

    Visual category perception is thought to depend on brain areas that respond specifically when certain categories are viewed. These category-sensitive areas are often assumed to be modules (with some degree of processing autonomy) and to act predominantly on feedforward visual input. This modular view can be complemented by a view that treats brain areas as elements within more complex networks and as influenced by network properties. This network-oriented viewpoint is emerging from studies using either diffusion tensor imaging to map structural connections or effective connectivity analyses to measure how their functional responses influence each other. This literature motivates several hypotheses that predict category-sensitive activity based on network properties. Large, long-range fiber bundles such as inferior fronto-occipital, arcuate and inferior longitudinal fasciculi are associated with behavioural recognition and could play crucial roles in conveying backward influences on visual cortex from anterior temporal and frontal areas. Such backward influences could support top-down functions such as visual search and emotion-based visual modulation. Within visual cortex itself, areas sensitive to different categories appear well-connected (e.g., face areas connect to object- and motion-sensitive areas) and their responses can be predicted by backward modulation. Evidence supporting these propositions remains incomplete and underscores the need for better integration of DTI and functional imaging.

  2. Category-specific responses to faces and objects in primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Kari L Hoffman

    2008-03-01

    Auditory and visual signals often occur together, and the two sensory channels are known to influence each other to facilitate perception. The neural basis of this integration is not well understood, although other forms of multisensory influences have been shown to occur at surprisingly early stages of processing in cortex. Primary visual cortex neurons can show frequency-tuning to auditory stimuli, and auditory cortex responds selectively to certain somatosensory stimuli, supporting the possibility that complex visual signals may modulate early stages of auditory processing. To elucidate which auditory regions, if any, are responsive to complex visual stimuli, we recorded from auditory cortex and the superior temporal sulcus while presenting visual stimuli consisting of various objects, neutral faces, and facial expressions generated during vocalization. Both objects and conspecific faces elicited robust field potential responses in auditory cortex sites, but the responses varied by category: both neutral and vocalizing faces had a highly consistent negative component (N100) followed by a broader positive component (P180), whereas object responses were more variable in time and shape, but could be discriminated consistently from the responses to faces. The face response did not vary within the face category, i.e., for expressive vs. neutral face stimuli. The presence of responses for both objects and neutral faces suggests that auditory cortex receives highly informative visual input that is not restricted to those stimuli associated with auditory components. These results reveal selectivity for complex visual stimuli in a brain region conventionally described as non-visual unisensory cortex.

  3. Dependence of behavioral performance on material category in an object grasping task with monkeys.

    Science.gov (United States)

    Yokoi, Isao; Tachibana, Atsumichi; Minamimoto, Takafumi; Goda, Naokazu; Komatsu, Hidehiko

    2018-05-02

    Material perception is an essential part of our cognitive function that enables us to properly interact with our complex daily environment. One important aspect of material perception is its multimodal nature. When we see an object, we generally recognize its haptic properties as well as its visual properties. Consequently, one must examine behavior using real objects that are perceived both visually and haptically to fully understand the characteristics of material perception. As a first step, we examined whether there is any difference in the behavioral responses to different materials in monkeys trained to perform an object grasping task in which they saw and grasped rod-shaped real objects made of various materials. We found that the monkeys' behavior in the grasping task, measured based on the success rate and the pulling force, differed depending on the material category. Monkeys easily and correctly grasped objects of some materials, such as metal and glass, but failed to grasp objects of other materials. In particular, monkeys avoided grasping fur-covered objects. The differences in the behavioral responses to the material categories cannot be explained solely based on the degree of familiarity with the different materials. These results shed light on the organization of multimodal representation of materials, where their biological significance is an important factor. In addition, a monkey that avoided touching real fur-covered objects readily touched images of the same objects presented on a CRT display. This suggests employing real objects is important when studying behaviors related to material perception.

  4. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN, namely that it should be possible to interface different CCN models in a plug-and-play fashion, to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Visual object recognition and tracking

    Science.gov (United States)

    Chang, Chu-Yin (Inventor); English, James D. (Inventor); Tardella, Neil M. (Inventor)

    2010-01-01

    This invention describes a method for identifying and tracking an object from two-dimensional data pictorially representing the object. An object-tracking system processes the two-dimensional data using at least one tracker-identifier belonging to the system to provide an output signal containing: a) the type of the object, and/or b) the position or orientation of the object in three dimensions, and/or c) an articulation or shape change of the object in those three dimensions.

  6. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    Science.gov (United States)

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than those used in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
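
    The population read-out described in this record can be illustrated with a minimal sketch in Python. The spike-count data, cell counts, and classifier choice below are assumptions for illustration only; nothing of the authors' recordings or analysis pipeline is reproduced.

        # Minimal sketch: decode animate vs. inanimate category from simulated
        # population responses with a cross-validated linear classifier.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_trials, n_cells = 200, 60                   # hypothetical trial and cell counts
        labels = rng.integers(0, 2, n_trials)         # 0 = inanimate, 1 = animate

        # Simulated spike counts: a subset of cells carries a weak category signal.
        tuning = np.zeros(n_cells)
        tuning[:20] = 0.8                             # "category-informative" cells
        responses = rng.poisson(5.0, (n_trials, n_cells)) + np.outer(labels, tuning)

        # Cross-validated decoding accuracy; chance level is 0.5 for balanced categories.
        accuracy = cross_val_score(LinearSVC(dual=False), responses, labels, cv=5).mean()
        print(f"cross-validated decoding accuracy: {accuracy:.2f}")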

  7. Refining Visually Detected Object poses

    DEFF Research Database (Denmark)

    Holm, Preben; Petersen, Henrik Gordon

    2010-01-01

    to the particular object and in order to handle the demand for flexibility, there is an increasing demand for avoiding such dedicated mechanical alignment systems. Rather, it would be desirable to automatically locate and grasp randomly placed objects from tables, conveyor belts or even bins with a high accuracy...

  8. Category specific spatial dissociations of parallel processes underlying visual naming.

    Science.gov (United States)

    Conner, Christopher R; Chen, Gang; Pieters, Thomas A; Tandon, Nitin

    2014-10-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during the retrieval of distinct lexical categories. This approach was designed to overcome the issues of sparse sampling and individual variability inherent to invasive electrophysiology. Both noun and verb generation evoked overlapping, yet distinct nonhierarchical processes favoring ventral and dorsal visual streams, respectively. Notable differences in activity patterns were noted in Broca's area and superior lateral temporo-occipital regions (verb > noun) and in parahippocampal and fusiform cortices (noun > verb). Comparisons with functional magnetic resonance imaging (fMRI) results yielded a strong correlation of blood oxygen level-dependent signal and gamma power and an independent estimate of group size needed for fMRI studies of cognition. Our findings imply parallel, lexical category-specific processes and reconcile discrepancies between lesional and functional imaging studies. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    Science.gov (United States)

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect: the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which combines an early, left-lateralized category effect with a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  10. Aerial Object Following Using Visual Fuzzy Servoing

    OpenAIRE

    Olivares Méndez, Miguel Ángel; Mondragon Bernal, Ivan Fernando; Campoy Cervera, Pascual; Mejias Alvarez, Luis; Martínez Luna, Carol Viviana

    2011-01-01

    This article presents a visual servoing system to follow a 3D moving object by a Micro Unmanned Aerial Vehicle (MUAV). The presented control strategy is based only on the visual information given by an adaptive tracking method based on the color information. A visual fuzzy system has been developed for servoing the camera situated on a rotary wing MUAV, that also considers its own dynamics. This system is focused on continuously following an aerial moving target object, maintai...

  11. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg eLayher

    2014-12-01

    Full Text Available The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.

  12. Visual Priming of Inverted and Rotated Objects

    Science.gov (United States)

    Knowlton, Barbara J.; McAuliffe, Sean P.; Coelho, Chase J.; Hummel, John E.

    2009-01-01

    Object images are identified more efficiently after prior exposure. Here, the authors investigated shape representations supporting object priming. The dependent measure in all experiments was the minimum exposure duration required to correctly identify an object image in a rapid serial visual presentation stream. Priming was defined as the change…

  13. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations
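
    The learning scheme summarized in these two records (bottom-up weight adaptation plus mismatch-triggered recruitment of new category nodes) can be caricatured with a toy sketch. The threshold, learning rate, and clustered inputs below are assumptions; this is not the authors' compartmental model of visual cortex.

        # Toy sketch of mismatch-driven category recruitment: each input is compared
        # with the best-matching category node; a large mismatch recruits a new node,
        # otherwise the winning node's bottom-up weights move toward the input.
        import numpy as np

        rng = np.random.default_rng(1)
        inputs = np.vstack([rng.normal(m, 0.1, (30, 8)) for m in (0.2, 0.5, 0.8)])
        rng.shuffle(inputs)

        nodes = []            # each node: a weight vector acting as a category prototype
        threshold = 0.5       # hypothetical mismatch threshold controlling recruitment
        lr = 0.2              # learning rate for weight adaptation

        for x in inputs:
            if nodes:
                dists = [np.linalg.norm(x - w) for w in nodes]
                best = int(np.argmin(dists))
            if not nodes or dists[best] > threshold:
                nodes.append(x.copy())                    # recruit a new (sub)category node
            else:
                nodes[best] += lr * (x - nodes[best])     # refine the existing node

        print(f"number of recruited category nodes: {len(nodes)}")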

  14. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Full Text Available Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alteration in low-level image properties when contrasting distinct object categories. When such contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied with activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach and show that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.
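
    The core of any frequency-tagging analysis is reading out the response at the tagging frequency. The sketch below illustrates that step on synthetic time series (the sampling rate, tag frequency, and signals are assumptions, not the SWIFT stimuli or fMRI data): a response that follows the tag shows a spectral peak at the tagging frequency, while a steady response does not.

        # Generic frequency-tagging read-out on synthetic signals.
        import numpy as np

        fs, dur, f_tag = 100.0, 60.0, 0.5            # sampling rate (Hz), duration (s), tag (Hz)
        t = np.arange(0, dur, 1 / fs)
        rng = np.random.default_rng(2)

        tagged = np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1, t.size)   # "category-selective area"
        steady = rng.normal(0, 1, t.size)                                   # "early visual area"

        def amplitude_at(signal, freq):
            """Fourier amplitude of the signal at the requested frequency."""
            spectrum = np.abs(np.fft.rfft(signal)) / signal.size
            freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
            return spectrum[np.argmin(np.abs(freqs - freq))]

        print(f"amplitude at the tag, tagged signal: {amplitude_at(tagged, f_tag):.3f}")
        print(f"amplitude at the tag, steady signal: {amplitude_at(steady, f_tag):.3f}")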

  15. What are the visual features underlying rapid object recognition?

    Directory of Open Access Journals (Sweden)

    Sébastien M Crouzet

    2011-11-01

    Full Text Available Research progress in machine vision has been very significant in recent years. Robust face detection and identification algorithms are already readily available to consumers, and modern computer vision algorithms for generic object recognition are now coping with the richness and complexity of natural visual scenes. Unlike early vision models of object recognition that emphasized the role of figure-ground segmentation and spatial information between parts, recent successful approaches are based on the computation of loose collections of image features without prior segmentation or any explicit encoding of spatial relations. While these models remain simplistic models of visual processing, they suggest that, in principle, bottom-up activation of a loose collection of image features could support the rapid recognition of natural object categories and provide an initial coarse visual representation before more complex visual routines and attentional mechanisms take place. Focusing on biologically-plausible computational models of (bottom-up) pre-attentive visual recognition, we review some of the key visual features that have been described in the literature. We discuss the consistency of these feature-based representations with classical theories from visual psychology and test their ability to account for human performance on a rapid object categorization task.

  16. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    Science.gov (United States)

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  17. The Timing of Visual Object Categorization

    Science.gov (United States)

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480

  18. Infant visual attention and object recognition.

    Science.gov (United States)

    Reynolds, Greg D

    2015-05-15

    This paper explores the role visual attention plays in the recognition of objects in infancy. Research and theory on the development of infant attention and recognition memory are reviewed in three major sections. The first section reviews some of the major findings and theory emerging from a rich tradition of behavioral research utilizing preferential looking tasks to examine visual attention and recognition memory in infancy. The second section examines research utilizing neural measures of attention and object recognition in infancy as well as research on brain-behavior relations in the early development of attention and recognition memory. The third section addresses potential areas of the brain involved in infant object recognition and visual attention. An integrated synthesis of some of the existing models of the development of visual attention is presented which may account for the observed changes in behavioral and neural measures of visual attention and object recognition that occur across infancy. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Graph comprehension in science and mathematics education: Objects and categories

    DEFF Research Database (Denmark)

    Voetmann Christiansen, Frederik; May, Michael

    types of registers. In the second part of the paper, we consider how diagrams in science are often composites of iconic and indexical elements, and how this fact may lead to confusion for students. In the discussion, the utility of the Peircian semiotic framework for educational studies......, the typological mistake of considering graphs as images is discussed in relation to the literature, and two examples from engineering education are given. The educational implications for science and engineering are discussed, with emphasis on the need for students to work explicitly with conversions between different...... of representational forms in science is discussed, and how the objects of mathematics and science relate to their semiotic representations....

  20. Cross-Cultural Differences in Children's Beliefs about the Objectivity of Social Categories

    Science.gov (United States)

    Diesendruck, Gil; Goldfein-Elbaz, Rebecca; Rhodes, Marjorie; Gelman, Susan; Neumark, Noam

    2013-01-01

    The present study compared 5- and 10-year-old North American and Israeli children's beliefs about the objectivity of different categories (n = 109). Children saw picture triads composed of two exemplars of the same category (e.g., two women) and an exemplar of a contrasting category (e.g., a man). Children were asked whether it would be acceptable…

  1. Manifold-Based Visual Object Counting.

    Science.gov (United States)

    Wang, Yi; Zou, Yuexian; Wang, Wenwu

    2018-07-01

    Visual object counting (VOC) is an emerging area in computer vision which aims to estimate the number of objects of interest in a given image or video. Recently, object-density-based estimation methods have been shown to be promising for object counting as well as rough instance localization. However, the performance of these methods tends to degrade when dealing with new objects and scenes. To address this limitation, we propose a manifold-based method for visual object counting (M-VOC), based on the manifold assumption that similar image patches share similar object densities. Firstly, the local geometry of a given image patch is represented linearly by its neighbors using a predefined patch training set, and the object density of this given image patch is reconstructed by preserving the local geometry using locally linear embedding. To improve the characterization of local geometry, additional constraints such as sparsity and non-negativity are also considered via regularization, nonlinear mapping, and kernel trick. Compared with the state-of-the-art VOC methods, our proposed M-VOC methods achieve competitive performance on seven benchmark datasets. Experiments verify that the proposed M-VOC methods have several favorable properties, such as robustness to variation in the size of the training dataset and image resolution, as often encountered in real-world VOC applications.
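
    The reconstruction step described in this abstract (represent a patch by its nearest training patches, then transfer the same weights to their density maps) can be sketched as follows. The data are synthetic, and the paper's additional sparsity/non-negativity constraints and kernel mapping are omitted.

        # Minimal locally-linear-embedding-style density reconstruction for one patch.
        import numpy as np

        rng = np.random.default_rng(3)
        n_train, patch_dim, dens_dim, k = 500, 64, 16, 8
        train_patches = rng.normal(size=(n_train, patch_dim))   # vectorized image patches
        train_density = rng.random((n_train, dens_dim))         # their object-density maps

        query = rng.normal(size=patch_dim)                      # patch whose density is unknown

        # 1) find the k nearest training patches to the query patch
        idx = np.argsort(np.linalg.norm(train_patches - query, axis=1))[:k]
        neighbors = train_patches[idx]

        # 2) solve for reconstruction weights that sum to one (the standard LLE step)
        G = (neighbors - query) @ (neighbors - query).T         # local Gram matrix
        G += 1e-3 * np.trace(G) * np.eye(k)                     # regularization for stability
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()

        # 3) transfer the same weights to the neighbors' density maps
        estimated_density = w @ train_density[idx]
        print(f"estimated object count in the patch: {estimated_density.sum():.2f}")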

  2. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Directory of Open Access Journals (Sweden)

    Amir Homayoun Javadi

    Full Text Available Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  3. Cross-category adaptation: objects produce gender adaptation in the perception of faces.

    Science.gov (United States)

    Javadi, Amir Homayoun; Wee, Natalie

    2012-01-01

    Adaptation aftereffects have been found for low-level visual features such as colour, motion and shape perception, as well as higher-level features such as gender, race and identity in domains such as faces and biological motion. It is not yet clear if adaptation effects in humans extend beyond this set of higher order features. The aim of this study was to investigate whether objects highly associated with one gender, e.g. high heels for females or electric shavers for males can modulate gender perception of a face. In two separate experiments, we adapted subjects to a series of objects highly associated with one gender and subsequently asked participants to judge the gender of an ambiguous face. Results showed that participants are more likely to perceive an ambiguous face as male after being exposed to objects highly associated to females and vice versa. A gender adaptation aftereffect was obtained despite the adaptor and test stimuli being from different global categories (objects and faces respectively). These findings show that our perception of gender from faces is highly affected by our environment and recent experience. This suggests two possible mechanisms: (a) that perception of the gender associated with an object shares at least some brain areas with those responsible for gender perception of faces and (b) adaptation to gender, which is a high-level concept, can modulate brain areas that are involved in facial gender perception through top-down processes.

  4. An interactive visualization tool for mobile objects

    Science.gov (United States)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices, such as the Global Positioning System (GPS), cellular phones, car navigation systems, and radio-frequency identification (RFID), have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volumes of mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data
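
    The record names three trajectory similarity measures without defining them; the sketch below shows one plausible reading of the first two (locational and directional similarity) for two equally sampled trajectories. The definitions are assumptions for illustration, not the dissertation's own.

        # Illustrative similarity measures for two equally sampled 2-D trajectories.
        import numpy as np

        def locational_similarity(a, b):
            """Mean point-wise distance, mapped to a 0-1 similarity score."""
            d = np.linalg.norm(a - b, axis=1).mean()
            return 1.0 / (1.0 + d)

        def directional_similarity(a, b):
            """Mean cosine similarity between successive movement vectors."""
            da, db = np.diff(a, axis=0), np.diff(b, axis=0)
            denom = np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-9
            return float(((da * db).sum(axis=1) / denom).mean())

        t = np.linspace(0, 1, 50)
        traj_a = np.column_stack([t, np.sin(2 * np.pi * t)])
        traj_b = np.column_stack([t, np.sin(2 * np.pi * t) + 0.1])   # parallel, offset path

        print(f"locational similarity:  {locational_similarity(traj_a, traj_b):.2f}")
        print(f"directional similarity: {directional_similarity(traj_a, traj_b):.2f}")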

  5. Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects

    Science.gov (United States)

    Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude

    2010-01-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars…

  6. A model of primate visual cortex based on category-specific redundancies in natural images

    Science.gov (United States)

    Malmir, Mohsen; Shiry Ghidary, S.

    2010-12-01

    Neurophysiological and computational studies have proposed that properties of natural images have a prominent role in shaping selectivity of neurons in the visual cortex. An important property of natural images that has been studied extensively is the inherent redundancy in these images. In this paper, the concept of category-specific redundancies is introduced to describe the complex pattern of dependencies between responses of linear filters to natural images. It is proposed that structural similarities between images of different object categories result in dependencies between responses of linear filters in different spatial scales. It is also proposed that the brain gradually removes these dependencies in different areas of the ventral visual hierarchy to provide a more efficient representation of its sensory input. The authors proposed a model to remove these redundancies and trained it with a set of natural images using general learning rules developed to remove dependencies between responses of neighbouring neurons. Results of experiments demonstrate the close resemblance of neuronal selectivity between different layers of the model and their corresponding visual areas.

  7. Visual awareness of objects and their colour.

    Science.gov (United States)

    Pilling, Michael; Gellatly, Angus

    2011-10-01

    At any given moment, our awareness of what we 'see' before us seems to be rather limited. If, for instance, a display containing multiple objects is shown (red or green disks), when one object is suddenly covered at random, observers are often little better than chance in reporting about its colour (Wolfe, Reinecke, & Brawn, Visual Cognition, 14, 749-780, 2006). We tested whether, when object attributes (such as colour) are unknown, observers still retain any knowledge of the presence of that object at a display location. Experiments 1-3 involved a task requiring two-alternative (yes/no) responses about the presence or absence of a colour-defined object at a probed location. On this task, if participants knew about the presence of an object at a location, responses indicated that they also knew about its colour. A fourth experiment presented the same displays but required a three-alternative response. This task did result in a data pattern consistent with participants' knowing more about the locations of objects within a display than about their individual colours. However, this location advantage, while highly significant, was rather small in magnitude. Results are compared with those of Huang (Journal of Vision, 10(10, Art. 24), 1-17, 2010), who also reported an advantage for object locations, but under quite different task conditions.

  8. The perceptual effects of learning object categories that predict perceptual goals

    Science.gov (United States)

    Van Gulick, Ana E.; Gauthier, Isabel

    2014-01-01

    In classic category learning studies, subjects typically learn to assign items to one of two categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance with objects in only one category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In two experiments, subjects first learned to categorize complex objects from a single morphspace into two categories based on one morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the two categories. A same-different discrimination test before and after each training measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the non-diagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations. PMID:24820671

  9. Social Vision: Visual cues communicate categories to observers

    OpenAIRE

    Johnson, Kerri L

    2009-01-01

    This information ranges from appreciating category membership to evaluating more enduring traits and dispositions. These aspects of social perception appear to be highly automated, some would even call them obligatory, and they are heavily influenced by two sources of information: the face and the body. From minimal information such as brief exposure to the face or degraded images of dynamic body motion, social judgments are made with remarkable efficiency and, at times, surprising accuracy.

  10. How semantic category modulates preschool children's visual memory.

    Science.gov (United States)

    Giganti, Fiorenza; Viggiano, Maria Pia

    2015-01-01

    The dynamic interplay between perception and memory has been explored in preschool children by presenting filtered stimuli regarding animals and artifacts. The identification of filtered images was markedly influenced by both prior exposure and the semantic nature of the stimuli. The identification of animals required less physical information than artifacts did. Our results corroborate the notion that the human attention system evolves to reliably develop definite category-specific selection criteria by which living entities are monitored in different ways.

  11. Category Specific Spatial Dissociations of Parallel Processes Underlying Visual Naming

    OpenAIRE

    Conner, Christopher R.; Chen, Gang; Pieters, Thomas A.; Tandon, Nitin

    2013-01-01

    The constituent elements and dynamics of the networks responsible for word production are a central issue to understanding human language. Of particular interest is their dependency on lexical category, particularly the possible segregation of nouns and verbs into separate processing streams. We applied a novel mixed-effects, multilevel analysis to electrocorticographic data collected from 19 patients (1942 electrodes) to examine the activity of broadly disseminated cortical networks during t...

  12. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database.

    Directory of Open Access Journals (Sweden)

    Michael C Hout

    Full Text Available Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."

  13. MM-MDS: a multidimensional scaling database with similarity ratings for 240 object categories from the Massive Memory picture database.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D; Brady, Kyle J

    2014-01-01

    Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of "sameness" among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16-17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include: the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher that wishes to control the similarity of experimental stimuli according to an objective quantification of "sameness."
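
    As a sketch of how similarity ratings of this kind are turned into spatial solutions, the snippet below runs metric MDS on a synthetic matrix of pairwise similarities with scikit-learn. The ratings, category size, and dimensionality are placeholders; the database's own solutions are not reproduced.

        # Compute a 2-D MDS solution from a synthetic matrix of pairwise similarities.
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(4)
        n_items = 16                                  # e.g., exemplars of one object category
        sim = rng.random((n_items, n_items))
        sim = (sim + sim.T) / 2                       # make the ratings symmetric
        np.fill_diagonal(sim, 1.0)                    # each item is maximally similar to itself

        dissim = 1.0 - sim                            # MDS operates on dissimilarities
        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissim)

        print("stress:", round(float(mds.stress_), 3))
        print("coordinates of the first item:", np.round(coords[0], 2))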

  14. Attribute conjunctions and the part configuration advantage in object category learning.

    Science.gov (United States)

    Saiki, J; Hummel, J E

    1996-07-01

    Five experiments demonstrated that in object category learning people are particularly sensitive to conjunctions of part shapes and relative locations. Participants learned categories defined by a part's shape and color (part-color conjunctions) or by a part's shape and its location relative to another part (part-location conjunctions). The statistical properties of the categories were identical across these conditions, as were the salience of color and relative location. Participants were better at classifying objects defined by part-location conjunctions than objects defined by part-color conjunctions. Subsequent experiments revealed that this effect was not due to the specific color manipulation or the role of location per se. These results suggest that the shape bias in object categorization is at least partly due to sensitivity to part-location conjunctions and suggest a new processing constraint on category learning.

  15. Effects of Grammatical Categories on Children's Visual Language Processing: Evidence from Event-Related Brain Potentials

    Science.gov (United States)

    Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III

    2006-01-01

    This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…

  16. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    Science.gov (United States)

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  17. Conceptual distinctiveness supports detailed visual long-term memory for real-world objects.

    Science.gov (United States)

    Konkle, Talia; Brady, Timothy F; Alvarez, George A; Oliva, Aude

    2010-08-01

    Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers' capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. 2010 APA, all rights reserved

  18. Storage and binding of object features in visual working memory

    OpenAIRE

    Bays, Paul M; Wu, Emma Y; Husain, Masud

    2010-01-01

    An influential conception of visual working memory is of a small number of discrete memory “slots”, each storing an integrated representation of a single visual object, including all its component features. When a scene contains more objects than there are slots, visual attention controls which objects gain access to memory.

  19. Visual Processing of Object Velocity and Acceleration

    Science.gov (United States)

    1994-02-04

    A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Science (Suppl.), 34, 1230. Watamaniuk, S.N.J. and ... McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Science (Suppl.), 34, 1364.

  20. Object Localization Does Not Imply Awareness of Object Category at the Break of Continuous Flash Suppression

    Directory of Open Access Journals (Sweden)

    Florian Kobylka

    2017-06-01

    Full Text Available In continuous flash suppression (CFS), a dynamic noise masker, presented to one eye, suppresses conscious perception of a test stimulus, presented to the other eye, until the suppressed stimulus comes to awareness after a few seconds. But what do we see breaking the dominance of the masker in the transition period? We addressed this question with a dual task in which observers indicated (i) whether the test object was left or right of the fixation mark (localization) and (ii) whether it was a face or a house (categorization). As done recently by Stein et al. (2011a), we used two experimental varieties to rule out confounds with decisional strategy. In the terminated mode, stimulus and masker were presented for distinct durations, and the observers were asked to give both judgments at the end of the trial. In the self-paced mode, presentation lasted until the observers responded. In the self-paced mode, b-CFS durations for object categorization were about half a second longer than for object localization. In the terminated mode, correct categorization rates were consistently lower than correct detection rates, measured at five duration intervals ranging up to 2 s. In both experiments we observed an upright-face advantage compared to inverted faces and houses, as concurrently reported in b-CFS studies. Our findings reveal that more time is necessary to enable observers to judge the nature of the object, compared to judging that there is "something other" than the noise, which can be localized but not recognized. This suggests gradual transitions in the first break of CFS. Further, the results imply that suppression is such that no cues to object identity are conveyed in potential "leaks" of CFS (Gelbard-Sagiv et al., 2016).

  1. Social Categories are Natural Kinds, not Objective Types (and Why it Matters Politically)

    Directory of Open Access Journals (Sweden)

    Bach Theodore

    2016-08-01

    Full Text Available There is growing support for the view that social categories like men and women refer to "objective types." An objective type is a similarity class for which the axis of similarity is an objective rather than nominal or fictional property. Such types are independently real and causally relevant, yet their unity does not derive from an essential property. Given this tandem of features, it is not surprising that empirically-minded researchers interested in fighting oppression and marginalization have found this ontological category so attractive: objective types have the ontological credentials to secure the reality (and thus political representation) of social categories, and yet they do not impose exclusionary essences that also naturalize and legitimize social inequalities. This essay argues that, from the perspective of these political goals of fighting oppression and marginalization, the category of objective types is in fact a Trojan horse; it looks like a gift, but it ends up creating trouble. I argue that objective type classifications often lack empirical adequacy, and as a result they lack political adequacy. I also provide, in reference to the normative goals described above, several arguments for preferring a social ontology of natural kinds with historical essences.

  2. 2-Cosemisimplicial objects in a 2-category, permutohedra and deformations of pseudofunctors

    OpenAIRE

    Elgueta, Josep

    2004-01-01

    In this paper we take up again the deformation theory for $K$-linear pseudofunctors initiated in a previous work (Adv. Math. 182 (2004) 204-277). We start by introducing a notion of a 2-cosemisimplicial object in an arbitrary 2-category and analyzing the corresponding coherence question, where the permutohedra make their appearance. We then describe a general method to obtain cochain complexes of $K$-modules from (enhanced) 2-cosemisimplicial objects in the 2-category ${\bf Cat}_K$ of small $K$...

  3. Linguistic labels, dynamic visual features, and attention in infant category learning.

    Science.gov (United States)

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

    How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Orienting attention to objects in visual short-term memory

    NARCIS (Netherlands)

    Dell'Acqua, Roberto; Sessa, Paola; Toffanin, Paolo; Luria, Roy; Joliccoeur, Pierre

    We measured electroencephalographic activity during visual search of a target object among objects available to perception or among objects held in visual short-term memory (VSTM). For perceptual search, a single shape was shown first (pre-cue) followed by a search-array and the task was to decide

  5. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    Science.gov (United States)

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
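
    The distance hypothesis mentioned here is easy to simulate. In the sketch below, trials farther from a linear category boundary are given faster simulated reaction times, and the expected negative distance-RT correlation is recovered; the data are synthetic stand-ins, not MEG recordings.

        # Simulate the distance-to-boundary account of categorization reaction times.
        import numpy as np
        from sklearn.svm import LinearSVC
        from scipy.stats import spearmanr

        rng = np.random.default_rng(5)
        n_trials, n_features = 300, 40
        labels = rng.integers(0, 2, n_trials)
        patterns = rng.normal(size=(n_trials, n_features)) + labels[:, None] * 0.7

        clf = LinearSVC(dual=False).fit(patterns, labels)
        distance = np.abs(clf.decision_function(patterns))   # distance from the boundary

        # Simulated RTs: exemplars farther from the boundary respond faster (plus noise).
        rts = 700 - 80 * distance + rng.normal(0, 40, n_trials)

        rho, p = spearmanr(distance, rts)
        print(f"Spearman correlation between distance and RT: rho = {rho:.2f}, p = {p:.1e}")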

  6. Visualization of object-oriented (Java) programs

    NARCIS (Netherlands)

    Huizing, C.; Kuiper, R.; Luijten, C.A.A.M.; Vandalon, V.; Helfert, M.; Martins, M.J.; Cordeiro, J.

    2012-01-01

    We provide an explicit, consistent execution model for OO programs, specifically Java, together with a tool that visualizes the model. This equips the student with a model to think and communicate about OO programs. Especially for an e-learning situation this is significant. Firstly, such a model

  7. Visual Form Perception Can Be a Cognitive Correlate of Lower Level Math Categories for Teenagers

    Science.gov (United States)

    Cui, Jiaxin; Zhang, Yiyun; Cheng, Dazhi; Li, Dawei; Zhou, Xinlin

    2017-01-01

    Numerous studies have assessed the cognitive correlates of performance in mathematics, but little research has been conducted to systematically examine the relations between visual perception as the starting point of visuospatial processing and typical mathematical performance. In the current study, we recruited 223 seventh graders to perform a visual form perception task (figure matching), numerosity comparison, digit comparison, exact computation, approximate computation, and curriculum-based mathematical achievement tests. Results showed that, after controlling for gender, age, and five general cognitive processes (choice reaction time, visual tracing, mental rotation, spatial working memory, and non-verbal matrices reasoning), visual form perception had unique contributions to numerosity comparison, digit comparison, and exact computation, but had no significant relation with approximate computation or curriculum-based mathematical achievement. These results suggest that visual form perception is an important independent cognitive correlate of lower level math categories, including the approximate number system, digit comparison, and exact computation. PMID:28824513
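
    The analysis logic reported here (does figure matching explain variance in a math measure beyond general cognitive covariates?) can be sketched as a comparison of nested regression models. The scores below are synthetic and the variable names are placeholders, not the study's data.

        # Nested-model comparison: unique variance explained by a form-perception score.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 223                                          # sample size reported in the study
        covariates = rng.normal(size=(n, 5))             # e.g., reaction time, rotation, ...
        form_perception = rng.normal(size=n)
        math_score = (0.3 * covariates @ rng.normal(size=5)
                      + 0.5 * form_perception + rng.normal(size=n))

        X_base = sm.add_constant(covariates)
        X_full = sm.add_constant(np.column_stack([covariates, form_perception]))

        r2_base = sm.OLS(math_score, X_base).fit().rsquared
        r2_full = sm.OLS(math_score, X_full).fit().rsquared
        print(f"unique variance explained by form perception: {r2_full - r2_base:.3f}")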

  8. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    Science.gov (United States)

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
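
    Time-resolved decoding of the kind described here is typically implemented by training and testing a classifier within a sliding time window. The sketch below does this on simulated field potentials in which a category signal appears 100 ms after stimulus onset; the channel counts, window size, and classifier are assumptions, not the study's pipeline.

        # Sliding-window decoding of object category from simulated field potentials.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        n_trials, n_chan, n_time = 120, 20, 300          # 300 samples = 300 ms at 1 kHz
        labels = rng.integers(0, 2, n_trials)
        data = rng.normal(size=(n_trials, n_chan, n_time))
        data[:, :5, 100:] += labels[:, None, None] * 0.6 # category signal from 100 ms onward

        window = 20                                      # 20-ms analysis window
        for start in range(0, n_time - window + 1, 50):
            X = data[:, :, start:start + window].reshape(n_trials, -1)
            acc = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
            print(f"{start:3d}-{start + window:3d} ms: decoding accuracy = {acc:.2f}")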

  9. Object attributes combine additively in visual search

    OpenAIRE

    Pramod, R. T.; Arun, S. P.

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in in...

  10. Storage of features, conjunctions and objects in visual working memory.

    Science.gov (United States)

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.
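    Capacity estimates of the kind reported above (about 3-4 items) are commonly derived from change-detection hit and false-alarm rates using Cowan's K. This is a standard formula offered for illustration, not necessarily the exact analysis used in the paper, and the numbers below are hypothetical.

    ```python
    # Cowan's K: estimated number of items held in visual working memory.
    def cowans_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
        """K = set size x (hit rate - false-alarm rate)."""
        return set_size * (hit_rate - false_alarm_rate)

    # e.g., set size 4, 85% hits, 10% false alarms -> K = 3.0 items
    print(cowans_k(4, 0.85, 0.10))
    ```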

  11. Object attributes combine additively in visual search.

    Science.gov (United States)

    Pramod, R T; Arun, S P

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
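    The additive rule described above can be illustrated as a weighted sum of per-attribute differences. The attribute names, values, and weights in this sketch are purely illustrative; the paper estimates the actual terms from behavioral dissimilarity data.

    ```python
    # Additive-dissimilarity sketch: total perceived dissimilarity as a weighted
    # sum of differences along several attribute channels (values are made up).
    def additive_dissimilarity(obj_a: dict, obj_b: dict, weights: dict) -> float:
        """Sum weighted absolute differences across object attributes."""
        return sum(w * abs(obj_a[k] - obj_b[k]) for k, w in weights.items())

    weights = {"local_contour": 1.0, "texture": 0.6, "symmetry": 0.4, "orientation": 0.3}
    obj_a = {"local_contour": 0.8, "texture": 0.2, "symmetry": 1.0, "orientation": 0.1}
    obj_b = {"local_contour": 0.3, "texture": 0.6, "symmetry": 0.0, "orientation": 0.5}
    print(additive_dissimilarity(obj_a, obj_b, weights))
    ```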

  12. The Representation of Object Viewpoint in Human Visual Cortex

    OpenAIRE

    Andresen, David R.; Vinberg, Joakim; Grill-Spector, Kalanit

    2008-01-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°–180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradi...

  13. Nouns, verbs, objects, actions, and abstractions: local fMRI activity indexes semantics, not lexical categories.

    Science.gov (United States)

    Moseley, Rachel L; Pulvermüller, Friedemann

    2014-05-01

    Noun/verb dissociations in the literature defy interpretation due to the confound between lexical category and semantic meaning; nouns and verbs typically describe concrete objects and actions. Abstract words, pertaining to neither, are a critical test case: dissociations along lexical-grammatical lines would support models purporting lexical category as the principle governing brain organisation, whilst semantic models predict dissociation between concrete words but not abstract items. During fMRI scanning, participants read orthogonalised word categories of nouns and verbs, with or without concrete, sensorimotor meaning. Analysis of inferior frontal/insula, precentral and central areas revealed an interaction between lexical class and semantic factors with clear category differences between concrete nouns and verbs but not abstract ones. Though the brain stores the combinatorial and lexical-grammatical properties of words, our data show that topographical differences in brain activation, especially in the motor system and inferior frontal cortex, are driven by semantics and not by lexical class. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Reader error, object recognition, and visual search

    Science.gov (United States)

    Kundel, Harold L.

    2004-05-01

    Small abnormalities such as hairline fractures, lung nodules and breast tumors are missed by competent radiologists with sufficient frequency to make them a matter of concern to the medical community; not only because they lead to litigation but also because they delay patient care. It is very easy to attribute misses to incompetence or inattention. To do so may be placing an unjustified stigma on the radiologists involved and may allow other radiologists to continue a false optimism that it can never happen to them. This review presents some of the fundamentals of visual system function that are relevant to understanding the search for and the recognition of small targets embedded in complicated but meaningful backgrounds like chests and mammograms. It presents a model for visual search that postulates a pre-attentive global analysis of the retinal image followed by foveal checking fixations and eventually discovery scanning. The model will be used to differentiate errors of search, recognition and decision making. The implications for computer aided diagnosis and for functional workstation design are discussed.

  15. The visual extent of an object: suppose we know the object locations

    NARCIS (Netherlands)

    Uijlings, J.R.R.; Smeulders, A.W.M.; Scha, R.J.H.

    2012-01-01

    The visual extent of an object reaches beyond the object itself. This is a long standing fact in psychology and is reflected in image retrieval techniques which aggregate statistics from the whole image in order to identify the object within. However, it is unclear to what degree and how the visual

  16. The Functional Architecture of Visual Object Recognition

    Science.gov (United States)

    1991-07-01

    different forms of agnosia can provide clues to the representations underlying normal object recognition (Farah, 1990). For example, the pair-wise...patterns of deficit and sparing occur. In a review of 99 published cases of agnosia, the observed patterns of co-occurrence implicated two underlying

  17. Assessing the Cartographic Visualization of Moving Objects ...

    African Journals Online (AJOL)

    Four representations are considered in this research: the single static map, multiple static maps, animation, and the space-time cube. The study is conducted by considering four movement characteristics (or aspects of moving objects): speed change, returns, stops, and path of movement. The ability of users to perceive and ...

  18. Exploiting core knowledge for visual object recognition.

    Science.gov (United States)

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints-often characterized as 'Core Knowledge'-are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
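    Recognition-sensitivity comparisons like those above are typically expressed in signal-detection terms. The sketch below computes d' from hit and false-alarm rates for two hypothetical kinematic conditions; it is a generic illustration, not the authors' model-based analysis.

    ```python
    # Signal-detection sketch: sensitivity index d' = z(hits) - z(false alarms).
    from scipy.stats import norm

    def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
        return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

    # Hypothetical rates: same-token kinematics vs. different-token kinematics
    print(d_prime(0.80, 0.20))   # ~1.68
    print(d_prime(0.70, 0.20))   # ~1.37
    ```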

  19. Attitudes and evaluative practices: category vs. item and subjective vs. objective constructions in everyday food assessments.

    Science.gov (United States)

    Wiggins, Sally; Potter, Jonathan

    2003-12-01

    In social psychology, evaluative expressions have traditionally been understood in terms of their relationship to, and as the expression of, underlying 'attitudes'. In contrast, discursive approaches have started to study evaluative expressions as part of varied social practices, considering what such expressions are doing rather than their relationship to attitudinal objects or other putative mental entities. In this study the latter approach will be used to examine the construction of food and drink evaluations in conversation. The data are taken from a corpus of family mealtimes recorded over a period of months. The aim of this study is to highlight two distinctions that are typically obscured in traditional attitude work ('subjective' vs. 'objective' expressions, category vs. item evaluations). A set of extracts is examined to document the presence of these distinctions in talk that evaluates food and the way they are used and rhetorically developed to perform particular activities (accepting/refusing food, complimenting the food provider, persuading someone to eat). The analysis suggests that researchers (a) should be aware of the potential significance of these distinctions; (b) should be cautious when treating evaluative terms as broadly equivalent and (c) should be cautious when blurring categories and instances. This analysis raises the broader question of how far evaluative practices may be specific to particular domains, and what this specificity might consist in. It is concluded that research in this area could benefit from starting to focus on the role of evaluations in practices and charting their association with specific topics and objects.

  20. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    Science.gov (United States)

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…

  1. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with both a realistic and a potentially uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one with the largest average disparity. In the second place, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. However, an object's width and complexity do not influence perceived visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and apply four different models to predict visual comfort more precisely in the third place. Experimental results show that the proposed VCA metric outperforms other existing metrics and achieves a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
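    The final evaluation step described above reduces to correlating predicted comfort scores with subjective ratings. A minimal sketch using standard PLCC and SROCC computations, with made-up scores, is:

    ```python
    # Correlate (hypothetical) predicted comfort scores with subjective ratings.
    from scipy.stats import pearsonr, spearmanr

    subjective = [4.2, 3.8, 2.1, 1.5, 3.0, 4.5, 2.8, 1.9]   # hypothetical mean opinion scores
    predicted  = [4.0, 3.5, 2.4, 1.7, 3.2, 4.4, 2.5, 2.2]   # hypothetical metric outputs

    plcc, _ = pearsonr(subjective, predicted)
    srocc, _ = spearmanr(subjective, predicted)
    print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
    ```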

  2. Object formation in visual working memory: Evidence from object-based attention.

    Science.gov (United States)

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    SUMMARY We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105

  4. Semantic priming effects of synonyms, antonyms, frame, implication and verb-object categories

    Directory of Open Access Journals (Sweden)

    Elsa Skënderi-Rakipllari

    2017-12-01

    Semantic priming has been a major subject of interest for psycholinguists, whose aim is to discover how lexical memory is structured and organized. The facilitation of word retrieval through semantic priming has long been studied. The present research aims to reveal which semantic category has the best priming effect. Through a lexical decision task experiment we compared the reaction times of masked primed pairs and unprimed pairs. In addition, we analyzed the reaction times and priming effect of connected semantic relations: antonymy, frame, synonymy, implication and verb-object. The data collected and interpreted unveiled that the mean reaction times of primed pairs were shorter than those of unprimed pairs. As to semantic priming, the most significantly primed pairs were those of implications and verb-objects, and not those of synonymy or antonymy as might be expected.

  5. Visual Object Pattern Separation Varies in Older Adults

    Science.gov (United States)

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  6. Visual attention is required for multiple object tracking.

    Science.gov (United States)

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Development of Object Permanence in Visually Impaired Infants.

    Science.gov (United States)

    Rogers, S. J.; Puchalski, C. B.

    1988-01-01

    Development of object permanence skills was examined longitudinally in 20 visually impaired infants (ages 4-25 months). Order of skill acquisition and span of time required to master skills paralleled that of sighted infants, but the visually impaired subjects were 8-12 months older than sighted counterparts when similar skills were acquired.…

  8. Multimedia Visualizer: An Animated, Object-Based OPAC.

    Science.gov (United States)

    Lee, Newton S.

    1991-01-01

    Describes the Multimedia Visualizer, an online public access catalog (OPAC) that uses animated visualizations to make it more user friendly. Pictures of the system are shown that illustrate the interactive objects that patrons can access, including card catalog drawers, librarian desks, and bookshelves; and access to multimedia items is described.…

  9. Visual Memory for Objects Following Foveal Vision Loss

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B.; Pollmann, Stefan

    2015-01-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual…

  10. Online Feature Transformation Learning for Cross-Domain Object Category Recognition.

    Science.gov (United States)

    Zhang, Xuesong; Zhuang, Yan; Wang, Wei; Pedrycz, Witold

    2017-06-09

    In this paper, we introduce a new research problem termed online feature transformation learning in the context of multiclass object category recognition. The learning of a feature transformation is viewed as learning a global similarity metric function in an online manner. We first consider the problem of online learning a feature transformation matrix expressed in the original feature space and propose an online passive aggressive feature transformation algorithm. Then these original features are mapped to kernel space and an online single kernel feature transformation (OSKFT) algorithm is developed to learn a nonlinear feature transformation. Based on the OSKFT and the existing Hedge algorithm, a novel online multiple kernel feature transformation algorithm is also proposed, which can further improve the performance of online feature transformation learning in large-scale application. The classifier is trained with k nearest neighbor algorithm together with the learned similarity metric function. Finally, we experimentally examined the effect of setting different parameter values in the proposed algorithms and evaluate the model performance on several multiclass object recognition data sets. The experimental results demonstrate the validity and good performance of our methods on cross-domain and multiclass object recognition application.
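    As a rough illustration of the idea of learning a similarity metric online and then classifying with k-nearest neighbors, the sketch below updates a diagonal feature weighting from labeled pairs. It is deliberately much simpler than the passive-aggressive and kernel-based (OSKFT) algorithms in the paper, and all data in the usage example are simulated.

    ```python
    # Toy online metric learning + k-NN sketch (not the paper's algorithm).
    import numpy as np

    def online_diagonal_metric(pairs, dim, lr=0.05, margin=1.0):
        """pairs: iterable of (x, y, same_class). Returns per-feature weights."""
        w = np.ones(dim)
        for x, y, same in pairs:
            diff2 = (x - y) ** 2
            d = float(w @ diff2)
            if same and d > margin:        # similar pair too far apart -> shrink weights
                w -= lr * diff2
            elif not same and d < margin:  # dissimilar pair too close -> grow weights
                w += lr * diff2
            w = np.clip(w, 1e-3, None)     # keep the metric non-negative
        return w

    def knn_predict(x, X_train, y_train, w, k=3):
        """k-NN under the learned weighted squared-Euclidean distance."""
        dists = ((X_train - x) ** 2) @ w
        nearest = np.argsort(dists)[:k]
        return np.bincount(y_train[nearest]).argmax()

    # tiny usage with random data and integer class labels
    rng = np.random.default_rng(4)
    X = rng.normal(size=(60, 5))
    y = rng.integers(0, 3, 60)
    pairs = [(X[i], X[j], y[i] == y[j]) for i in range(30) for j in range(30, 60)]
    w = online_diagonal_metric(pairs, dim=5)
    print(knn_predict(X[0], X[1:], y[1:], w))
    ```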

  11. Foraging through multiple target categories reveals the flexibility of visual working memory.

    Science.gov (United States)

    Kristjánsson, Tómas; Kristjánsson, Árni

    2018-02-01

    A key assumption in the literature on visual attention is that templates, actively maintained in visual working memory (VWM), guide visual attention. An important question therefore involves the nature and capacity of VWM. According to load theories, more than one search template can be active at the same time and capacity is determined by the total load rather than a precise number of templates. By an alternative account, only one search template can be active within visual working memory at any given time, while other templates are in an accessory state and do not affect visual selection. We addressed this question by varying the number of targets and distractors in a visual foraging task for 40 targets among 40 distractors in two ways: 1) Fixed-distractor-number, involving two distractor types while target categories varied from one to four. 2) Fixed-color-number (7), so that if the target types were two, distractor types were five, while if target number increased to three, distractor types were four (etc.). The two accounts make differing predictions. Under the single-template account, we should expect large switch costs as target types increase to two, but switch costs should not increase much as target types increase beyond two. Load accounts predict an approximately linear increase in switch costs with increased target type number. The results were that switch costs increased roughly linearly in both conditions, in line with load accounts. The results are discussed in light of recent proposals that working memory reflects lingering neural activity at various sites that operate on the stimuli in each case, and findings showing neurally silent working memory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. The representation of object viewpoint in human visual cortex.

    Science.gov (United States)

    Andresen, David R; Vinberg, Joakim; Grill-Spector, Kalanit

    2009-04-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°–180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradigms, object-selective regions recovered from adaptation when a rotated view of an object was shown after adaptation to a specific view of that object, suggesting that representations are sensitive to object rotation. However, we found evidence for differential representations across categories and ventral stream regions. Rotation cross-adaptation was larger for animals than vehicles, suggesting higher sensitivity to vehicle than animal rotation, and was largest in the left fusiform/occipito-temporal sulcus (pFUS/OTS), suggesting that this region has low sensitivity to rotation. Moreover, right pFUS/OTS and FFA responded more strongly to front than back views of animals (without adaptation) and rotation cross-adaptation depended both on the level of rotation and the adapting view. This result suggests a prevalence of neurons that prefer frontal views of animals in fusiform regions. Using a computational model of view-tuned neurons, we demonstrate that differential neural view tuning widths and relative distributions of neural-tuned populations in fMRI voxels can explain the fMRI results. Overall, our findings underscore the utility of parametric approaches for studying the neural basis of object invariance and suggest that there is no complete invariance to object view in the human ventral stream.
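    The computational model referred to above can be caricatured as a population of units with Gaussian tuning over viewpoint, where recovery from fMRI adaptation grows as the overlap between responses to the adapting and rotated test views shrinks. The sketch below is a hypothetical toy version with illustrative parameters, not the authors' model.

    ```python
    # Toy view-tuned population: adaptation recovery as a function of rotation.
    import numpy as np

    def population_response(view_deg, preferred_views, tuning_width_deg):
        """Gaussian tuning over the circular viewpoint dimension (0-360 deg)."""
        delta = np.abs((view_deg - preferred_views + 180) % 360 - 180)
        return np.exp(-0.5 * (delta / tuning_width_deg) ** 2)

    def recovery_from_adaptation(rotation_deg, tuning_width_deg, n_units=180):
        preferred = np.linspace(0, 360, n_units, endpoint=False)
        r_adapt = population_response(0.0, preferred, tuning_width_deg)
        r_test = population_response(rotation_deg, preferred, tuning_width_deg)
        overlap = (r_adapt * r_test).sum() / np.sqrt((r_adapt ** 2).sum() * (r_test ** 2).sum())
        return 1.0 - overlap   # 0 = full adaptation (same view), 1 = full recovery

    for width in (30, 60):     # narrow vs. broad view tuning (illustrative widths)
        print(width, [round(recovery_from_adaptation(rot, width), 2) for rot in (0, 60, 120, 180)])
    ```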

  13. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    Science.gov (United States)

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  14. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.
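    The comparison described above is a representational similarity analysis: a neural representational dissimilarity matrix (RDM) is correlated with separate visual and conceptual model RDMs. The sketch below shows that logic with simulated patterns and arbitrary distance choices; it is not the authors' analysis.

    ```python
    # RSA sketch: correlate a simulated neural RDM with two model RDMs.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n_items = 40
    neural_patterns = rng.normal(size=(n_items, 100))    # items x voxels (simulated)
    visual_model = rng.normal(size=(n_items, 10))        # behavior-based visual features
    conceptual_model = rng.normal(size=(n_items, 10))    # behavior-based conceptual features

    neural_rdm = pdist(neural_patterns, metric="correlation")
    visual_rdm = pdist(visual_model, metric="euclidean")
    conceptual_rdm = pdist(conceptual_model, metric="euclidean")

    rho_v, _ = spearmanr(neural_rdm, visual_rdm)
    rho_c, _ = spearmanr(neural_rdm, conceptual_rdm)
    print(f"neural vs visual model: {rho_v:.3f}, neural vs conceptual model: {rho_c:.3f}")
    ```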

  15. Category Selectivity of Human Visual Cortex in Perception of Rubin Face–Vase Illusion

    Directory of Open Access Journals (Sweden)

    Xiaogang Wang

    2017-09-01

    When viewing the Rubin face–vase illusion, our conscious perception spontaneously alternates between the face and the vase; this illusion has been widely used to explore bistable perception. Previous functional magnetic resonance imaging (fMRI) studies have studied the neural mechanisms underlying bistable perception through univariate and multivariate pattern analyses; however, no studies have investigated the issue of category selectivity. Here, we used fMRI to investigate the neural mechanisms underlying the Rubin face–vase illusion by introducing univariate amplitude and multivariate pattern analyses. The results from the amplitude analysis suggested that the activity in the fusiform face area was likely related to the subjective face perception. Furthermore, the pattern analysis results showed that the early visual cortex (EVC) and the face-selective cortex could discriminate the activity patterns of the face and vase perceptions. However, further analysis of the activity patterns showed that only the face-selective cortex contains the face information. These findings indicated that although the EVC and face-selective cortex activities could discriminate the visual information, only the activity and activity pattern in the face-selective areas contained the category information of face perception in the Rubin face–vase illusion.

  16. Task context impacts visual object processing differentially across the cortex

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J.; Baker, Chris I.

    2014-01-01

    Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations. PMID:24567402

  17. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.

  18. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    2018-01-01

    In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value, as convex objects. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (DC). Suitable DC decompositions allow us to use the Difference of Convex Algorithm (DCA) in a very efficient way. Our algorithmic approach is used to visualize two real-world datasets.
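    The DCA scheme mentioned above iterates by linearizing the concave part of a difference-of-convex objective and solving the resulting convex subproblem. The toy objective below is chosen only because its subproblem has a closed form; it is unrelated to the paper's actual visualization model.

    ```python
    # Generic DCA sketch: minimize f(x) = g(x) - h(x) with g, h convex, by
    # linearizing -h at the current iterate and solving the convex subproblem.
    import numpy as np

    def dca(a, lam=1.0, x0=None, iters=50):
        """Toy objective: minimize ||x - a||^2 - lam * ||x||_1 via DCA."""
        x = np.zeros_like(a, dtype=float) if x0 is None else x0.astype(float)
        for _ in range(iters):
            y = lam * np.sign(x)          # subgradient of h(x) = lam * ||x||_1
            x_new = a + y / 2.0           # argmin ||x - a||^2 - y.x (closed form)
            if np.allclose(x_new, x):
                break
            x = x_new
        return x

    print(dca(np.array([1.5, -0.2, 0.0])))
    ```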

  19. Use of subjective and objective criteria to categorise visual disability.

    Science.gov (United States)

    Kajla, Garima; Rohatgi, Jolly; Dhaliwal, Upreet

    2014-04-01

    Visual disability is categorised using objective criteria; subjective measures are not considered. The aim of this study was to use subjective criteria along with objective ones to categorise visual disability. This was an observational study conducted in the ophthalmology out-patient department of a teaching hospital. Consecutive persons aged >25 years with vision disability were included and grouped from group zero (normal range of vision) to group X (no perception of light) bilaterally. Snellen's vision, binocular contrast sensitivity (Pelli-Robson chart), automated binocular visual field (Humphrey; Esterman test), and vision-related quality of life (Indian Visual Function Questionnaire-33; IND-VFQ33) were recorded. Analyses used SPSS version 17; the Kruskal-Wallis test was used to compare contrast sensitivity and visual fields across groups, and the Mann-Whitney U test for pair-wise comparison (Bonferroni adjustment; P …). Visual fields were comparable for differing disability grades except when disability was severe (P …). Global IND-VFQ33 scores differed across disability grades but were comparable for groups III (78.51 ± 6.86) and IV (82.64 ± 5.80), and for groups IV and V (77.23 ± 3.22); these were merged to generate group 345; similarly, global scores were comparable for adjacent groups V and VI (72.53 ± 6.77), VI and VII (74.46 ± 4.32), and VII and VIII (69.12 ± 5.97); these were merged to generate group 5678; thereafter, contrast sensitivity and global and individual IND-VFQ33 scores could differentiate between the grades of disability in the five new groups. Subjective criteria made it possible to objectively reclassify visual disability. Visual disability grades could be redefined to accommodate all, from zero to 100%.

  20. Object versus spatial visual mental imagery in patients with schizophrenia

    Science.gov (United States)

    Aleman, André; de Haan, Edward H.F.; Kahn, René S.

    2005-01-01

    Objective: Recent research has revealed a larger impairment of object perceptual discrimination than of spatial perceptual discrimination in patients with schizophrenia. It has been suggested that mental imagery may share processing systems with perception. We investigated whether patients with schizophrenia would show greater impairment regarding object imagery than spatial imagery. Methods: Forty-four patients with schizophrenia and 20 healthy control subjects were tested on a task of object visual mental imagery and on a task of spatial visual mental imagery. Both tasks included a condition in which no imagery was needed for adequate performance, but which was in other respects identical to the imagery condition. This allowed us to adjust for nonspecific differences in individual performance. Results: The results revealed a significant difference between patients and controls on the object imagery task (F(1,63) = 11.8, p = 0.001) but not on the spatial imagery task (F(1,63) = 0.14, p = 0.71). To test for a differential effect, we conducted a 2 (patients v. controls) × 2 (object task v. spatial task) analysis of variance. The interaction term was statistically significant (F(1,62) = 5.2, p = 0.026). Conclusions: Our findings suggest a differential dysfunction of systems mediating object and spatial visual mental imagery in schizophrenia. PMID:15644999
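    The 2 × 2 interaction test reported above can be illustrated with an ordinary two-way ANOVA on simulated scores; the real design involves repeated measures, so this is a simplified, hypothetical version of the logic only.

    ```python
    # Two-way ANOVA sketch: group (patient/control) x task (object/spatial)
    # interaction on simulated performance scores.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(3)
    rows = []
    for group, object_deficit in (("patient", 0.8), ("control", 0.0)):
        for task in ("object", "spatial"):
            mean = 1.0 - (object_deficit if task == "object" else 0.0)
            for score in rng.normal(mean, 0.3, 20):
                rows.append({"group": group, "task": task, "score": score})

    df = pd.DataFrame(rows)
    model = ols("score ~ C(group) * C(task)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # interaction row: C(group):C(task)
    ```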

  1. Visual memory for objects following foveal vision loss.

    Science.gov (United States)

    Geringswald, Franziska; Herbik, Anne; Hofmüller, Wolfram; Hoffmann, Michael B; Pollmann, Stefan

    2015-09-01

    Allocation of visual attention is crucial for encoding items into visual long-term memory. In free vision, attention is closely linked to the center of gaze, raising the question whether foveal vision loss entails suboptimal deployment of attention and subsequent impairment of object encoding. To investigate this question, we examined visual long-term memory for objects in patients suffering from foveal vision loss due to age-related macular degeneration. We measured patients' change detection sensitivity after a period of free scene exploration monocularly with their worse eye when possible, and under binocular vision, comparing sensitivity and eye movements to matched normal-sighted controls. A highly salient cue was used to capture attention to a nontarget location before a target change occurred in half of the trials, ensuring that change detection relied on memory. Patients' monocular and binocular sensitivity to object change was comparable to controls, even after more than 4 intervening fixations, and not significantly correlated with visual impairment. We conclude that extrafoveal vision suffices for efficient encoding into visual long-term memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  2. Brain activity related to integrative processes in visual object recognition

    DEFF Research Database (Denmark)

    Gerlach, Christian; Aaside, C T; Humphreys, G W

    2002-01-01

    We report evidence from a PET activation study that the inferior occipital gyri (likely to include area V2) and the posterior parts of the fusiform and inferior temporal gyri are involved in the integration of visual elements into perceptual wholes (single objects). Of these areas, the fusiform … that perceptual and memorial processes can be dissociated on both functional and anatomical grounds. No evidence was obtained for the involvement of the parietal lobes in the integration of single objects.

  3. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms.

    Science.gov (United States)

    Bizley, Jennifer K; Maddox, Ross K; Lee, Adrian K C

    2016-02-01

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Objective Evaluation of Visual Fatigue Using Binocular Fusion Maintenance.

    Science.gov (United States)

    Hirota, Masakazu; Morimoto, Takeshi; Kanda, Hiroyuki; Endo, Takao; Miyoshi, Tomomitsu; Miyagawa, Suguru; Hirohara, Yoko; Yamaguchi, Tatsuo; Saika, Makoto; Fujikado, Takashi

    2018-03-01

    In this study, we investigated whether an individual's visual fatigue can be evaluated objectively and quantitatively from their ability to maintain binocular fusion. Binocular fusion maintenance (BFM) was measured using a custom-made binocular open-view Shack-Hartmann wavefront aberrometer equipped with liquid crystal shutters, wherein eye movements and wavefront aberrations were measured simultaneously. Transmittance in the liquid crystal shutter in front of the subject's nondominant eye was reduced linearly, and BFM was determined from the transmittance at the point when binocular fusion was broken and vergence eye movement was induced. In total, 40 healthy subjects underwent the BFM test and completed a questionnaire regarding subjective symptoms before and after a visual task lasting 30 minutes. BFM was significantly reduced after the visual task (P …) and was related to the subjective eye symptom score (adjusted R² = 0.752, P …). These results suggest that BFM can be used to evaluate visual fatigue induced by viewing devices, such as head-mount displays, objectively.

  5. Visualizing Data as Objects by DC (Difference of Convex) Optimization

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing in a bounded region a set of individuals, which has attached a dissimilarity measure and a statistical value. This problem, which extends the standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective...

  6. Computing with Connections in Visual Recognition of Origami Objects.

    Science.gov (United States)

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  7. Functional dissociation between action and perception of object shape in developmental visual object agnosia.

    Science.gov (United States)

    Freud, Erez; Ganel, Tzvi; Avidan, Galia; Gilaie-Dotan, Sharon

    2016-03-01

    According to the two visual systems model, the cortical visual system is segregated into a ventral pathway mediating object recognition, and a dorsal pathway mediating visuomotor control. In the present study we examined whether the visual control of action could develop normally even when visual perceptual abilities are compromised from early childhood onward. Using his fingers, LG, an individual with a rare developmental visual object agnosia, manually estimated (perceptual condition) the width of blocks that varied in width and length (but not in overall size), or simply picked them up across their width (grasping condition). LG's perceptual sensitivity to target width was profoundly impaired in the manual estimation task compared to matched controls. In contrast, the sensitivity to object shape during grasping, as measured by maximum grip aperture (MGA), the time to reach the MGA, the reaction time and the total movement time were all normal in LG. Further analysis, however, revealed that LG's sensitivity to object shape during grasping emerged at a later time stage during the movement compared to controls. Taken together, these results demonstrate a dissociation between action and perception of object shape, and also point to a distinction between different stages of the grasping movement, namely planning versus online control. Moreover, the present study implies that visuomotor abilities can develop normally even when perceptual abilities developed in a profoundly impaired fashion. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. The Visual Object Tracking VOT2016 Challenge Results

    KAUST Repository

    Kristan, Matej; Leonardis, Aleš; Matas, Jiří; Felsberg, Michael; Pflugfelder, Roman; Čehovin, Luka; Vojíř, Tomáš; Häger, Gustav; Lukežič, Alan; Fernández, Gustavo; Gupta, Abhinav; Petrosino, Alfredo; Memarmoghadam, Alireza; Garcia-Martin, Alvaro; Solís Montero, Andrés; Vedaldi, Andrea; Robinson, Andreas; Ma, Andy J.; Varfolomieiev, Anton; Alatan, Aydin; Erdem, Aykut; Ghanem, Bernard; Liu, Bin; Han, Bohyung; Martinez, Brais; Chang, Chang-Ming; Xu, Changsheng; Sun, Chong; Kim, Daijin; Chen, Dapeng; Du, Dawei; Mishra, Deepak; Yeung, Dit-Yan; Gundogdu, Erhan; Erdem, Erkut; Khan, Fahad; Porikli, Fatih; Zhao, Fei; Bunyak, Filiz; Battistone, Francesco; Zhu, Gao; Roffo, Giorgio; Subrahmanyam, Gorthi R. K. Sai; Bastos, Guilherme; Seetharaman, Guna; Medeiros, Henry; Li, Hongdong; Qi, Honggang; Bischof, Horst; Possegger, Horst; Lu, Huchuan; Lee, Hyemin; Nam, Hyeonseob; Chang, Hyung Jin; Drummond, Isabela; Valmadre, Jack; Jeong, Jae-chan; Cho, Jae-il; Lee, Jae-Yeong; Zhu, Jianke; Feng, Jiayi; Gao, Jin; Choi, Jin Young; Xiao, Jingjing; Kim, Ji-Wan; Jeong, Jiyeoup; Henriques, João F.; Lang, Jochen; Choi, Jongwon; Martinez, Jose M.; Xing, Junliang; Gao, Junyu; Palaniappan, Kannappan; Lebeda, Karel; Gao, Ke; Mikolajczyk, Krystian; Qin, Lei; Wang, Lijun; Wen, Longyin; Bertinetto, Luca; Rapuru, Madan Kumar; Poostchi, Mahdieh; Maresca, Mario; Danelljan, Martin; Mueller, Matthias; Zhang, Mengdan; Arens, Michael; Valstar, Michel; Tang, Ming; Baek, Mooyeol; Khan, Muhammad Haris; Wang, Naiyan; Fan, Nana; Al-Shakarji, Noor; Miksik, Ondrej; Akin, Osman; Moallem, Payman; Senna, Pedro; Torr, Philip H. S.; Yuen, Pong C.; Huang, Qingming; Martin-Nieto, Rafael; Pelapur, Rengarajan; Bowden, Richard; Laganière, Robert; Stolkin, Rustam; Walsh, Ryan; Krah, Sebastian B.; Li, Shengkun; Zhang, Shengping; Yao, Shizeng; Hadfield, Simon; Melzi, Simone; Lyu, Siwei; Li, Siyi; Becker, Stefan; Golodetz, Stuart; Kakanuru, Sumithra; Choi, Sunglok; Hu, Tao; Mauthner, Thomas; Zhang, Tianzhu; Pridmore, Tony; Santopietro, Vincenzo; Hu, Weiming; Li, Wenbo; Hübner, Wolfgang; Lan, Xiangyuan; Wang, Xiaomeng; Li, Xin; Li, Yang; Demiris, Yiannis; Wang, Yifan; Qi, Yuankai; Yuan, Zejian; Cai, Zexiong; Xu, Zhan; He, Zhenyu; Chi, Zhizhen

    2016-01-01

    The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).
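    Accuracy in the VOT benchmarks is based on region overlap between a tracker's output and the ground truth (the toolkit itself uses rotated boxes and a reset-based protocol). A minimal axis-aligned intersection-over-union sketch, for illustration only, is:

    ```python
    # Axis-aligned IoU between two (x, y, width, height) boxes.
    def iou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
        bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
        return inter / union if union > 0 else 0.0

    print(iou((10, 10, 50, 50), (30, 30, 50, 50)))   # ~0.22
    ```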

  9. The Visual Object Tracking VOT2015 Challenge Results

    KAUST Repository

    Kristan, Matej; Matas, Jiří; Leonardis, Aleš; Felsberg, Michael; Čehovin, Luka; Fernández, Gustavo; Vojíř, Tomáš; Häger, Gustav; Nebehay, Georg; Pflugfelder, Roman; Gupta, Abhinav; Bibi, Adel Aamer; Lukežič, Alan; Garcia-Martin, Alvaro; Saffari, Amir; Petrosino, Alfredo; Montero, Andrés Solís; Varfolomieiev, Anton; Baskurt, Atilla; Zhao, Baojun; Ghanem, Bernard; Martinez, Brais; Lee, ByeongJu; Han, Bohyung; Wang, Chaohui; Garcia, Christophe; Zhang, Chunyuan; Schmid, Cordelia; Tao, Dacheng; Kim, Daijin; Huang, Dafei; Prokhorov, Danil; Du, Dawei; Yeung, Dit-Yan; Ribeiro, Eraldo; Khan, Fahad Shahbaz; Porikli, Fatih; Bunyak, Filiz; Zhu, Gao; Seetharaman, Guna; Kieritz, Hilke; Yau, Hing Tuen; Li, Hongdong; Qi, Honggang; Bischof, Horst; Possegger, Horst; Lee, Hyemin; Nam, Hyeonseob; Bogun, Ivan; Jeong, Jae-chan; Cho, Jae-il; Lee, Jae-Yeong; Zhu, Jianke; Shi, Jianping; Li, Jiatong; Jia, Jiaya; Feng, Jiayi; Gao, Jin; Choi, Jin Young; Kim, Ji-Wan; Lang, Jochen; Martinez, Jose M.; Choi, Jongwon; Xing, Junliang; Xue, Kai; Palaniappan, Kannappan; Lebeda, Karel; Alahari, Karteek; Gao, Ke; Yun, Kimin; Wong, Kin Hong; Luo, Lei; Ma, Liang; Ke, Lipeng; Wen, Longyin; Bertinetto, Luca; Pootschi, Mahdieh; Maresca, Mario; Danelljan, Martin; Wen, Mei; Zhang, Mengdan; Arens, Michael; Valstar, Michel; Tang, Ming; Chang, Ming-Ching; Khan, Muhammad Haris; Fan, Nana; Wang, Naiyan; Miksik, Ondrej; Torr, Philip H S; Wang, Qiang; Martin-Nieto, Rafael; Pelapur, Rengarajan; Bowden, Richard; Laganière, Robert; Moujtahid, Salma; Hare, Sam; Hadfield, Simon; Lyu, Siwei; Li, Siyi; Zhu, Song-Chun; Becker, Stefan; Duffner, Stefan; Hicks, Stephen L; Golodetz, Stuart; Choi, Sunglok; Wu, Tianfu; Mauthner, Thomas; Pridmore, Tony; Hu, Weiming; Hübner, Wolfgang; Wang, Xiaomeng; Li, Xin; Shi, Xinchu; Zhao, Xu; Mei, Xue; Shizeng, Yao; Hua, Yang; Li, Yang; Lu, Yang; Li, Yuezun; Chen, Zhaoyun; Huang, Zehua; Chen, Zhe; Zhang, Zhe; He, Zhenyu; Hong, Zhibin

    2015-01-01

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT 2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014 with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.

  10. The Visual Object Tracking VOT2016 Challenge Results

    KAUST Repository

    Kristan, Matej

    2016-11-02

    The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).

  11. The Visual Object Tracking VOT2015 Challenge Results

    KAUST Repository

    Kristan, Matej

    2015-12-07

    The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT 2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014 with full annotation of targets by rotated bounding boxes and per-frame attribute, (ii) extensions of the VOT2014 evaluation methodology by introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.

  12. Size matters: large objects capture attention in visual search.

    Science.gov (United States)

    Proulx, Michael J

    2010-12-23

    Can objects or events ever capture one's attention in a purely stimulus-driven manner? A recent review of the literature set out the criteria required to find stimulus-driven attentional capture independent of goal-directed influences, and concluded that no published study has satisfied that criteria. Here visual search experiments assessed whether an irrelevantly large object can capture attention. Capture of attention by this static visual feature was found. The results suggest that a large object can indeed capture attention in a stimulus-driven manner and independent of displaywide features of the task that might encourage a goal-directed bias for large items. It is concluded that these results are either consistent with the stimulus-driven criteria published previously or alternatively consistent with a flexible, goal-directed mechanism of saliency detection.

  13. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    Directory of Open Access Journals (Sweden)

    Federica Bianca Rosselli

    2015-03-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.

  14. Object-based target templates guide attention during visual search.

    Science.gov (United States)

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first detailed study of eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within the object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. The visual system supports online translation invariance for object identification.

    Science.gov (United States)

    Bowers, Jeffrey S; Vankov, Ivan I; Ludwig, Casimir J H

    2016-04-01

    The ability to recognize the same image projected to different retinal locations is critical for visual object recognition in natural contexts. According to many theories, the translation invariance for objects extends only to trained retinal locations, so that a familiar object projected to a nontrained location should not be identified. In another approach, invariance is achieved "online," such that learning to identify an object in one location immediately affords generalization to other locations. We trained participants to name novel objects at one retinal location using eyetracking technology and then tested their ability to name the same images presented at novel retinal locations. Across three experiments, we found robust generalization. These findings provide a strong constraint for theories of vision.

  17. Coding the presence of visual objects in a recurrent neural network of visual cortex.

    Science.gov (United States)

    Zwickel, Timm; Wachtler, Thomas; Eckhorn, Reinhard

    2007-01-01

    Before we can recognize a visual object, our visual system has to segregate it from its background. This requires a fast mechanism for establishing the presence and location of objects independently of their identity. Recently, border-ownership neurons were recorded in monkey visual cortex which might be involved in this task [Zhou, H., Friedmann, H., von der Heydt, R., 2000. Coding of border ownership in monkey visual cortex. J. Neurosci. 20 (17), 6594-6611]. In order to explain the basic mechanisms required for fast coding of object presence, we have developed a neural network model of visual cortex consisting of three stages. Feed-forward and lateral connections support coding of Gestalt properties, including similarity, good continuation, and convexity. Neurons of the highest area respond to the presence of an object and encode its position, invariant of its form. Feedback connections to the lowest area facilitate orientation detectors activated by contours belonging to potential objects, and thus generate the experimentally observed border-ownership property. This feedback control acts fast and significantly improves the figure-ground segregation required for the consecutive task of object recognition.

  18. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  19. Sequential sampling of visual objects during sustained attention.

    Directory of Open Access Journals (Sweden)

    Jianrong Jia

    2017-06-01

    In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanisms in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. The results suggest …
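
    For readers unfamiliar with the TRF approach mentioned above, the sketch below shows one common way such response functions are estimated: a time-lagged stimulus design matrix is regressed onto a single EEG channel with ridge regularization. This is a generic illustration, not the authors' pipeline; the function name, lag window, and regularization value are arbitrary choices.

        import numpy as np

        def estimate_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, ridge=1.0):
            """Estimate a temporal response function by ridge regression.
            stimulus and eeg are equal-length 1-D arrays sampled at fs Hz."""
            lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
            # Build a (time x lags) design matrix of shifted copies of the stimulus.
            X = np.zeros((len(stimulus), len(lags)))
            for j, lag in enumerate(lags):
                if lag >= 0:
                    X[lag:, j] = stimulus[:len(stimulus) - lag]
                else:
                    X[:lag, j] = stimulus[-lag:]
            # Ridge-regularised least squares: w = (X'X + lambda*I)^-1 X'y
            w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)
            return lags / fs, w      # lag times in seconds, TRF weights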

  20. Visuospatial and visual object cognition in early Parkinson's disease

    OpenAIRE

    Possin, Katherine L.

    2007-01-01

    Recent evidence suggests that Parkinson's disease (PD) may be associated with greater impairment in visuospatial working memory as compared to visual object working memory. The nature of this selective impairment is not well understood, however, in part because successful performance on working memory tasks requires numerous cognitive processes. For example, the impairment may be limited to either the encoding or maintenance aspects of spatial working memory. Further, it is unknown at this po...

  1. Object-based target templates guide attention during visual search

    OpenAIRE

    Berggren, Nick; Eimer, Martin

    2018-01-01

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target f...

  2. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don’t induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning), making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes, for which colour can be used to discriminate old/new status.

  3. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  4. Feature Types and Object Categories: Is Sensorimotoric Knowledge Different for Living and Nonliving Things?

    Science.gov (United States)

    Ankerstein, Carrie A.; Varley, Rosemary A.; Cowell, Patricia E.

    2012-01-01

    Some models of semantic memory claim that items from living and nonliving domains have different feature-type profiles. Data from feature generation and perceptual modality rating tasks were compared to evaluate this claim. Results from two living (animals, fruits/vegetables) and two nonliving (tools, vehicles) categories showed that…

  5. Tracking Location and Features of Objects within Visual Working Memory

    Directory of Open Access Journals (Sweden)

    Michael Patterson

    2012-10-01

    Four studies examined how color or shape features can be accessed to retrieve the memory of an object's location. In each trial, 6 colored dots (Experiments 1 and 2) or 6 black shapes (Experiments 3 and 4) were displayed in randomly selected locations for 1.5 s. An auditory cue for either the shape or the color to-be-remembered was presented either simultaneously, immediately, or 2 s later. Non-informative cues appeared in some trials to serve as a control condition. After a 4 s delay, 5/6 objects were re-presented, and participants indicated the location of the missing object either by moving the mouse (Experiments 1 and 3) or by typing coordinates using a grid (Experiments 2 and 4). Compared to the control condition, cues presented simultaneously or immediately after stimuli improved location accuracy in all experiments. However, cues presented after 2 s only improved accuracy in Experiment 1. These results suggest that location information may not be addressable within visual working memory using shape features. In Experiment 1, but not Experiments 2–4, cues significantly improved accuracy when they indicated the missing object could be any of the three identical objects. In Experiments 2–4, location accuracy was highly impaired when the missing object came from a group of identical rather than uniquely identifiable objects. This indicates that when items with similar features are presented, location accuracy may be reduced. In summary, both feature type and response mode can influence the accuracy and accessibility of visual working memory for object location.

  6. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  7. Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI

    Directory of Open Access Journals (Sweden)

    Ran eManor

    2015-12-01

    Brain computer interfaces rely on machine learning algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for single-trial EEG classification in RSVP tasks. Deep neural networks have shown state-of-the-art performance in computer vision and speech recognition and thus hold great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five-category RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms.
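
    To make the general architecture concrete, here is a toy spatio-temporal convolutional network for single-trial EEG epochs in the spirit described above: a spatial convolution across electrodes followed by a temporal convolution. It is not the authors' model; the layer sizes and kernel widths are illustrative, and their spatio-temporal regularizer is omitted.

        import torch
        import torch.nn as nn

        class RSVPNet(nn.Module):
            """Toy spatio-temporal CNN for single-trial EEG epochs (target vs. standard)."""
            def __init__(self, n_channels=64, n_samples=128, n_classes=2):
                super().__init__()
                # Spatial convolution mixes all electrodes at each time point.
                self.spatial = nn.Conv2d(1, 16, kernel_size=(n_channels, 1))
                # Temporal convolution looks for short waveforms (e.g. P300-like deflections).
                self.temporal = nn.Conv2d(16, 16, kernel_size=(1, 7), padding=(0, 3))
                self.pool = nn.AvgPool2d(kernel_size=(1, 4))
                self.classify = nn.Linear(16 * (n_samples // 4), n_classes)

            def forward(self, x):                     # x: (batch, 1, channels, samples)
                x = torch.relu(self.spatial(x))       # -> (batch, 16, 1, samples)
                x = torch.relu(self.temporal(x))
                x = self.pool(x)                      # -> (batch, 16, 1, samples // 4)
                return self.classify(x.flatten(1))

        # scores = RSVPNet()(torch.randn(8, 1, 64, 128))   # 8 epochs, 64 channels, 128 samples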

  8. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  9. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
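
    The search-efficiency index referred to in both versions of this record is simply the slope of a linear fit of reaction time against the (approximate) set size. A minimal sketch with hypothetical numbers:

        import numpy as np

        # Hypothetical per-condition data: labeled-region count ("set size") and mean RT in ms.
        set_size = np.array([5, 12, 20, 31, 44, 57])
        mean_rt  = np.array([620, 655, 700, 745, 810, 870])

        slope_ms_per_item, intercept = np.polyfit(set_size, mean_rt, 1)
        print(f"search slope ~ {slope_ms_per_item:.1f} ms/item, intercept {intercept:.0f} ms")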

  10. How learning might strengthen existing visual object representations in human object-selective cortex.

    Science.gov (United States)

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

    Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.
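
    The generalization analysis described here amounts to training a pattern classifier on one session's multi-voxel activity patterns and testing it on the other session. The sketch below illustrates that logic with a linear support vector machine; the preprocessing, cross-validation scheme, and classifier used in the actual study may differ.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        def cross_session_decoding(patterns_pre, labels_pre, patterns_post, labels_post):
            """Train a linear decoder on pre-training multi-voxel patterns (trials x voxels)
            and test it on post-training patterns; above-chance accuracy indicates that
            the selectivity map generalises across sessions."""
            clf = make_pipeline(StandardScaler(), LinearSVC())
            clf.fit(patterns_pre, labels_pre)
            return clf.score(patterns_post, labels_post)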

  11. Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing

    Directory of Open Access Journals (Sweden)

    Fei Hui

    2017-03-01

    Performing tasks with a robot hand often requires complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. The paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques and proposes a novel combination of the fast level set method with a log-polar mapping of the visual data to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated in the control loop of a robot hand, it can contribute to ensuring a stable grasp and safe manipulation that preserve the physical integrity of the object.
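
    Because dynamic time warping is the step that makes contour descriptions of different lengths comparable, a bare-bones version of the distance computation is sketched below; the real system presumably applies it to richer contour signatures than the 1-D sequences assumed here.

        import numpy as np

        def dtw_distance(a, b):
            """Dynamic time warping distance between two 1-D sequences, e.g. contour
            signatures of different lengths extracted while the object deforms."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]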

  12. Cultural differences in visual object recognition in 3-year-old children

    Science.gov (United States)

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3 year olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children and likelihood of recognition increased for U.S., but not Japanese children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  13. Cultural differences in visual object recognition in 3-year-old children.

    Science.gov (United States)

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural progressing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Color-Function Categories that Prime Infants to Use Color Information in an Object Individuation Task

    Science.gov (United States)

    Wilcox, Teresa; Woods, Rebecca; Chapa, Catherine

    2008-01-01

    There is evidence for developmental hierarchies in the type of information to which infants attend when reasoning about objects. Investigators have questioned the origin of these hierarchies and how infants come to identify new sources of information when reasoning about objects. The goal of the present experiments was to shed light on this debate…

  15. Effects of object shape on the visual guidance of action.

    Science.gov (United States)

    Eloka, Owino; Franz, Volker H

    2011-04-22

    Little is known of how visual coding of the shape of an object affects grasping movements. We addressed this issue by investigating the influence of shape perturbations on grasping. Twenty-six participants grasped a disc or a bar that were chosen such that they could in principle be grasped with identical movements (i.e., relevant sizes were identical such that the final grips consisted of identical separations of the fingers and no parts of the objects constituted obstacles for the movement). Nevertheless, participants took object shape into account and grasped the bar with a larger maximum grip aperture and a different hand angle than the disc. In 20% of the trials, the object changed its shape from bar to disc or vice versa early or late during the movement. If there was enough time (early perturbations), grasps were often adapted in flight to the new shape. These results show that the motor system takes into account even small and seemingly irrelevant changes of object shape and adapts the movement in a fine-grained manner. Although this adaptation might seem computationally expensive, we presume that its benefits (e.g., a more comfortable and more accurate movement) outweigh the costs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Abnormalities of Object Visual Processing in Body Dysmorphic Disorder

    Science.gov (United States)

    Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.

    2013-01-01

    Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897
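
    The low- and high-spatial-frequency house images used in designs like this are typically produced by filtering the image in the Fourier domain. Below is a generic sketch assuming a hard radial cutoff specified in cycles per degree; the filter shape, cutoff, and parameter names are illustrative assumptions, not details taken from the study.

        import numpy as np

        def spatial_frequency_filter(image, cutoff_cpd, pixels_per_degree, mode="low"):
            """Keep only low or high spatial frequencies of a grayscale image,
            roughly as in LSF/HSF stimulus construction (cutoff in cycles/degree)."""
            f = np.fft.fftshift(np.fft.fft2(image))
            fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0], d=1.0 / pixels_per_degree))
            fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1], d=1.0 / pixels_per_degree))
            radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # cycles per degree
            mask = radius <= cutoff_cpd if mode == "low" else radius > cutoff_cpd
            return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))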

  17. Investigating category- and shape-selective neural processing in ventral and dorsal visual stream under interocular suppression.

    Science.gov (United States)

    Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido

    2015-01-01

    Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial if evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging-blood-oxygen level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression. © 2014 Wiley Periodicals, Inc.

  18. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    Science.gov (United States)

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  19. The difference in subjective and objective complexity in the visual short-term memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Sørensen, Thomas Alrik

    Several studies discuss the influence of complexity on visual short-term memory; some have demonstrated that short-term memory is surprisingly stable regardless of content (e.g. Luck & Vogel, 1997), whereas others have shown that memory can be influenced by the complexity of the stimulus (e.g. Alvarez & Cavanagh, 2004). But the term complexity is often not clearly defined. Sørensen (2008; see also Dall, Katsumi, & Sørensen, 2016) suggested that complexity can be related to two different types: objective and subjective complexity. This distinction is supported by a number of studies on the influence … characters. On the contrary, expertise or word frequency may reflect what could be termed subjective complexity, as this relates directly to the individual mental categories established. This study will be able to uncover more details on how we should define the complexity of objects to be encoded into short-term memory …

  20. Efficient Cross-Modal Transfer of Shape Information in Visual and Haptic Object Categorization

    Directory of Open Access Journals (Sweden)

    Nina Gaissert

    2011-10-01

    Categorization has traditionally been studied in the visual domain with only a few studies focusing on the abilities of the haptic system in object categorization. During the first years of development, however, touch and vision are closely coupled in the exploratory procedures used by the infant to gather information about objects. Here, we investigate how well shape information can be transferred between those two modalities in a categorization task. Our stimuli consisted of amoeba-like objects that were parametrically morphed in well-defined steps. Participants explored the objects in a categorization task either visually or haptically. Interestingly, both modalities led to similar categorization behavior suggesting that similar shape processing might occur in vision and haptics. Next, participants received training on specific categories in one of the two modalities. As would be expected, training increased performance in the trained modality; however, we also found significant transfer of training to the other, untrained modality after only relatively few training trials. Taken together, our results demonstrate that complex shape information can be transferred efficiently across the two modalities, which speaks in favor of multisensory, higher-level representations of shape.

  1. EXPENSES OF THE BUILDING ENTERPRISE AS ECONOMIC CATEGORY AND OBJECT OF MANAGEMENT

    Directory of Open Access Journals (Sweden)

    M. Z. Zeynalov

    2015-01-01

    The notions of «production costs» and «expenses» of a building enterprise are elaborated, and methods for the early-warning regulation of the expenses of the building enterprise are designed. Different approaches are considered to shaping the vector of factors that characterize the expenses of the building enterprise, and to modelling the expenses as an object of management in the manner of a «black box», which allows their efficient regulation by deviation and by disturbance in an unstable economic environment.

  2. Scale-adaptive Local Patches for Robust Visual Object Tracking

    Directory of Open Access Journals (Sweden)

    Kang Sun

    2014-04-01

    This paper discusses the problem of robustly tracking objects which undergo rapid and dramatic scale changes. To overcome the weakness of global appearance models, we present a novel scheme that combines the object’s global and local appearance features. The local feature is a set of local patches that geometrically constrain the changes in the target’s appearance. In order to adapt to the object’s geometric deformation, the local patches can be removed and added online. The addition of these patches is constrained by global features such as color, texture and motion. The global visual features are updated via the stable local patches during tracking. To deal with scale changes, we adapt the scale of the patches in addition to adapting the object bounding box. We evaluate our method by comparing it to several state-of-the-art trackers on publicly available datasets. The experimental results on challenging sequences confirm that, by using these scale-adaptive local patches and global properties, our tracker outperforms the related trackers in many cases by having a smaller failure rate as well as better accuracy.

  3. An insect-inspired model for visual binding I: learning objects and their characteristics.

    Science.gov (United States)

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain termed optic glomeruli reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining selectivity of visual information in a given visual submodality, and of associating visual signals produced by different objects in the visual field by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.

  4. Real-world visual statistics and infants' first-learned object names.

    Science.gov (United States)

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning, based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8.5- to 10.5-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered, with many different objects in view. However, the frequency distribution of object categories was extremely right-skewed, such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
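
    The "extremely right-skewed" frequency distribution reported here can be appreciated with a simple tabulation of object labels across frames. A toy example with invented labels:

        from collections import Counter

        # Hypothetical object labels pooled over many egocentric mealtime frames.
        labels = ["bowl", "spoon", "bowl", "chair", "bowl", "cup",
                  "spoon", "bowl", "table", "bowl"]

        counts = Counter(labels)
        ranked = counts.most_common()          # a few categories dominate the distribution
        total = sum(counts.values())
        top3_share = sum(c for _, c in ranked[:3]) / total
        print(ranked)
        print(f"top-3 categories account for {top3_share:.0%} of appearances")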

  5. The impact of visual gaze direction on auditory object tracking

    OpenAIRE

    Pomper, U.; Chait, M.

    2017-01-01

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention wh...

  6. Visual object agnosia is associated with a breakdown of object-selective responses in the lateral occipital cortex.

    Science.gov (United States)

    Ptak, Radek; Lazeyras, François; Di Pietro, Marie; Schnider, Armin; Simon, Stéphane R

    2014-07-01

    Patients with visual object agnosia fail to recognize the identity of visually presented objects despite preserved semantic knowledge. Object agnosia may result from damage to visual cortex lying close to or overlapping with the lateral occipital complex (LOC), a brain region that exhibits selectivity to the shape of visually presented objects. Despite this anatomical overlap the relationship between shape processing in the LOC and shape representations in object agnosia is unknown. We studied a patient with object agnosia following isolated damage to the left occipito-temporal cortex overlapping with the LOC. The patient showed intact processing of object structure, yet often made identification errors that were mainly based on the global visual similarity between objects. Using functional Magnetic Resonance Imaging (fMRI) we found that the damaged as well as the contralateral, structurally intact right LOC failed to show any object-selective fMRI activity, though the latter retained selectivity for faces. Thus, unilateral damage to the left LOC led to a bilateral breakdown of neural responses to a specific stimulus class (objects and artefacts) while preserving the response to a different stimulus class (faces). These findings indicate that representations of structure necessary for the identification of objects crucially rely on bilateral, distributed coding of shape features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories

    Science.gov (United States)

    Altvater-Mackensen, Nicole; Grossmann, Tobias

    2015-01-01

    Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…

  8. Visual working memory capacity and stimulus categories: a behavioral and electrophysiological investigation

    NARCIS (Netherlands)

    Diamantopoulou, Sofia; Poom, Leo; Klaver, Peter; Talsma, D.

    2011-01-01

    It has recently been suggested that visual working memory capacity may vary depending on the type of material that has to be memorized. Here, we use a delayed match-to-sample paradigm and event-related potentials (ERP) to investigate the neural correlates that are linked to these changes in …

  9. Category Specific Knowledge Modulate Capacity Limitations of Visual Short-Term Memory

    DEFF Research Database (Denmark)

    Dall, Jonas Olsen; Watanabe, Katsumi; Sørensen, Thomas Alrik

    2016-01-01

    We explore whether expertise can modulate the capacity of visual short-term memory, as some seem to argue that training affects the capacity of short-term memory [13] while others are not able to find this modulation [12]. We extend on a previous study [3] demonstrating expertise effects by investigating …, and expert observers (Japanese university students). For both the picture and the letter condition we find no performance difference in memory capacity; however, in the critical hiragana condition we demonstrate a systematic difference relating to expertise differences between the groups. These results are in line with the theoretical interpretation that visual short-term memory reflects the sum of the reverberating feedback loops to representations in long-term memory.

  10. Visual Field Preferences of Object Analysis for Grasping with One Hand

    Directory of Open Access Journals (Sweden)

    Ada eLe

    2014-10-01

    When we grasp an object using one hand, the opposite hemisphere predominantly guides the motor control of grasp movements (Davare et al., 2007; Rice et al., 2007). However, it is unclear whether visual object analysis for grasp control relies more on inputs (a) from the contralateral than the ipsilateral visual field, (b) from one dominant visual field regardless of the grasping hand, or (c) from both visual fields equally. For bimanual grasping of a single object we have recently demonstrated a visual field preference for the left visual field (Le and Niemeier, 2013a, 2013b), consistent with a general right-hemisphere dominance for sensorimotor control of bimanual grasps (Le et al., 2013). But visual field differences have never been tested for unimanual grasping. Therefore, here we asked right-handed participants to fixate to the left or right of an object and then grasp the object either with their right or left hand using a precision grip. We found that participants grasping with their right hand performed better with objects in the right visual field: maximum grip apertures (MGAs) were more closely matched to the object width and were smaller than for objects in the left visual field. In contrast, when people grasped with their left hand, preferences switched to the left visual field. What is more, MGA scaling showed greater visual field differences compared to right-hand grasping. Our data suggest that visual object analysis for unimanual grasping shows a preference for visual information from the ipsilateral visual field, and that the left hemisphere is better equipped to control grasps in both visual fields.

  11. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B.

    2006-01-01

    … a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories …

  12. How Does Using Object Names Influence Visual Recognition Memory?

    Science.gov (United States)

    Richler, Jennifer J.; Palmeri, Thomas J.; Gauthier, Isabel

    2013-01-01

    Two recent lines of research suggest that explicitly naming objects at study influences subsequent memory for those objects at test. Lupyan (2008) suggested that naming "impairs" memory by a representational shift of stored representations of named objects toward the prototype (labeling effect). MacLeod, Gopie, Hourihan, Neary, and Ozubko (2010)…

  13. On the time required for identification of visual objects

    DEFF Research Database (Denmark)

    Petersen, Anders

    The starting point for this thesis is a review of Bundesen’s theory of visual attention. This theory has been widely accepted as an appropriate model for describing data from an important class of psychological experiments known as whole and partial report. Analysing data from this class of experiments with the help of the theory of visual attention has proven to be an effective approach to examining cognitive parameters that are essential for a broad range of different patient groups. The theory of visual attention relies on a psychometric function that describes the ability to identify …, with the dataset that we collected, to directly analyse how confusability develops as a certain letter is exposed for increasingly longer time. An important scientific question is what shapes the psychometric function. It is conceivable that the function reflects both limitations and structure of the physical …

  14. 1/f² Characteristics and isotropy in the Fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs.

    Science.gov (United States)

    Koch, Michael; Denzler, Joachim; Redies, Christoph

    2010-08-19

    Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power law with increasing spatial frequency (1/f² characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Further on, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants, and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f² characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains …
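
    A radially averaged power spectrum and its log-log slope, as described in the opening sentence, can be computed along the following lines; the exact windowing, normalization, and frequency range used by the authors may differ.

        import numpy as np

        def radial_power_slope(image):
            """Radially average the 2-D Fourier power spectrum of a grayscale image
            and fit log power against log spatial frequency; natural scenes and many
            artworks give a slope near -2, i.e. roughly 1/f^2 spectral power."""
            f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
            power = np.abs(f) ** 2
            h, w = image.shape
            y, x = np.indices((h, w))
            r = np.hypot(y - h // 2, x - w // 2).astype(int)
            freqs = np.arange(1, min(h, w) // 2)           # skip DC, stay below Nyquist
            sums = np.bincount(r.ravel(), weights=power.ravel())
            counts = np.bincount(r.ravel())
            radial = sums[freqs] / counts[freqs]           # mean power per radius
            slope, _ = np.polyfit(np.log(freqs), np.log(radial), 1)
            return slope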

  15. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
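
    Band-limited power measures such as the occipital alpha and central theta effects reported here are commonly obtained from a power spectral density estimate. A minimal sketch using Welch's method, with the band limits supplied by the caller as assumptions:

        import numpy as np
        from scipy.signal import welch

        def band_power(eeg, fs, band):
            """Mean power of a single EEG channel within a frequency band (Hz),
            e.g. alpha (8, 12) over occipital or theta (4, 7) over central sites."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            lo, hi = band
            mask = (freqs >= lo) & (freqs <= hi)
            return np.mean(psd[mask])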

  16. Preliminary assessment of RTR and visual characterization for selected waste categories

    International Nuclear Information System (INIS)

    Ziegler, D.L.

    1992-01-01

    The first transuranic (TRU) waste shipped to the Waste Isolation Pilot Plant (WIPP) will be for the WIPP Experimental Program. The purpose of the Experimental Program is to determine the gas generation rates and the potential for gas generation by the waste after it has been permanently stored at the WIPP. The first phase of these tests will be performed at WIPP with test bins that have been filled and sealed in accordance with the test plan for bin-scale tests. A second phase of the testing, the Alcove Test, will involve drummed waste placed in sealed rooms within WIPP. A preliminary test was conducted at the Rocky Flats Plant (RFP) to evaluate potential methods for use in the characterization of waste. The waste material types to be identified were as defined in the bin-scale test plan: Cellulosics, Plastic, Rubber, Corroding Metal/Steel, Corroding Metal/Aluminum, Non-corroding Metal, Solid Inorganic, Inorganic Sludges, Other Organics, and Cements. A total of 19 drums representing eleven different waste types (Rocky Flats Plant Identification Description Codes, IDCs) and seven different TRUCON Code materials were evaluated. They included Dry Combustibles, Wet Combustibles, Plastic, Light Metal, Glass (Non-Raschig Ring), Raschig Rings, MgO Crucibles, HEPA Filters, Insulation, Leaded Dry Box Gloves, and Graphite. These Identification Description Codes were chosen because of their abundance at the plant, as well as the variability in drum loading techniques. The goal of this test was to evaluate the effectiveness of real-time radiography (RTR) inspection and visual inspection as characterization methods for waste. In addition, gas analysis of the drum head space was conducted to provide an indication of the types of gas generated.

  17. The Economic Essence of the Category of «Costs» as an Object of Accounting and Internal Control

    Directory of Open Access Journals (Sweden)

    Zhadan Tetiana A.

    2017-11-01

    Full Text Available The purpose of the article is to disclose the economic essence of the category of «costs» as an object of accounting and internal control. A number of approaches to defining the economic essence of the concepts of «costs», «control» and «internal control» have been identified. In the course of the study it was found that the most common approaches to interpreting the concept of «costs» disclose its essence from the standpoint of economic theory and accounting, while the concepts of «control» and «internal control» are most often interpreted from the standpoint of functional, systemic and process approaches. The authors propose their own definition of the synthesized concept of «internal control of costs», which is understood as a system of control measures, observations and procedures aimed at identifying deviations in the accounting of enterprise costs and at establishing the legitimacy, validity, rationality, efficiency and economic feasibility of their implementation, with a view to preventing and excluding such deviations and unjustified losses in the future. A prospect for further research in this direction is the search for efficient forms, methods and instruments for managing enterprise costs.

  18. The Correlation between Subjective and Objective Visual Function Test in Optic Neuropathy Patients

    Directory of Open Access Journals (Sweden)

    Ungsoo Kim

    2012-10-01

    Full Text Available Purpose: To investigate the correlation between visual acuity and quantitative measurements from visual evoked potentials (VEP), optical coherence tomography (OCT), and visual field testing (VF) in optic neuropathy patients. Methods: We evaluated 28 patients with optic neuropathy. Patients who had a pale disc, visual acuity of less than 0.5, and an abnormal visual field defect were included. At the first visit, we assessed visual acuity and VF as subjective measures and OCT and VEP as objective measures. With spectral-domain OCT, rim volume and average and temporal-quadrant retinal nerve fiber layer (RNFL) thickness were measured. Pattern VEP (N75, P100 and N135 latencies, and P100 amplitude) and Humphrey 24-2 visual field indices (mean deviation and pattern standard deviation) were also obtained. Using Spearman's correlation coefficient, the correlation between visual acuity and the various measures was assessed. Results: Visual acuity was most strongly correlated with the mean deviation of Humphrey perimetry.
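
    As an illustration of the rank-correlation analysis mentioned above (Spearman's coefficient between visual acuity and the other measures), here is a minimal Python sketch using SciPy; the table, column names, and values are hypothetical and not taken from the study.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-patient table; column names and values are illustrative only.
df = pd.DataFrame({
    "logmar_acuity": [1.0, 0.7, 0.4, 1.3, 0.9, 0.5],
    "hvf_mean_dev":  [-22.1, -15.4, -8.2, -27.6, -19.0, -10.3],
    "rnfl_avg_um":   [61, 72, 88, 55, 66, 83],
    "p100_latency":  [128, 119, 108, 135, 124, 112],
})

# Rank correlation of visual acuity with each candidate measure.
for col in ["hvf_mean_dev", "rnfl_avg_um", "p100_latency"]:
    rho, p = spearmanr(df["logmar_acuity"], df[col])
    print(f"{col}: rho={rho:+.2f}, p={p:.3f}")
```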

  19. Visual object tracking by correlation filters and online learning

    Science.gov (United States)

    Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei

    2018-06-01

    Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve both high accuracy and fast speed in object tracking. Currently, correlation filter-based trackers (CFTs) show promising performance in object tracking. CFTs estimate the target's position using correlation filters with different kinds of features. However, most CFTs can hardly re-detect the target in the case of long-term tracking drift. In this paper, a feature-integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multiple features. To reduce tracking drift, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to state-of-the-art approaches.
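
    The following Python sketch shows a generic single-channel correlation filter in the frequency domain (in the spirit of MOSSE-style trackers); it is not the authors' CFOL tracker with multi-feature integration and online re-detection, and the patch size, regularization constant, and Gaussian width are arbitrary.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak at the patch centre."""
    h, w = shape
    y, x = np.indices((h, w))
    return np.exp(-((y - h // 2) ** 2 + (x - w // 2) ** 2) / (2 * sigma ** 2))

def train_filter(patch, lam=1e-3):
    """Closed-form filter (conjugate form) mapping the patch to the Gaussian response."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_response(patch.shape))
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def track(H_conj, new_patch):
    """Correlate the filter with a new patch; the response peak gives the target shift."""
    response = np.real(np.fft.ifft2(np.fft.fft2(new_patch) * H_conj))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    h, w = response.shape
    return dy - h // 2, dx - w // 2, response.max()   # offset from centre + score

# Toy example: the trained patch circularly shifted by (3, -2) pixels.
rng = np.random.default_rng(1)
patch = rng.standard_normal((64, 64))
H_conj = train_filter(patch)
shifted = np.roll(np.roll(patch, 3, axis=0), -2, axis=1)
print(track(H_conj, shifted))   # approximately (3, -2, high score)
```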

  20. Visual Servoing for Object Manipulation: A Case Study in Slaughterhouse

    DEFF Research Database (Denmark)

    Wu, Haiyan; Andersen, Thomas Timm; Andersen, Nils Axel

    2016-01-01

    Automation for slaughterhouses challenges the design of the control system due to the variety of the objects. Real-time sensing provides instantaneous information about each piece of work and is thus useful for robotic systems developed for slaughterhouses. In this work, a pick-and-place task which.... An online and offline combined path planning algorithm is proposed to generate the desired path for the robot control. An industrial robot arm is applied to execute the path. The system is implemented in a lab-scale experiment, and the results show a high success rate of object manipulation in the pick...

  1. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    Science.gov (United States)

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    The support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of the fMRI data would be included in SVM classifiers with linear and RBF kernels for classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy at a lower time cost. The present work provides the first empirical comparison of linear and RBF SVM for classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are the two suggested solutions; if users are more concerned about computation time, RBF SVM applied to a relatively small set of voxels, with part of the principal components kept as features, is the better choice.
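
    A minimal scikit-learn sketch of the kind of comparison described above, using synthetic data in place of fMRI volumes; univariate selection (SelectKBest) and PCA stand in for two of the voxel selection schemes, and the specific parameters are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for fMRI data: 120 trials x 2000 voxels, 4 object categories.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 2000))
y = rng.integers(0, 4, size=120)
X[np.arange(120), y] += 2.0          # inject a weak class-dependent signal

pipelines = {
    "linear SVM + univariate voxel selection":
        make_pipeline(StandardScaler(), SelectKBest(f_classif, k=500),
                      SVC(kernel="linear", C=1.0)),
    "RBF SVM + PCA (small feature set)":
        make_pipeline(StandardScaler(), PCA(n_components=30),
                      SVC(kernel="rbf", C=1.0, gamma="scale")),
}

for name, pipe in pipelines.items():
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```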

  2. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro Eguchi

    2015-08-01

    Full Text Available Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognise the whole object.

  3. High-Performance Neural Networks for Visual Object Classification

    OpenAIRE

    Cireşan, Dan C.; Meier, Ueli; Masci, Jonathan; Gambardella, Luca M.; Schmidhuber, Jürgen

    2011-01-01

    We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way. Our deep hierarchical architectures achieve the best published results on benchmarks for object classification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple back-propagation perform better ...
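
    As a rough illustration of supervised training of a small convolutional network by back-propagation (not the authors' GPU implementation or their deep architecture), the following PyTorch sketch defines a toy CNN for 32x32 RGB inputs such as CIFAR-10 and runs one training step on random data.

```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """Toy convolutional network for 32x32 RGB images (e.g., CIFAR-10)."""
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One supervised training step with plain back-propagation on random data.
model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("loss:", loss.item())
```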

  4. ROBUSTNESS AND PREDICTION ACCURACY OF MACHINE LEARNING FOR OBJECTIVE VISUAL QUALITY ASSESSMENT

    OpenAIRE

    Hines, Andrew; Kendrick, Paul; Barri, Adriaan; Narwaria, Manish; Redi, Judith A.

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptim...

  5. Robustness and prediction accuracy of machine learning for objective visual quality assessment

    OpenAIRE

    HINES, ANDREW

    2014-01-01

    PUBLISHED Lisbon, Portugal Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specific...

  6. The Role of Fixation and Visual Attention in Object Recognition.

    Science.gov (United States)

    1995-01-01

    computers", Technical Report, Aritificial Intelligence Lab, M.I. T., AI-Memo-915, June 1986. [29] D.P. Huttenlocher and S.Ullman, "Object Recognition Using...attention", Technical Report, Aritificial Intelligence Lab, M.I. T., AI-memo-770, Jan 1984. [35] E.Krotkov, K. Henriksen and R. Kories, "Stereo...MIT Artificial Intelligence Laboratory [ PCTBTBimON STATEMENT X \\ Afipioved tor puciic reieo*«* \\ »?*•;.., jDi*tiibutK» U»lisut»d* 19951004

  7. Visual recognition and tracking of objects for robot sensing

    International Nuclear Information System (INIS)

    Lowe, D.G.

    1994-01-01

    An overview is presented of a number of techniques used for recognition and motion tracking of articulated 3-D objects. With recent advances in robust methods for model-based vision and improved performance of computer systems, it will soon be possible to build low-cost, high-reliability systems for model-based motion tracking. Such systems can be expected to open up a wide range of applications in robotics by providing machines with real-time information about their environment. This paper describes a number of techniques for efficiently matching parameterized 3-D models to image features. The matching methods are robust with respect to missing and ambiguous features as well as measurement errors. Unlike most previous work on model-based motion tracking, this system provides for the integrated treatment of matching and measurement errors during motion tracking. The initial application is in a system for real-time motion tracking of articulated 3-D objects. With the future addition of an indexing component, these same techniques can also be used for general model-based recognition. The current real-time implementation is based on matching straight line segments, but some preliminary experiments on matching arbitrary curves are also described. (author)

  8. Relevance of useful visual words in object retrieval

    Science.gov (United States)

    Qi, Siyuan; Luo, Yupin

    2013-07-01

    The most popular methods in object retrieval are mostly based on the bag-of-words (BOW) model, which is both effective and efficient. In this paper we present a method that uses the relations between words of the vocabulary to improve retrieval performance within the BOW framework. In the basic BOW retrieval framework, only a few words of the vocabulary are useful for retrieval, namely those that are spatially consistent across images. We introduce a method to select these useful words and to build a relevance measure between them. We combine this useful relevance with the basic BOW framework and with query expansion as well. The useful relevance is able to discover latent related words that do not appear in the query image, so that we can obtain a more accurate vector model for retrieval. Combined with query expansion, retrieval performance is better at a lower time cost.
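
    The sketch below shows a plain bag-of-words retrieval pipeline (k-means visual vocabulary, tf-idf weighted histograms, cosine-similarity ranking) as background for the BOW framework the paper builds on; it does not implement the proposed useful-word relevance, and local descriptors are replaced by random vectors to keep the example self-contained.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)

# Stand-in for local descriptors (e.g., SIFT): one array of 128-D vectors per image.
database = [rng.standard_normal((rng.integers(80, 120), 128)) for _ in range(20)]
query = rng.standard_normal((100, 128))

# 1) Visual vocabulary: cluster all database descriptors into k visual words.
k = 64
vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(np.vstack(database))

def bow_histogram(descriptors):
    """Hard-assign descriptors to visual words and count occurrences."""
    words = vocab.predict(descriptors)
    return np.bincount(words, minlength=k)

# 2) tf-idf weighted BOW vectors for the database and the query.
counts = np.array([bow_histogram(d) for d in database])
tfidf = TfidfTransformer().fit(counts)
db_vecs = tfidf.transform(counts)
q_vec = tfidf.transform(bow_histogram(query).reshape(1, -1))

# 3) Rank database images by cosine similarity to the query.
scores = cosine_similarity(q_vec, db_vecs).ravel()
print("top matches:", np.argsort(scores)[::-1][:5])
```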

  9. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    Science.gov (United States)

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  10. Joint Tensor Feature Analysis For Visual Object Recognition.

    Science.gov (United States)

    Wong, Wai Keung; Lai, Zhihui; Xu, Yong; Wen, Jiajun; Ho, Chu Po

    2015-11-01

    Tensor-based object recognition has been widely studied in the past several years. This paper focuses on the issue of joint feature selection from tensor data and proposes a novel method called joint tensor feature analysis (JTFA) for tensor feature extraction and recognition. In order to obtain a set of jointly sparse projections for tensor feature extraction, we define the modified within-class tensor scatter value and the modified between-class tensor scatter value for regression. The k-mode optimization technique and the L(2,1)-norm jointly sparse regression are combined together to compute the optimal solutions. The convergence analysis, computational complexity analysis and the essence of the proposed method/model are also presented. Interestingly, the proposed method is very similar to singular value decomposition of the scatter matrix but with a sparsity constraint on the right singular value matrix, or to eigen-decomposition of the scatter matrix carried out in a sparse manner. Experimental results on some tensor datasets indicate that JTFA outperforms some well-known tensor feature extraction and selection algorithms.

  11. Blindness to background: an inbuilt bias for visual objects.

    Science.gov (United States)

    O'Hanlon, Catherine G; Read, Jenny C A

    2017-09-01

    Sixty-eight 2- to 12-year-olds and 30 adults were shown colorful displays on a touchscreen monitor and trained to point to the location of a named color. Participants located targets near-perfectly when presented with four abutting colored patches. When presented with three colored patches on a colored background, toddlers failed to locate targets in the background. Eye tracking demonstrated that the effect was partially mediated by a tendency not to fixate the background. However, the effect was abolished when the targets were named as nouns, whilst the change to nouns had little impact on eye movement patterns. Our results imply a powerful, inbuilt tendency to attend to objects, which may slow the development of color concepts and acquisition of color words. A video abstract of this article can be viewed at: https://youtu.be/TKO1BPeAiOI. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  12. Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants

    Science.gov (United States)

    Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.

    2014-01-01

    Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: Six-month-old infants can store in VSTM information about only a simple object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…

  13. Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  14. A Visual Short-Term Memory Advantage for Objects of Expertise

    Science.gov (United States)

    Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel

    2009-01-01

    Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects; this advantage may stem from the holistic nature of face processing. If the holistic processing explains this advantage, object expertise--which also relies on holistic processing--should endow experts…

  15. Multisensory object perception in infancy: 4-month-olds perceive a mistuned harmonic as a separate auditory and visual object.

    Science.gov (United States)

    Smith, Nicholas A; Folland, Nicole A; Martinez, Diana M; Trainor, Laurel J

    2017-07-01

    Infants learn to use auditory and visual information to organize the sensory world into identifiable objects with particular locations. Here we use a behavioural method to examine infants' use of harmonicity cues to auditory object perception in a multisensory context. Sounds emitted by different objects sum in the air and the auditory system must figure out which parts of the complex waveform belong to different sources (auditory objects). One important cue to this source separation is that complex tones with pitch typically contain a fundamental frequency and harmonics at integer multiples of the fundamental. Consequently, adults hear a mistuned harmonic in a complex sound as a distinct auditory object (Alain, Theunissen, Chevalier, Batty, & Taylor, 2003). Previous work by our group demonstrated that 4-month-old infants are also sensitive to this cue. They behaviourally discriminate a complex tone with a mistuned harmonic from the same complex with in-tune harmonics, and show an object-related event-related potential (ERP) electrophysiological (EEG) response to the stimulus with mistuned harmonics. In the present study we use an audiovisual procedure to investigate whether infants perceive a complex tone with an 8% mistuned harmonic as emanating from two objects, rather than merely detecting the mistuned cue. We paired in-tune and mistuned complex tones with visual displays that contained either one or two bouncing balls. Four-month-old infants showed surprise at the incongruous pairings, looking longer at the display of two balls when paired with the in-tune complex and at the display of one ball when paired with the mistuned harmonic complex. We conclude that infants use harmonicity as a cue for source separation when integrating auditory and visual information in object perception. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Encoding of faces and objects into visual working memory: an event-related brain potential study.

    Science.gov (United States)

    Meinhardt-Injac, Bozana; Persike, Malte; Berti, Stefan

    2013-09-11

    Visual working memory (VWM) is an important prerequisite for cognitive functions, but little is known about whether the general perceptual processing advantage for faces also applies to VWM processes. The aim of the present study was (a) to test whether there is a general advantage for face stimuli in VWM and (b) to unravel whether this advantage is related to early sensory processing stages. To address these questions, we compared encoding of faces and complex nonfacial objects into VWM within a combined behavioral and event-related brain potential (ERP) study. In detail, we tested whether the N170 ERP component - which is associated with face-specific holistic processing - is affected by memory load for faces or whether it might be involved in WM encoding of any complex object. Participants performed a same-different task with either face or watch stimuli and with two different levels of memory load. Behavioral measures show an advantage for faces at the level of VWM, mirrored in higher estimated VWM capacity (i.e. Cowan's K) for faces compared with watches. In the ERP, the N170 amplitude was enhanced for faces compared with watches. However, the N170 was not modulated by working memory load either for faces or for watches. In contrast, the P3b component was affected by memory load irrespective of the stimulus category. Taken together, the results suggest that the VWM advantage for faces is not reflected at the sensory stages of stimulus processing, but rather at later higher-level processes as reflected by the P3b component.

  17. What Is the Unit of Visual Attention? Object for Selection, but Boolean Map for Access

    Science.gov (United States)

    Huang, Liqiang

    2010-01-01

    In the past 20 years, numerous theories and findings have suggested that the unit of visual attention is the object. In this study, I first clarify 2 different meanings of unit of visual attention, namely the unit of access in the sense of measurement and the unit of selection in the sense of division. In accordance with this distinction, I argue…

  18. The Nature of Experience Determines Object Representations in the Visual System

    Science.gov (United States)

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  19. Research on Visual Servo Grasping of Household Objects for Nonholonomic Mobile Manipulator

    Directory of Open Access Journals (Sweden)

    Huangsheng Xie

    2014-01-01

    Full Text Available This paper focuses on the problem of visual servo grasping of household objects for a nonholonomic mobile manipulator. Firstly, a new kind of artificial object mark based on QR (Quick Response) Code is designed, which can be affixed to the surface of household objects. Secondly, after summarizing the vision-based autonomous mobile manipulation system as a generalized manipulator, the generalized manipulator's kinematic model is established, the analytical inverse kinematic solutions of the generalized manipulator are acquired, and a novel active-vision based camera calibration method is proposed to determine the hand-eye relationship. Finally, a visual servo switching control law is designed to control the service robot to finish the object grasping operation. Experimental results show that the QR Code-based artificial object mark can overcome the difficulties brought by household objects' variety and operation complexity, and the proposed visual servo scheme makes it possible for the service robot to grasp and deliver objects efficiently.

  20. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    Science.gov (United States)

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.

  1. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    Science.gov (United States)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determining factor for visual attention. During the experiment, 28 observers watched scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using content with low-level visual features. With univariate tests of significance (MANOVA), it was found that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
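
    As an illustration of how fixation data are commonly turned into saliency maps for such comparisons (a generic sketch, not the authors' exact procedure), the following Python code accumulates fixation points and smooths them with a Gaussian; the screen size, smoothing width, and toy fixation distributions are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, screen_hw=(1080, 1920), sigma_px=35):
    """Accumulate fixation points into a smoothed, normalised saliency map.

    fixations: iterable of (row, col) gaze positions in pixels.
    sigma_px:  Gaussian spread, often chosen to approximate ~1 degree of visual angle.
    """
    density = np.zeros(screen_hw)
    for r, c in fixations:
        if 0 <= r < screen_hw[0] and 0 <= c < screen_hw[1]:
            density[int(r), int(c)] += 1.0
    density = gaussian_filter(density, sigma=sigma_px)
    return density / density.max() if density.max() > 0 else density

# Toy example: compare maps from two viewing conditions (e.g., 2D vs. 3D versions).
rng = np.random.default_rng(0)
fix_2d = rng.uniform([0, 0], [1080, 1920], size=(200, 2))      # spread-out gaze
fix_3d = rng.normal([540, 960], [120, 200], size=(200, 2))     # gaze drawn to centre
sal_2d = fixation_density_map(fix_2d)
sal_3d = fixation_density_map(fix_3d)
# Simple similarity between the two maps (Pearson correlation of pixel values).
print(np.corrcoef(sal_2d.ravel(), sal_3d.ravel())[0, 1])
```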

  2. Visual SLAM and Moving-object Detection for a Small-size Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Yin-Tien Wang

    2010-09-01

    Full Text Available In this paper, a novel moving object detection (MOD) algorithm is developed and integrated with robot visual Simultaneous Localization and Mapping (vSLAM). The moving object is assumed to be a rigid body, and its coordinate system in space is represented by a position vector and a rotation matrix. The MOD algorithm is composed of detection of image features, initialization of image features, and calculation of object coordinates. Experiments were implemented on a small-size humanoid robot, and the results show that the proposed algorithm is efficient for robot visual SLAM and moving object detection.

  3. Visual working memory for global, object, and part-based information.

    Science.gov (United States)

    Patterson, Michael D; Bly, Benjamin Martin; Porcelli, Anthony J; Rypma, Bart

    2007-06-01

    We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

  4. Visual Neurons in the Superior Colliculus Discriminate Many Objects by Their Historical Values

    Directory of Open Access Journals (Sweden)

    Whitney S. Griggs

    2018-06-01

    Full Text Available The superior colliculus (SC) is an important structure in the mammalian brain that orients the animal toward distinct visual events. Visually responsive neurons in the SC are modulated by visual object features, including size, motion, and color. However, it remains unclear whether SC activity is modulated by non-visual object features, such as the reward value associated with the object. To address this question, three monkeys were trained (>10 days) to saccade to multiple fractal objects, half of which were consistently associated with large rewards while the other half were associated with small rewards. This created historically high-valued (‘good’) and low-valued (‘bad’) objects. During the neuronal recordings from the SC, the monkeys maintained fixation at the center while the objects were flashed in the receptive field of the neuron without any reward. We found that approximately half of the visual neurons responded more strongly to the good than to the bad objects. In some neurons, this value coding remained intact for a long time (>1 year) after the last object-reward association learning. Notably, the neuronal discrimination of reward values started about 100 ms after the appearance of the visual objects and lasted for more than 100 ms. These results provide evidence that SC neurons can discriminate objects by their historical (long-term) values. This object value information may be provided by the basal ganglia, especially the circuit originating from the tail of the caudate nucleus. The information may be used by the neural circuits inside the SC for motor (saccade) output or may be sent to circuits outside the SC for future behavior.

  5. Fragile visual short-term memory is an object-based and location-specific store.

    Science.gov (United States)

    Pinto, Yaïr; Sligte, Ilja G; Shapiro, Kimron L; Lamme, Victor A F

    2013-08-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to unveil the functional underpinnings of this memory storage. We found that FM is only completely erased when the new visual scene appears at the same location and consists of the same objects as the to-be-recalled information. This result has two important implications: First, it shows that FM is an object- and location-specific store, and second, it suggests that FM might be used in everyday life when the presentation of visual information is appropriately designed.

  6. Effects of verbal and nonverbal interference on spatial and object visual working memory.

    Science.gov (United States)

    Postle, Bradley R; D'Esposito, Mark; Corkin, Suzanne

    2005-03-01

    We tested the hypothesis that a verbal coding mechanism is necessarily engaged by object, but not spatial, visual working memory tasks. We employed a dual-task procedure that paired n-back working memory tasks with domain-specific distractor trials inserted into each interstimulus interval of the n-back tasks. In two experiments, object n-back performance demonstrated greater sensitivity to verbal distraction, whereas spatial n-back performance demonstrated greater sensitivity to motion distraction. Visual object and spatial working memory may differ fundamentally in that the mnemonic representation of featural characteristics of objects incorporates a verbal (perhaps semantic) code, whereas the mnemonic representation of the location of objects does not. Thus, the processes supporting working memory for these two types of information may differ in more ways than those dictated by the "what/where" organization of the visual system, a fact more easily reconciled with a component process than a memory systems account of working memory function.

  7. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic, both vision and echolocation; visual, vision only; echoic, echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  8. Deformation-specific and deformation-invariant visual object recognition: pose vs identity recognition of people and deforming objects

    Directory of Open Access Journals (Sweden)

    Tristan J Webb

    2014-04-01

    Full Text Available When we see a human sitting down, standing up, or walking, we can recognise one of these poses independently of the individual, or we can recognise the individual person, independently of the pose. The same issues arise for deforming objects. For example, if we see a flag deformed by the wind, either blowing out or hanging languidly, we can usually recognise the flag, independently of its deformation; or we can recognise the deformation independently of the identity of the flag. We hypothesize that these types of recognition can be implemented by the primate visual system using, as a learning principle, the temporo-spatial continuity of objects as they transform. In particular, we hypothesize that pose or deformation can be learned under conditions in which large numbers of different people are successively seen in the same pose, or objects in the same deformation. We also hypothesize that person-specific representations that are independent of pose, and object-specific representations that are independent of deformation and view, could be built when individual people or objects are observed successively transforming from one pose or deformation and view to another. These hypotheses were tested in a simulation of the ventral visual system, VisNet, which uses temporal continuity, implemented in a synaptic learning rule with a short-term memory trace of previous neuronal activity, to learn invariant representations. It was found that depending on the statistics of the visual input, either pose-specific or deformation-specific representations could be built that were invariant with respect to individual and view; or that identity-specific representations could be built that were invariant with respect to pose or deformation and view. We propose that this is how pose-specific and pose-invariant, and deformation-specific and deformation-invariant, perceptual representations are built in the brain.

  9. Right fusiform response patterns reflect visual object identity rather than semantic similarity.

    Science.gov (United States)

    Bruffaerts, Rose; Dupont, Patrick; De Grauwe, Sophie; Peeters, Ronald; De Deyne, Simon; Storms, Gerrit; Vandenberghe, Rik

    2013-12-01

    We previously reported the neuropsychological consequences of a lesion confined to the middle and posterior part of the right fusiform gyrus (case JA) causing a partial loss of knowledge of visual attributes of concrete entities in the absence of category-selectivity (animate versus inanimate). We interpreted this in the context of a two-step model that distinguishes structural description knowledge from associative-semantic processing and implicated the lesioned area in the former process. To test this hypothesis in the intact brain, multi-voxel pattern analysis was used in a series of event-related fMRI studies in a total of 46 healthy subjects. We predicted that activity patterns in this region would be determined by the identity of rather than the conceptual similarity between concrete entities. In a prior behavioral experiment, features were generated for each entity by more than 1000 subjects. Based on a hierarchical clustering analysis, the entities were organised into 3 semantic clusters (musical instruments, vehicles, tools). Entities were presented as words or pictures. With foveal presentation of pictures, cosine similarity between fMRI response patterns in right fusiform cortex appeared to reflect both the identity of and the semantic similarity between the entities. No such effects were found for words in this region. The effect of object identity was invariant for location, scaling, orientation axis and color (grayscale versus color). It also persisted for different exemplars referring to the same concrete entity. The apparent semantic similarity effect, however, was not invariant. This study provides further support for a neurobiological distinction between structural description knowledge and processing of semantic relationships and confirms the role of right mid-posterior fusiform cortex in the former process, in accordance with previous lesion evidence. © 2013.

  10. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T. Rolls

    2012-06-01

    Full Text Available Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  11. Supporting Sensemaking of Complex Objects with Visualizations: Visibility and Complementarity of Interactions

    Directory of Open Access Journals (Sweden)

    Kamran Sedig

    2016-10-01

    Full Text Available Making sense of complex objects is difficult, and typically requires the use of external representations to support cognitive demands while reasoning about the objects. Visualizations are one type of external representation that can be used to support sensemaking activities. In this paper, we investigate the role of two design strategies in making the interactive features of visualizations more supportive of users’ exploratory needs when trying to make sense of complex objects. These two strategies are visibility and complementarity of interactions. We employ a theoretical framework concerned with human–information interaction and complex cognitive activities to inform, contextualize, and interpret the effects of the design strategies. The two strategies are incorporated in the design of Polyvise, a visualization tool that supports making sense of complex four-dimensional geometric objects. A mixed-methods study was conducted to evaluate the design strategies and the overall usability of Polyvise. We report the findings of the study, discuss some implications for the design of visualization tools that support sensemaking of complex objects, and propose five design guidelines. We anticipate that our results are transferrable to other contexts, and that these two design strategies can be used broadly in visualization tools intended to support activities with complex objects and information spaces.

  12. BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?

    Science.gov (United States)

    Bennedsen, Jens; Schulte, Carsten

    2010-01-01

    This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control-group experiment in an introductory programming…

  13. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  15. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    Science.gov (United States)

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  16. The role of space and time in object-based visual search

    NARCIS (Netherlands)

    Schreij, D.B.B.; Olivers, C.N.L.

    2013-01-01

    Recently we have provided evidence that observers more readily select a target from a visual search display if the motion trajectory of the display object suggests that the observer has dealt with it before. Here we test the prediction that this object-based memory effect on search breaks down if

  17. Autonomous learning of robust visual object detection and identification on a humanoid

    NARCIS (Netherlands)

    Leitner, J.; Chandrashekhariah, P.; Harding, S.; Frank, M.; Spina, G.; Förster, A.; Triesch, J.; Schmidhuber, J.

    2012-01-01

    In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for

  18. Error-Driven Learning in Visual Categorization and Object Recognition: A Common-Elements Model

    Science.gov (United States)

    Soto, Fabian A.; Wasserman, Edward A.

    2010-01-01

    A wealth of empirical evidence has now accumulated concerning animals' categorizing photographs of real-world objects. Although these complex stimuli have the advantage of fostering rapid category learning, they are difficult to manipulate experimentally and to represent in formal models of behavior. We present a solution to the representation…

  19. Different measures of structural similarity tap different aspects of visual object processing

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2017-01-01

    The structural similarity of objects has been an important variable in explaining why some objects are easier to categorize at a superordinate level than to individuate, and also why some patients with brain injury have more difficulties in recognizing natural (structurally similar) objects than...... artifacts (structurally distinct objects). In spite of its merits as an explanatory variable, structural similarity is not a unitary construct, and it has been operationalized in different ways. Furthermore, even though measures of structural similarity have been successful in explaining task and category-effects...

  20. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
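
    A minimal sketch of the kind of trace learning rule described above, a Hebbian update gated by an exponentially decaying memory trace of postsynaptic activity, is given below; it is not the full multi-layer VisNet architecture, and the network size, learning rate, and trace decay are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_outputs = 100, 10
W = rng.uniform(size=(n_outputs, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows keep weights bounded

eta = 0.05     # learning rate
delta = 0.8    # trace decay: how much of the previous trace is carried over

def transform_sequence(n_steps=5):
    """Stand-in for one object seen under successive transforms (correlated inputs)."""
    base = rng.uniform(size=n_inputs)
    return [np.clip(base + 0.1 * rng.standard_normal(n_inputs), 0, 1)
            for _ in range(n_steps)]

for _ in range(50):                      # 50 presentations of different "objects"
    trace = np.zeros(n_outputs)
    for x in transform_sequence():
        y = W @ x                                    # output-layer firing rates
        trace = (1 - delta) * y + delta * trace      # exponential memory trace
        W += eta * np.outer(trace, x)                # trace-modulated Hebbian update
        W /= np.linalg.norm(W, axis=1, keepdims=True)

print("trained weight matrix:", W.shape)
```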

  1. Perceptual organization of shape, color, shade, and lighting in visual and pictorial objects.

    Science.gov (United States)

    Pinna, Baingio

    2012-01-01

    The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of object perception and formation. Our hypothesis is that the main object properties are extracted in a sequential order, the same order in which these properties are also used by artists and by children of different ages to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

  2. Perceptual Organization of Shape, Color, Shade, and Lighting in Visual and Pictorial Objects

    Directory of Open Access Journals (Sweden)

    Baingio Pinna

    2012-06-01

    Full Text Available The main questions we asked in this work are the following: Where are representations of shape, color, depth, and lighting mostly located? Does their formation take time to develop? How do they contribute to determining and defining a visual object, and how do they differ? How do visual artists use them to create objects and scenes? Is the way artists use them related to the way we perceive them? To answer these questions, we studied the microgenetic development of object perception and formation. Our hypothesis is that the main object properties are extracted in a sequential order, the same order in which these properties are also used by artists and by children of different ages to paint objects. The results supported the microgenesis of object formation according to the following sequence: contours, color, shading, and lighting.

  3. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    Science.gov (United States)

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  4. Thickness and clearance visualization based on distance field of 3D objects

    Directory of Open Access Journals (Sweden)

    Masatomo Inui

    2015-07-01

    Full Text Available This paper proposes a novel method for visualizing the thickness and clearance of 3D objects in a polyhedral representation. The proposed method uses the distance field of the objects in the visualization. A parallel algorithm is developed for constructing the distance field of polyhedral objects using the GPU. The distance between a voxel and the surface polygons of the model is computed many times in the distance field construction. Similar sets of polygons are usually selected as close polygons for close voxels. By using this spatial coherence, a parallel algorithm is designed to compute the distances between a cluster of close voxels and the polygons selected by the culling operation so that the fast shared memory mechanism of the GPU can be fully utilized. The thickness/clearance of the objects is visualized by distributing points on the visible surfaces of the objects and painting them with a unique color corresponding to the thickness/clearance values at those points. A modified ray casting method is developed for computing the thickness/clearance using the distance field of the objects. A system based on these algorithms can compute the distance field of complex objects within a few minutes for most cases. After the distance field construction, thickness/clearance visualization at a near interactive rate is achieved.
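
    The following CPU-only Python sketch illustrates the distance-field idea on a voxelized toy object (a hollow box); the paper's GPU construction and ray-casting based thickness/clearance visualization are not reproduced, and the grid dimensions are arbitrary.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Voxelize a toy object: a solid box with an off-centre cavity (walls of varying thickness).
grid = np.zeros((64, 64, 64), dtype=bool)
grid[8:56, 8:56, 8:56] = True        # outer solid block
grid[14:50, 12:52, 20:44] = False    # carve out an off-centre cavity

# Distance field inside the material: distance (in voxels) to the nearest surface.
dist = distance_transform_edt(grid)

# Twice the distance at a material voxel is a lower bound on the local wall thickness;
# at medial (locally deepest) voxels it approximates the local thickness itself.
half_widths = dist[grid]
print("min/median/max distance to surface (voxels):",
      half_widths.min(), np.median(half_widths), half_widths.max())
print("approximate maximum wall thickness (voxels):", 2 * half_widths.max())
```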

  5. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness

    OpenAIRE

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this parad...

  6. Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.

    Science.gov (United States)

    Persuh, Marjan; Melara, Robert D

    2016-01-01

    In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  7. Barack Obama Blindness (BOB): Absence of visual awareness to a single object

    Directory of Open Access Journals (Sweden)

    Marjan Persuh

    2016-03-01

    Full Text Available In two experiments we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.

  8. Visual hull method for tomographic PIV measurement of flow around moving objects

    Energy Technology Data Exchange (ETDEWEB)

    Adhikari, D.; Longmire, E.K. [University of Minnesota, Department of Aerospace Engineering and Mechanics, Minneapolis, MN (United States)

    2012-10-15

    Tomographic particle image velocimetry (PIV) is a recently developed method to measure three components of velocity within a volumetric space. We present a visual hull technique that automates identification and masking of discrete objects within the measurement volume, and we apply existing tomographic PIV reconstruction software to measure the velocity surrounding the objects. The technique is demonstrated by considering flow around falling bodies of different shape with Reynolds number ~1,000. Acquired image sets are processed using separate routines to reconstruct both the volumetric mask around the object and the surrounding tracer particles. After particle reconstruction, the reconstructed object mask is used to remove any ghost particles that otherwise appear within the object volume. Velocity vectors corresponding with fluid motion can then be determined up to the boundary of the visual hull without being contaminated or affected by the neighboring object velocity. Although the visual hull method is not meant for precise tracking of objects, the reconstructed object volumes nevertheless can be used to estimate the object location and orientation at each time step. (orig.)

  9. Figure–ground organization and the emergence of proto-objects in the visual cortex

    OpenAIRE

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition their responses a...

  10. Mobile device geo-localization and object visualization in sensor networks

    Science.gov (United States)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multifunctional application design. The application applies different localization and visualization methods including the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.

  11. Visualization of the tire-soil interaction area by means of ObjectARX programming interface

    Science.gov (United States)

    Mueller, W.; Gruszczyński, M.; Raba, B.; Lewicki, A.; Przybył, K.; Zaborowicz, M.; Koszela, K.; Boniecki, P.

    2014-04-01

    The process of data visualization, which is important for data analysis, becomes problematic when large data sets generated via computer simulations are involved. This problem concerns, among others, the models that describe the geometry of tire-soil interaction. For the purpose of a graphical representation of this area and the implementation of various geometric calculations, the authors have developed a plug-in application for AutoCAD, based on the latest technologies, including ObjectARX, LINQ and the Visual Studio platform. The selected programming tools offer a wide variety of IT structures that enable data visualization and data analysis and are important, e.g., in model verification.

  12. Semantic and functional relationships among objects increase the capacity of visual working memory.

    Science.gov (United States)

    O'Donnell, Ryan E; Clement, Andrew; Brockmole, James R

    2018-04-12

    Visual working memory (VWM) has a limited capacity of approximately 3-4 visual objects. Current theories of VWM propose that a limited pool of resources can be flexibly allocated to objects, allowing them to be represented at varying levels of precision. Factors that influence the allocation of these resources, such as the complexity and perceptual grouping of objects, can thus affect the capacity of VWM. We sought to identify whether semantic and functional relationships between objects could influence the grouping of objects, thereby increasing the functional capacity of VWM. Observers viewed arrays of 8 to-be-remembered objects arranged into 4 pairs. We manipulated both the semantic association and functional interaction between the objects, then probed participants' memory for the arrays. When objects were semantically related, participants' memory for the arrays improved. Participants' memory further improved when semantically related objects were positioned to interact with each other. However, when we increased the spacing between the objects in each pair, the benefits of functional but not semantic relatedness were eliminated. These findings suggest that action-relevant properties of objects can increase the functional capacity of VWM, but only when objects are positioned to directly interact with each other. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Visual objects and universal meanings: AIDS posters and the politics of globalisation and history.

    Science.gov (United States)

    Stein, Claudia; Cooter, Roger

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum 'afterlife' as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today's application of aesthetic display for the purpose of making 'global connections' does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics.

  14. Use of interactive data visualization in multi-objective forest planning.

    Science.gov (United States)

    Haara, Arto; Pykäläinen, Jouni; Tolvanen, Anne; Kurttila, Mikko

    2018-03-15

    Common to multi-objective forest planning situations is that they all require comparisons, searches and evaluation among decision alternatives. Through these actions, the decision maker can learn from the information presented and thus make well-justified decisions. Interactive data visualization is an evolving approach that supports learning and decision making in multidimensional decision problems and planning processes. Data visualization contributes to the formation of mental images of the data, and this process is further boosted by allowing interaction with the data. In this study, we introduce a multi-objective forest planning decision problem framework and the corresponding characteristics of data. We utilize the framework with example planning data to illustrate and evaluate the potential of 14 interactive data visualization techniques to support multi-objective forest planning decisions. Furthermore, broader utilization possibilities of these techniques to incorporate the provisioning of ecosystem services into forest management and planning are discussed. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    Science.gov (United States)

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
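
    As a rough illustration of how such time-resolved decoding is commonly set up, the sketch below fits a separate cross-validated classifier at every time point of epoched EEG data and returns an accuracy time course; comparing time courses for predictable versus random motion then yields a latency estimate. The array layout, classifier choice, and function names are assumptions and do not reproduce the authors' pipeline.

```python
# Hedged sketch of time-resolved (per-timepoint) decoding of stimulus position.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def timewise_decoding(epochs, labels, cv=5):
    """epochs: (n_trials, n_channels, n_times) array; labels: position per trial.
    Returns mean cross-validated decoding accuracy at each time point."""
    _, _, n_times = epochs.shape
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    accuracy = np.empty(n_times)
    for t in range(n_times):
        accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
    return accuracy

# Shifting the accuracy curve for predictable-motion trials against the curve
# for random-motion trials (e.g., by the time it first exceeds chance) gives a
# latency difference analogous to the ~16 ms advantage reported above.
```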

  16. It's all connected: Pathways in visual object recognition and early noun learning.

    Science.gov (United States)

    Smith, Linda B

    2013-11-01

    A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  17. Single-trial multisensory memories affect later auditory and visual object discrimination.

    Science.gov (United States)

    Thelen, Antonia; Talsma, Durk; Murray, Micah M

    2015-05-01

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization and the equivalence of effects when memory discrimination was being performed in the visual vs. auditory modality were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence of a correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

  18. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    Science.gov (United States)

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  19. Ensemble coding remains accurate under object and spatial visual working memory load.

    Science.gov (United States)

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  20. Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    OpenAIRE

    Massouh, Nizar; Babiloni, Francesca; Tommasi, Tatiana; Young, Jay; Hawes, Nick; Caputo, Barbara

    2017-01-01

    Deep networks thrive when trained on large scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments which must train on the objects they encounter there...

  1. A Multi-Objective Approach to Visualize Proportions and Similarities Between Individuals by Rectangular Maps

    DEFF Research Database (Denmark)

    Carrizosa, Emilio; Guerrero, Vanesa; Morales, Dolores Romero

    In this paper we address the problem of visualizing the proportions and the similarities attached to a set of individuals. We represent this information using a rectangular map, i.e., a subdivision of a rectangle into rectangular portions so that each portion is associated with one individual...... area and adjacency requirements, this visualization problem is formulated as a three-objective Mixed Integer Nonlinear Problem. The first objective seeks to maximize the number of true adjacencies that the rectangular map is able to reproduce, the second one is to minimize the number of false...

  2. Figure–ground organization and the emergence of proto-objects in the visual cortex

    Science.gov (United States)

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex. PMID:26579062

  3. Crossmodal Activation of Visual Object Regions for Auditorily Presented Concrete Words

    Directory of Open Access Journals (Sweden)

    Jasper J F van den Bosch

    2011-10-01

    Full Text Available Dual-coding theory (Paivio, 1986) postulates that the human mind represents objects not just with an analogous, or semantic, code, but with a perceptual representation as well. Previous studies (e.g., Fiebach & Friederici, 2004) indicated that the modality of this representation is not necessarily the one that triggers the representation. The human visual cortex contains several regions, such as the Lateral Occipital Complex (LOC), that respond specifically to object stimuli. To investigate whether these principally visual representation regions are also recruited for auditory stimuli, we presented subjects with spoken words with specific, concrete meanings (‘car’) as well as words with abstract meanings (‘hope’). Their brain activity was measured with functional magnetic resonance imaging. Whole-brain contrasts showed overlap between regions differentially activated by words for concrete objects compared to words for abstract concepts and visual regions activated by a contrast of object versus non-object visual stimuli. We functionally localized LOC for individual subjects, and a preliminary analysis showed a trend for a concreteness effect in this region-of-interest at the group level. Appropriate further analysis might include connectivity and classification measures. These results can shed light on the role of crossmodal representations in cognition.

  4. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. Object classification is thus a challenging problem that has received extensive interest and has broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and convolutional neural networks (CNNs), as one family of deep learning methods, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism that operates when a person classifies objects. Therefore, inspired by the complete process through which humans classify different kinds of objects, we propose in this paper a new classification method that combines a visual attention model and a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use the CNN to simulate how humans select features and to extract the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. This approach has apparent advantages from a biological standpoint. Experimental results demonstrate that our method significantly improves classification efficiency.
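
    A minimal sketch of the general pipeline (a crude bottom-up saliency map selects an image region, and a pretrained CNN classifies that region) is given below. The difference-of-Gaussians saliency, the ResNet-18 backbone, and all function names are illustrative substitutions rather than the paper's components; the weights enum assumes a recent torchvision release, and images are assumed to be at least 224 px on each side.

```python
# Hedged sketch: attend (crude saliency) -> crop -> classify with a pretrained CNN.
import numpy as np
import torch
from PIL import Image
from scipy.ndimage import gaussian_filter
from torchvision import models, transforms

def saliency_map(gray):
    """Centre-surround (difference-of-Gaussians) saliency on a 2-D intensity image."""
    return np.abs(gaussian_filter(gray, 2) - gaussian_filter(gray, 16))

def most_salient_crop(img, size=224):
    """Crop a size x size window around the most salient pixel."""
    gray = np.asarray(img.convert("L"), dtype=float)
    y, x = np.unravel_index(np.argmax(saliency_map(gray)), gray.shape)
    left = int(np.clip(x - size // 2, 0, img.width - size))
    top = int(np.clip(y - size // 2, 0, img.height - size))
    return img.crop((left, top, left + size, top + size))

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
cnn = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

def classify_attended(path):
    crop = most_salient_crop(Image.open(path).convert("RGB"))
    with torch.no_grad():
        logits = cnn(preprocess(crop).unsqueeze(0))
    return int(logits.argmax(dim=1))   # ImageNet class index of the attended region
```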

  5. Structural similarity and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, Ian; Paulson, Olaf B

    2004-01-01

    It has been suggested that category-specific recognition disorders for natural objects may reflect that natural objects are more structurally (visually) similar than artefacts and therefore more difficult to recognize following brain damage. On this account one might expect a positive relationshi...

  6. Where vision meets memory: prefrontal-posterior networks for visual object constancy during categorization and recognition.

    Science.gov (United States)

    Schendan, Haline E; Stern, Chantal E

    2008-07-01

    Objects seen from unusual relative to more canonical views require more time to categorize and recognize, and, according to object model verification theories, additionally recruit prefrontal processes for cognitive control that interact with parietal processes for mental rotation. To test this using functional magnetic resonance imaging, people categorized and recognized known objects from unusual and canonical views. Canonical views activated some components of a default network more on categorization than recognition. Activation to unusual views showed that both ventral and dorsal visual pathways, and prefrontal cortex, have key roles in visual object constancy. Unusual views activated object-sensitive and mental rotation (and not saccade) regions in ventrocaudal intraparietal, transverse occipital, and inferotemporal sulci, and ventral premotor cortex for verification processes of model testing on any task. A collateral-lingual sulci "place" area activated for mental rotation, working memory, and unusual views on correct recognition and categorization trials to accomplish detailed spatial matching. Ventrolateral prefrontal cortex and object-sensitive lateral occipital sulcus activated for mental rotation and unusual views on categorization more than recognition, supporting verification processes of model prediction. This visual knowledge framework integrates vision and memory theories to explain how distinct prefrontal-posterior networks enable meaningful interactions with objects in diverse situations.

  7. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  8. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    Science.gov (United States)

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that the attentional processes studied with the multiple object tracking paradigm are commonly argued to match the attentional processing involved in real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means of studying the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradictory) theories of attentive tracking have been proposed in the almost 30 years since its first report, and a large body of research has been conducted to test these theories. Given the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, as well as links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  9. Navon's classical paradigm concerning local and global processing relates systematically to visual object classification performance.

    Science.gov (United States)

    Gerlach, Christian; Poirel, Nicolas

    2018-01-10

    Forty years ago, David Navon tried to tackle a central problem in psychology concerning the time course of perceptual processing: Do we first see the details (local level) followed by the overall outlay (global level), or is it rather the other way around? He did this by developing a now classical paradigm involving the presentation of compound stimuli: large letters composed of smaller letters. Despite the usefulness of this paradigm, it remains uncertain whether effects found with compound stimuli relate directly to visual object recognition. This uncertainty arises because compound stimuli are not actual objects but rather formations of elements, and because the elements that form the global shape of compound stimuli are not features of the global shape but rather objects in their own right. To examine the relationship between performance on Navon's paradigm and visual object processing, we derived two indexes from Navon's paradigm that reflect different aspects of the relationship between global and local processing. We find that individual differences on these indexes can explain a considerable amount of variance in two standard object classification paradigms, object decision and superordinate categorization, suggesting that Navon's paradigm does relate to visual object processing.
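
    For illustration, two indexes of this general kind can be computed from condition-mean response times as sketched below; the exact definitions used in the study may differ, and the dictionary keys are assumptions.

```python
# Hedged sketch: two summary indexes from Navon-type mean response times (ms).
# 'rt' maps (target level, congruency) to a mean RT; the keys are illustrative.
def navon_indexes(rt):
    # Global advantage: how much faster global targets are identified than local ones.
    global_advantage = (
        (rt[("local", "congruent")] + rt[("local", "incongruent")]) / 2.0
        - (rt[("global", "congruent")] + rt[("global", "incongruent")]) / 2.0
    )
    # Global-to-local interference: slowing of local responses by incongruent global letters.
    interference = rt[("local", "incongruent")] - rt[("local", "congruent")]
    return global_advantage, interference

# Example:
# navon_indexes({("global", "congruent"): 520, ("global", "incongruent"): 535,
#                ("local", "congruent"): 560, ("local", "incongruent"): 610})
# -> (57.5, 50.0): a ~58 ms global advantage and 50 ms of interference.
```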

  10. Visual perspective in autobiographical memories: reliability, consistency, and relationship to objective memory performance.

    Science.gov (United States)

    Siedlecki, Karen L

    2015-01-01

    Visual perspective in autobiographical memories was examined in terms of reliability, consistency, and relationship to objective memory performance in a sample of 99 individuals. Autobiographical memories may be recalled from two visual perspectives--a field perspective in which individuals experience the memory through their own eyes, or an observer perspective in which individuals experience the memory from the viewpoint of an observer in which they can see themselves. Participants recalled nine word-cued memories that differed in emotional valence (positive, negative and neutral) and rated their memories on 18 scales. Results indicate that visual perspective was the most reliable memory characteristic overall and is consistently related to emotional intensity at the time of recall and amount of emotion experienced during the memory. Visual perspective is unrelated to memory for words, stories, abstract line drawings or faces.

  11. Figure-ground organization and the emergence of proto-objects in the visual cortex

    Directory of Open Access Journals (Sweden)

    Rüdiger von der Heydt

    2015-11-01

    Full Text Available A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the classical receptive field. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objecthood, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  12. [Recognition of visual objects under forward masking. Effects of categorical similarity of test and masking stimuli].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during the recognition of two categories of images--animals and nonliving objects--under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of target and masking stimuli. Recognition accuracy was lowest and response time slowest when the target and masking stimuli belonged to the same category, and this was combined with a high dispersion of response times. The effects were clearer in the animal-recognition task than in the recognition of nonliving objects. We suppose that these effects are connected with interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  13. Retrospective Cues Based on Object Features Improve Visual Working Memory Performance in Older Adults

    OpenAIRE

    Gilchrist, Amanda L.; Duarte, Audrey; Verhaeghen, Paul

    2015-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were either presented with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an u...

  14. Object integration requires attention: visual search for Kanizsa figures in parietal extinction

    OpenAIRE

    Gögler, N.; Finke, K.; Keller, I.; Muller, Hermann J.; Conci, M.

    2016-01-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective att...

  15. Do object refixations during scene viewing indicate rehearsal in visual working memory?

    Science.gov (United States)

    Zelinsky, Gregory J; Loschky, Lester C; Dickinson, Christopher A

    2011-05-01

    Do refixations serve a rehearsal function in visual working memory (VWM)? We analyzed refixations from observers freely viewing multiobject scenes. An eyetracker was used to limit the viewing of a scene to a specified number of objects fixated after the target (intervening objects), followed by a four-alternative forced choice recognition test. Results showed that the probability of target refixation increased with the number of fixated intervening objects, and these refixations produced a 16% accuracy benefit over the first five intervening-object conditions. Additionally, refixations most frequently occurred after fixations on only one to two other objects, regardless of the intervening-object condition. These behaviors could not be explained by random or minimally constrained computational models; a VWM component was required to completely describe these data. We explain these findings in terms of a monitor-refixate rehearsal system: The activations of object representations in VWM are monitored, with refixations occurring when these activations decrease suddenly.

  16. Methodology for the Efficient Progressive Distribution and Visualization of 3D Building Objects

    Directory of Open Access Journals (Sweden)

    Bo Mao

    2016-10-01

    Full Text Available Three-dimensional (3D) city models have been applied in a variety of fields. One of the main problems in 3D city model utilization, however, is the large volume of data. In this paper, a method is proposed to generalize the 3D building objects in 3D city models at different levels of detail, and to combine multiple Levels of Detail (LODs) for a progressive distribution and visualization of the city models. First, an extended structure for multiple LODs of building objects, BuildingTree, is introduced that supports both single buildings and building groups; second, constructive solid geometry (CSG) representations of buildings are created and generalized. Finally, the BuildingTree is stored in the NoSQL database MongoDB for dynamic visualization requests. The experimental results indicate that the proposed progressive method can efficiently visualize 3D city models, especially for large areas.
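
    The multi-LOD idea can be pictured with a small tree in which each building or building group stores several geometric representations ordered from coarse to fine, and a client streams them progressively. The field names and streaming logic below are illustrative assumptions, not the paper's BuildingTree schema or its MongoDB document layout.

```python
# Hedged sketch of a multi-LOD building tree with progressive (coarse-to-fine) streaming.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class BuildingNode:
    building_id: str
    lods: Dict[int, dict] = field(default_factory=dict)            # LOD index -> geometry blob
    children: List["BuildingNode"] = field(default_factory=list)   # grouped buildings

    def add_lod(self, level: int, geometry: dict) -> None:
        self.lods[level] = geometry

    def geometry_at(self, level: int) -> Optional[dict]:
        """Finest stored representation not exceeding the requested LOD."""
        usable = [l for l in self.lods if l <= level]
        return self.lods[max(usable)] if usable else None

def progressive_stream(node: BuildingNode, max_level: int):
    """Yield geometry coarse-to-fine so a client can render immediately and refine later."""
    for level in sorted(l for l in node.lods if l <= max_level):
        yield node.building_id, level, node.lods[level]
    for child in node.children:
        yield from progressive_stream(child, max_level)
```

    Persisting each node as one document (for example via pymongo's insert_one) would then let a server answer visualization requests level by level, matching the dynamic visualization requests described above.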

  17. The ventral visual pathway: an expanded neural framework for the processing of object quality.

    Science.gov (United States)

    Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Ungerleider, Leslie G; Mishkin, Mortimer

    2013-01-01

    Since the original characterization of the ventral visual pathway, our knowledge of its neuroanatomy, functional properties, and extrinsic targets has grown considerably. Here we synthesize this recent evidence and propose that the ventral pathway is best understood as a recurrent occipitotemporal network containing neural representations of object quality both utilized and constrained by at least six distinct cortical and subcortical systems. Each system serves its own specialized behavioral, cognitive, or affective function, collectively providing the raison d'être for the ventral visual pathway. This expanded framework contrasts with the depiction of the ventral visual pathway as a largely serial staged hierarchy culminating in singular object representations and more parsimoniously incorporates attentional, contextual, and feedback effects. Published by Elsevier Ltd.

  18. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity.

    Directory of Open Access Journals (Sweden)

    Jyrki Ahveninen

    Full Text Available Given that both auditory and visual systems have anatomically separate object identification ("what") and spatial ("where") pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory "what" vs. "where" attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalographic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic ("what") vs. spatial ("where") aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7-13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri; lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400-600 ms after the onset of the second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity ("what") vs. sound location ("where"). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during "what" vs. "where" auditory attention.
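
    The sustained alpha-power measure at the heart of this analysis can be illustrated with a standard Welch estimate over the interval between stimulus pairs, as in the short sketch below; the parameters and function names are illustrative and do not reproduce the authors' source-space MEG pipeline.

```python
# Hedged sketch: integrated alpha-band (7-13 Hz) power for one sensor/source time series.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, sfreq, band=(7.0, 13.0)):
    """signal: 1-D array for the inter-pair interval; sfreq: sampling rate in Hz."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=min(len(signal), int(2 * sfreq)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])   # power integrated over the alpha band
```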

  19. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore, inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition, the book shows the use of these models for the detection of human poses in a project on gait analysis. Activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  20. The neural basis of precise visual short-term memory for complex recognisable objects.

    Science.gov (United States)

    Veldsman, Michele; Mitchell, Daniel J; Cusack, Rhodri

    2017-10-01

    Recent evidence suggests that visual short-term memory (VSTM) capacity estimated using simple objects, such as colours and oriented bars, may not generalise well to more naturalistic stimuli. More visual detail can be stored in VSTM when complex, recognisable objects are maintained compared to simple objects. It is not yet known if it is recognisability that enhances memory precision, nor whether maintenance of recognisable objects is achieved with the same network of brain regions supporting maintenance of simple objects. We used a novel stimulus generation method to parametrically warp photographic images along a continuum, allowing separate estimation of the precision of memory representations and the number of items retained. The stimulus generation method was also designed to create unrecognisable, though perceptually matched, stimuli, to investigate the impact of recognisability on VSTM. We adapted the widely-used change detection and continuous report paradigms for use with complex, photographic images. Across three functional magnetic resonance imaging (fMRI) experiments, we demonstrated greater precision for recognisable objects in VSTM compared to unrecognisable objects. This clear behavioural advantage was not the result of recruitment of additional brain regions, or of stronger mean activity within the core network. Representational similarity analysis revealed greater variability across item repetitions in the representations of recognisable, compared to unrecognisable complex objects. We therefore propose that a richer range of neural representations support VSTM for complex recognisable objects. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Visual long-term memory has a massive storage capacity for object details.

    Science.gov (United States)

    Brady, Timothy F; Konkle, Talia; Alvarez, George A; Oliva, Aude

    2008-09-23

    One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images and indicated which of the two they had seen. The previously viewed item could be paired with either an object from a novel category, an object of the same basic-level category, or the same object in a different state or pose. Performance in each of these conditions was remarkably high (92%, 88%, and 87%, respectively), suggesting that participants successfully maintained detailed representations of thousands of images. These results have implications for cognitive models, in which capacity limitations impose a primary computational constraint (e.g., models of object recognition), and pose a challenge to neural models of memory storage and retrieval, which must be able to account for such a large and detailed storage capacity.

  2. Visual Attention to Competing Social and Object Images by Preschool Children with Autism Spectrum Disorder

    Science.gov (United States)

    Sasson, Noah J.; Touchstone, Emily W.

    2014-01-01

    Eye tracking studies of young children with autism spectrum disorder (ASD) report a reduction in social attention and an increase in visual attention to non-social stimuli, including objects related to circumscribed interests (CI) (e.g., trains). In the current study, fifteen preschoolers with ASD and 15 typically developing controls matched on…

  3. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills With Executive Function and Social Behavior.

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-12-01

    The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom, during the preschool year. Ninety-two children aged 3 to 5 years old (M age = 4.31 years) were recruited to participate. Comprehensive measures of visual-motor integration skills, object manipulation skills, executive function, and social behaviors were administered in the fall and spring of the preschool year. Our findings indicated that children who had better visual-motor integration skills in the fall had better executive function scores (B = 0.47 [0.20]) after controlling for gender, Head Start status, and site location, but not after controlling for children's baseline levels of executive function. In addition, children who demonstrated better object manipulation skills in the fall showed significantly stronger social behavior in their classrooms (as rated by teachers) in the spring, including more self-control (B = 0.03 [0.00]), after controlling for social behavior in the fall and other covariates. Children's visual-motor integration and object manipulation skills in the fall have modest to moderate relations with executive function and social behaviors later in the preschool year. These findings have implications for early learning initiatives and school readiness.

  4. Humans use visual and remembered information about object location to plan pointing movements

    NARCIS (Netherlands)

    Brouwer, A.-M.; Knill, D.C.

    2009-01-01

    We investigated whether humans use a target's remembered location to plan reaching movements to targets according to the relative reliabilities of visual and remembered information. Using their index finger, subjects moved a virtual object from one side of a table to the other, and then went back to

  5. Visual Short-Term Memory Capacity for Simple and Complex Objects

    Science.gov (United States)

    Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto

    2010-01-01

    Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) that is not…

  6. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level consistent with theories of human cognition concerning the visual identification of single and multiple attentional targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
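
    One simple way to picture the combination of bottom-up and top-down evidence is a weighted score per candidate object, as in the hedged sketch below; the weighting scheme, the view-alignment heuristic, and all names are assumptions rather than the framework's actual computation.

```python
# Hedged sketch: rank candidate objects by bottom-up saliency plus a top-down
# term derived from the user's position and view direction.
import numpy as np

def attention_scores(objects, view_pos, view_dir, w_bottom_up=0.5, w_top_down=0.5):
    """objects: list of dicts with 'position' (3-vector) and 'saliency' in [0, 1]."""
    view_dir = np.asarray(view_dir, float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    scores = []
    for obj in objects:
        to_obj = np.asarray(obj["position"], float) - np.asarray(view_pos, float)
        dist = np.linalg.norm(to_obj)
        alignment = max(0.0, float(np.dot(to_obj / (dist + 1e-9), view_dir)))
        top_down = alignment / (1.0 + dist)      # favour objects ahead of and near the user
        scores.append(w_bottom_up * obj["saliency"] + w_top_down * top_down)
    return np.array(scores)

# The highest-scoring object would be treated as the currently attended one,
# e.g. to render it at a higher level of detail than the rest of the scene.
```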

  7. Relations of Preschoolers' Visual-Motor and Object Manipulation Skills with Executive Function and Social Behavior

    Science.gov (United States)

    MacDonald, Megan; Lipscomb, Shannon; McClelland, Megan M.; Duncan, Rob; Becker, Derek; Anderson, Kim; Kile, Molly

    2016-01-01

    Purpose: The purpose of this article was to examine specific linkages between early visual-motor integration skills and executive function, as well as between early object manipulation skills and social behaviors in the classroom during the preschool year. Method: Ninety-two children aged 3 to 5 years old (M[subscript age] = 4.31 years) were…

  8. Visualization: A Tool for Enhancing Students' Concept Images of Basic Object-Oriented Concepts

    Science.gov (United States)

    Cetin, Ibrahim

    2013-01-01

    The purpose of this study was twofold: to investigate students' concept images about class, object, and their relationship and to help them enhance their learning of these notions with a visualization tool. Fifty-six second-year university students participated in the study. To investigate his/her concept images, the researcher developed a survey…

  9. Role of early visual cortex in trans-saccadic memory of object features.

    Science.gov (United States)

    Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas

    2015-08-01

    Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.

  10. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    Science.gov (United States)

    Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B; Franklin, Anna

    2016-01-01

    The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.

  11. Colour Terms Affect Detection of Colour and Colour-Associated Objects Suppressed from Visual Awareness.

    Directory of Open Access Journals (Sweden)

    Lewis Forder

    Full Text Available The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d') and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object's stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain.

  12. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    Full Text Available The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.

  14. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture.

    Science.gov (United States)

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-03-09

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, leading to a false-positive result. The current study conducted a strict examination of OBE in VWM, by probing whether irrelevant-features guided the deployment of attention in visual search. The participants memorized an object's colour yet ignored shape and concurrently performed a visual-search task. They searched for a target line among distractor lines, each embedded within a different object. One object in the search display could match the shape, colour, or both dimensions of the memory item, but this object never contained the target line. Relative to a neutral baseline, where there was no match between the memory and search displays, search time was significantly prolonged in all match conditions, regardless of whether the memory item was displayed for 100 or 1000 ms. These results suggest that task-irrelevant shape was extracted into VWM, supporting OBE in VWM.

  15. How high is visual short-term memory capacity for object layout?

    Science.gov (United States)

    Sanocki, Thomas; Sellers, Eric; Mittelstadt, Jeff; Sulman, Noah

    2010-05-01

    Previous research measuring visual short-term memory (VSTM) suggests that the capacity for representing the layout of objects is fairly high. In four experiments, we further explored the capacity of VSTM for layout of objects, using the change detection method. In Experiment 1, participants retained most of the elements in displays of 4 to 8 elements. In Experiments 2 and 3, with up to 20 elements, participants retained many of them, reaching a capacity of 13.4 stimulus elements. In Experiment 4, participants retained much of a complex naturalistic scene. In most cases, increasing display size caused only modest reductions in performance, consistent with the idea of configural, variable-resolution grouping. The results indicate that participants can retain a substantial amount of scene layout information (objects and locations) in short-term memory. We propose that this is a case of remote visual understanding, where observers' ability to integrate information from a scene is paramount.

  16. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    Science.gov (United States)

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    The purpose of this article was to evaluate factors that influence the listening effort experienced when processing speech for people with hearing loss. Specifically, the change in listening effort resulting from introducing hearing aids, visual cues, and background noise was evaluated. An additional exploratory aim was to investigate the possible relationships between the magnitude of listening effort change and individual listeners' working memory capacity, verbal processing speed, or lipreading skill. Twenty-seven participants with bilateral sensorineural hearing loss were fitted with linear behind-the-ear hearing aids and tested using a dual-task paradigm designed to evaluate listening effort. The primary task was monosyllable word recognition and the secondary task was a visual reaction time task. The test conditions varied by hearing aids (unaided, aided), visual cues (auditory-only, auditory-visual), and background noise (present, absent). For all participants, the signal to noise ratio was set individually so that speech recognition performance in noise was approximately 60% in both the auditory-only and auditory-visual conditions. In addition to measures of listening effort, working memory capacity, verbal processing speed, and lipreading ability were measured using the Automated Operational Span Task, a Lexical Decision Task, and the Revised Shortened Utley Lipreading Test, respectively. In general, the effects measured using the objective measure of listening effort were small (~10 msec). Results indicated that background noise increased listening effort, and hearing aids reduced listening effort, while visual cues did not influence listening effort. With regard to the individual variables, verbal processing speed was negatively correlated with hearing aid benefit for listening effort; faster processors were less likely to derive benefit. Working memory capacity, verbal processing speed, and lipreading ability were related to benefit from visual cues. No

  18. An object-oriented framework for medical image registration, fusion, and visualization.

    Science.gov (United States)

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are illustrated to show its effectiveness: the first one is for volume image grouping and re-sampling, the second one is for 2D registration and fusion, and the last one is for visualization of single images as well as registered volume images.
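    As a rough illustration of the model-view-controller separation described above, the sketch below keeps registration state (model), fused-image rendering (view), and user actions (controller) in separate classes. The class and method names are illustrative assumptions, not the framework's actual API.

```python
class VolumeModel:
    """Model: holds image volumes and the current registration transform."""
    def __init__(self):
        self.volumes, self.transform = [], None
    def register(self, fixed, moving):
        # Placeholder for a real registration step (e.g. rigid alignment).
        self.transform = ("rigid", fixed, moving)

class FusionView:
    """View: renders the fused/registered volumes."""
    def render(self, model):
        print("rendering", len(model.volumes), "volume(s) with transform", model.transform)

class Controller:
    """Controller: routes user actions to the model and refreshes the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def load_and_fuse(self, fixed, moving):
        self.model.volumes += [fixed, moving]
        self.model.register(fixed, moving)
        self.view.render(self.model)

# Hypothetical file names, for demonstration only.
Controller(VolumeModel(), FusionView()).load_and_fuse("CT.nii", "PET.nii")
```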

  19. Finding the Correspondence of Audio-Visual Events by Object Manipulation

    Science.gov (United States)

    Nishibori, Kento; Takeuchi, Yoshinori; Matsumoto, Tetsuya; Kudo, Hiroaki; Ohnishi, Noboru

    A human being understands the objects in the environment by integrating information obtained through the senses of sight, hearing and touch. In this integration, active manipulation of objects plays an important role. We propose a method for finding the correspondence of audio-visual events by manipulating an object. The method uses the general grouping rules of Gestalt psychology, i.e. “simultaneity” and “similarity” among the motion command, sound onsets and the motion of the object in images. In experiments, we used a microphone, a camera, and a robot with a hand manipulator. The robot grasps an object such as a bell and shakes it, or grasps an object such as a stick and beats a drum, in a periodic or non-periodic motion. The object then emits periodic or non-periodic events. To create a more realistic scenario, we put another event source (a metronome) in the environment. As a result, we obtained a success rate of 73.8 percent in finding the correspondence between audio-visual events (afferent signals) that relate to robot motion (efferent signal).

  20. Prior Knowledge about Objects Determines Neural Color Representation in Human Visual Cortex.

    Science.gov (United States)

    Vandenbroucke, A R E; Fahrenfort, J J; Meuwese, J D I; Scholte, H S; Lamme, V A F

    2016-04-01

    To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects ( Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural response of veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Transformation-tolerant object recognition in rats revealed by visual priming.

    Science.gov (United States)

    Tafazoli, Sina; Di Filippo, Alessandro; Zoccolan, Davide

    2012-01-04

    Successful use of rodents as models for studying object vision crucially depends on the ability of their visual system to construct representations of visual objects that tolerate (i.e., remain relatively unchanged with respect to) the tremendous changes in object appearance produced, for instance, by size and viewpoint variation. Whether this is the case is still controversial, despite some recent demonstration of transformation-tolerant object recognition in rats. In fact, it remains unknown to what extent such a tolerant recognition has a spontaneous, perceptual basis, or, alternatively, mainly reflects learning of arbitrary associative relations among trained object appearances. In this study, we addressed this question by training rats to categorize a continuum of morph objects resulting from blending two object prototypes. The resulting psychometric curve (reporting the proportion of responses to one prototype along the morph line) served as a reference when, in a second phase of the experiment, either prototype was briefly presented as a prime, immediately before a test morph object. The resulting shift of the psychometric curve showed that recognition became biased toward the identity of the prime. Critically, this bias was observed also when the primes were transformed along a variety of dimensions (i.e., size, position, viewpoint, and their combination) that the animals had never experienced before. These results indicate that rats spontaneously perceive different views/appearances of an object as similar (i.e., as instances of the same object) and argue for the existence of neuronal substrates underlying formation of transformation-tolerant object representations in rats.

  2. BOLD repetition decreases in object-responsive ventral visual areas depend on spatial attention.

    Science.gov (United States)

    Eger, E; Henson, R N A; Driver, J; Dolan, R J

    2004-08-01

    Functional imaging studies of priming-related repetition phenomena have become widely used to study neural object representation. Although blood oxygenation level-dependent (BOLD) repetition decreases can sometimes be observed without awareness of repetition, any role for spatial attention in BOLD repetition effects remains largely unknown. We used fMRI in 13 healthy subjects to test whether BOLD repetition decreases for repeated objects in ventral visual cortices depend on allocation of spatial attention to the prime. Subjects performed a size-judgment task on a probe object that had been attended or ignored in a preceding prime display of 2 lateralized objects. Reaction times showed faster responses when the probe was the same object as the attended prime, independent of the view tested (identical vs. mirror image). No behavioral effect was evident from unattended primes. BOLD repetition decreases for attended primes were found in lateral occipital and fusiform regions bilaterally, which generalized across identical and mirror-image repeats. No repetition decreases were observed for ignored primes. Our results suggest a critical role for attention in achieving visual representations of objects that lead to both BOLD signal decreases and behavioral priming on repeated presentation.

  3. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    Science.gov (United States)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study

  4. 3D MODELLING AND INTERACTIVE WEB-BASED VISUALIZATION OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    M. N. Koeva

    2016-06-01

    Full Text Available Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria – a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This

  5. Retrospective cues based on object features improve visual working memory performance in older adults.

    Science.gov (United States)

    Gilchrist, Amanda L; Duarte, Audrey; Verhaeghen, Paul

    2016-01-01

    Research with younger adults has shown that retrospective cues can be used to orient top-down attention toward relevant items in working memory. We examined whether older adults could take advantage of these cues to improve memory performance. Younger and older adults were presented with visual arrays of five colored shapes; during maintenance, participants were presented either with an informative cue based on an object feature (here, object shape or color) that would be probed, or with an uninformative, neutral cue. Although older adults were less accurate overall, both age groups benefited from the presentation of an informative, feature-based cue relative to a neutral cue. Surprisingly, we also observed differences in the effectiveness of shape versus color cues and their effects upon post-cue memory load. These results suggest that older adults can use top-down attention to remove irrelevant items from visual working memory, provided that task-relevant features function as cues.

  6. Object-based Encoding in Visual Working Memory: Evidence from Memory-driven Attentional Capture

    OpenAIRE

    Gao, Zaifeng; Yu, Shixian; Zhu, Chengfeng; Shui, Rende; Weng, Xuchu; Li, Peng; Shen, Mowei

    2016-01-01

    Visual working memory (VWM) adopts a specific manner of object-based encoding (OBE) to extract perceptual information: Whenever one feature-dimension is selected for entry into VWM, the others are also extracted. Currently most studies revealing OBE probed an 'irrelevant-change distracting effect', where changes of irrelevant-features dramatically affected the performance of the target feature. However, the existence of irrelevant-feature change may affect participants' processing manner, lea...

  7. Visual marking and change blindness: moving occluders and transient masks neutralize shape changes to ignored objects

    OpenAIRE

    Watson, Derrick G.; Kunar, Melina A.

    2010-01-01

    Visual search efficiency improves by presenting (previewing) one set of distractors before the target and remaining distractor items (D. G. Watson & G. W. Humphreys, 1997). Previous work has shown that this preview benefit is abolished if the old items change their shape when the new items are added (e.g., D. G. Watson & G. W. Humphreys, 2002). Here we present 5 experiments that examined whether such object changes are still effective in recapturing attention if the changes occur while the pr...

  8. Object integration requires attention: Visual search for Kanizsa figures in parietal extinction.

    Science.gov (United States)

    Gögler, Nadine; Finke, Kathrin; Keller, Ingo; Müller, Hermann J; Conci, Markus

    2016-11-01

    The contribution of selective attention to object integration is a topic of debate: integration of parts into coherent wholes, such as in Kanizsa figures, is thought to arise either from pre-attentive, automatic coding processes or from higher-order processes involving selective attention. Previous studies have attempted to examine the role of selective attention in object integration either by employing visual search paradigms or by studying patients with unilateral deficits in selective attention. Here, we combined these two approaches to investigate object integration in visual search in a group of five patients with left-sided parietal extinction. Our search paradigm was designed to assess the effect of left- and right-grouped nontargets on detecting a Kanizsa target square. The results revealed comparable reaction time (RT) performance in patients and controls when they were presented with displays consisting of a single to-be-grouped item that had to be classified as target vs. nontarget. However, when display size increased to two items, patients showed an extinction-specific pattern of enhanced RT costs for nontargets that induced a partial shape grouping on the right, i.e., in the attended hemifield (relative to the ungrouped baseline). Together, these findings demonstrate a competitive advantage for right-grouped objects, which in turn indicates that in parietal extinction, attentional competition between objects particularly limits integration processes in the contralesional, i.e., left hemifield. These findings imply a crucial contribution of selective attentional resources to visual object integration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Visual Objects and Universal Meanings: AIDS Posters and the Politics of Globalisation and History

    Science.gov (United States)

    STEIN, CLAUDIA; COOTER, ROGER

    2011-01-01

    Drawing on recent visual and spatial turns in history writing, this paper considers AIDS posters from the perspective of their museum ‘afterlife’ as collected material objects. Museum spaces serve changing political and epistemological projects, and the visual objects they house are not immune from them. A recent globally themed exhibition of AIDS posters at an arts and crafts museum in Hamburg is cited in illustration. The exhibition also serves to draw attention to institutional continuities in collecting agendas. Revealed, contrary to postmodernist expectations, is how today’s application of aesthetic display for the purpose of making ‘global connections’ does not radically break with the virtues and morals attached to the visual at the end of the nineteenth century. The historicisation of such objects needs to take into account this complicated mix of change and continuity in aesthetic concepts and political inscriptions. Otherwise, historians fall prey to seductive aesthetics without being aware of the politics of them. This article submits that aesthetics is politics. PMID:23752866

  10. Sex differences in visual realism in drawings of animate and inanimate objects.

    Science.gov (United States)

    Lange-Küttner, Chris

    2011-10-01

    Sex differences in a visually realistic drawing style were examined using the model of a curvy cup as an inanimate object, and the Draw-A-Person test (DAP) as a task involving animate objects, with 7- to 12-year-old children (N = 60; 30 boys). Accurately drawing the internal detail of the cup--indicating interest in a depth feature--was not dependent on age in boys, but only in girls, as 7-year-old boys were already engaging with this cup feature. However, the age effect of the correct omission of an occluded handle--indicating a transition from realism in terms of function (intellectual realism) to one of appearance (visual realism)--was the same for both sexes. The correct omission of the occluded handle was correlated with bilingualism and drawing the internal cup detail in girls, but with drawing the silhouette contour of the cup in boys. Because a figure's silhouette enables object identification from a distance, while perception of detail and language occurs in nearer space, it was concluded that boys and girls may differ in the way they conceptualize depth in pictorial space, rather than in visual realism as such.

  11. Studying the added value of visual attention in objective image quality metrics based on eye movement data

    NARCIS (Netherlands)

    Liu, H.; Heynderickx, I.E.J.

    2009-01-01

    Current research on image quality assessment tends to include visual attention in objective metrics to further enhance their performance. A variety of computational models of visual attention are implemented in different metrics, but their accuracy in representing human visual attention is not fully

  12. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    Science.gov (United States)

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  13. Short-term storage capacity for visual objects depends on expertise

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik; Kyllingsbæk, Søren

    2012-01-01

    Visual short-term memory (VSTM) has traditionally been thought to have a very limited capacity of around 3–4 objects. However, recently several researchers have argued that VSTM may be limited in the amount of information retained rather than by a specific number of objects. Here we present a study...... of the effect of long-term practice on VSTM capacity. We investigated four age groups ranging from pre-school children to adults and measured the change in VSTM capacity for letters and pictures. We found a clear increase in VSTM capacity for letters with age but not for pictures. Our results indicate that VSTM...

  14. Effects of dividing attention during encoding on perceptual priming of unfamiliar visual objects

    Science.gov (United States)

    Soldan, Anja; Mangels, Jennifer A.; Cooper, Lynn A.

    2008-01-01

    According to the distractor-selection hypothesis (Mulligan, 2003), dividing attention during encoding reduces perceptual priming when responses to non-critical (i.e., distractor) stimuli are selected frequently and simultaneously with critical stimulus encoding. Because direct support for this hypothesis comes exclusively from studies using familiar word stimuli, the present study tested whether the predictions of the distractor-selection hypothesis extend to perceptual priming of unfamiliar visual objects using the possible/impossible object-decision test. Consistent with the distractor-selection hypothesis, Experiments 1 and 2 found no reduction in priming when the non-critical stimuli were presented infrequently and non-synchronously with the critical target stimuli, even though explicit recognition memory was reduced. In Experiment 3, non-critical stimuli were presented frequently and simultaneously during encoding of critical stimuli; however, no decrement in priming was detected, even when encoding time was reduced. These results suggest that priming in the possible/impossible object-decision test is relatively immune to reductions in central attention and that not all aspects of the distractor-selection hypothesis generalize to priming of unfamiliar visual objects. Implications for theoretical models of object-decision priming are discussed. PMID:18821167

  15. Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?

    Science.gov (United States)

    Uttal, David; Franconeri, Steven

    2016-01-01

    Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects—the shift account of relation processing—which states that relations such as ‘above’ or ‘below’ are extracted by shifting visual attention upward or downward in space. If so, then shifts of attention should improve the representation of spatial relations, compared to a control condition of identity memory. Participants viewed a pair of briefly flashed objects and were then tested on either the relative spatial relation or identity of one of those objects. Using eye tracking to reveal participants’ voluntary shifts of attention over time, we found that when initial fixation was on neither object, relational memory showed an absolute advantage for the object following an attention shift, while identity memory showed no advantage for either object. This result is consistent with the shift account of relation processing. When initial fixation began on one of the objects, identity memory strongly benefited this fixated object, while relational memory only showed a relative benefit for objects following an attention shift. This result is also consistent, although not as uniquely, with the shift account of relation processing. Taken together, we suggest that the attention shift account provides a mechanistic explanation for the overall results. This account can potentially serve as the common mechanism underlying both linguistic and perceptual representations of spatial relations. PMID:27695104

  16. Spatial constancy of attention across eye movements is mediated by the presence of visual objects.

    Science.gov (United States)

    Lisi, Matteo; Cavanagh, Patrick; Zorzi, Marco

    2015-05-01

    Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results.

  17. Evidence of gradual loss of precision for simple features and complex objects in visual working memory.

    Science.gov (United States)

    Rademaker, Rosanne L; Park, Young Eun; Sack, Alexander T; Tong, Frank

    2018-03-01

    Previous studies have suggested that people can maintain prioritized items in visual working memory for many seconds, with negligible loss of information over time. Such findings imply that working memory representations are robust to the potential contaminating effects of internal noise. However, once visual information is encoded into working memory, one might expect it to inevitably begin degrading over time, as this actively maintained information is no longer tethered to the original perceptual input. Here, we examined this issue by evaluating working memory for single central presentations of an oriented grating, color patch, or face stimulus, across a range of delay periods (1, 3, 6, or 12 s). We applied a mixture-model analysis to distinguish changes in memory precision over time from changes in the frequency of outlier responses that resemble random guesses. For all 3 types of stimuli, participants exhibited a clear and consistent decline in the precision of working memory as a function of temporal delay, as well as a modest increase in guessing-related responses for colored patches and face stimuli. We observed a similar loss of precision over time while controlling for temporal distinctiveness. Our results demonstrate that visual working memory is far from lossless: while basic visual features and complex objects can be maintained in a quite stable manner over time, these representations are still subject to noise accumulation and complete termination. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
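    The mixture-model analysis referred to above can be illustrated with the standard formulation in which a response error arises either from a noisy memory representation or from a random guess. The sketch below is a generic version of that model (a von Mises memory component plus a uniform guess component); the parameter values are illustrative and are not the study's estimates.

```python
import numpy as np
from scipy.stats import vonmises

def mixture_pdf(error, guess_rate, kappa):
    # Weighted mix of a precision-limited (von Mises) memory component and a
    # uniform guessing component over circular error space; kappa indexes
    # memory precision, guess_rate the probability of a random response.
    return (1 - guess_rate) * vonmises.pdf(error, kappa) + guess_rate / (2 * np.pi)

errors = np.deg2rad([2.0, -5.0, 40.0, 170.0])   # hypothetical response-minus-target errors
print(mixture_pdf(errors, guess_rate=0.1, kappa=8.0))
```

    Fitting such a model separately for each delay period is what lets a decline in precision (smaller kappa) be distinguished from an increase in guessing (larger guess rate).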

  18. Coding of visual object features and feature conjunctions in the human brain.

    Science.gov (United States)

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

    Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  19. Dissociation of object and spatial visual processing pathways in human extrastriate cortex

    Energy Technology Data Exchange (ETDEWEB)

    Haxby, J.V.; Grady, C.L.; Horwitz, B.; Ungerleider, L.G.; Mishkin, M.; Carson, R.E.; Herscovitch, P.; Schapiro, M.B.; Rapoport, S.I. (National Institutes of Health, Bethesda, MD (USA))

    1991-03-01

    The existence and neuroanatomical locations of separate extrastriate visual pathways for object recognition and spatial localization were investigated in healthy young men. Regional cerebral blood flow was measured by positron emission tomography and bolus injections of H2(15)O, while subjects performed face matching, dot-location matching, or sensorimotor control tasks. Both visual matching tasks activated lateral occipital cortex. Face discrimination alone activated a region of occipitotemporal cortex that was anterior and inferior to the occipital area activated by both tasks. The spatial location task alone activated a region of lateral superior parietal cortex. Perisylvian and anterior temporal cortices were not activated by either task. These results demonstrate the existence of three functionally dissociable regions of human visual extrastriate cortex. The ventral and dorsal locations of the regions specialized for object recognition and spatial localization, respectively, suggest some homology between human and nonhuman primate extrastriate cortex, with displacement in human brain, possibly related to the evolution of phylogenetically newer cortical areas.

  20. Neural Mechanisms Underlying Visual Short-Term Memory Gain for Temporally Distinct Objects.

    Science.gov (United States)

    Ihssen, Niklas; Linden, David E J; Miller, Claire E; Shapiro, Kimron L

    2015-08-01

    Recent research has shown that visual short-term memory (VSTM) can substantially be improved when the to-be-remembered objects are split in 2 half-arrays (i.e., sequenced) or the entire array is shown twice (i.e., repeated), rather than presented simultaneously. Here we investigate the hypothesis that sequencing and repeating displays overcomes attentional "bottlenecks" during simultaneous encoding. Using functional magnetic resonance imaging, we show that sequencing and repeating displays increased brain activation in extrastriate and primary visual areas, relative to simultaneous displays (Study 1). Passively viewing identical stimuli did not increase visual activation (Study 2), ruling out a physical confound. Importantly, areas of the frontoparietal attention network showed increased activation in repetition but not in sequential trials. This dissociation suggests that repeating a display increases attentional control by allowing attention to be reallocated in a second encoding episode. In contrast, sequencing the array poses fewer demands on control, with competition from nonattended objects being reduced by the half-arrays. This idea was corroborated by a third study in which we found optimal VSTM for sequential displays minimizing attentional demands. Importantly these results provide support within the same experimental paradigm for the role of stimulus-driven and top-down attentional control aspects of biased competition theory in setting constraints on VSTM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. A review of functional imaging studies on category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2007-01-01

    such as familiarity and visual complexity. Of the most consistent activations found, none appear to be selective for natural objects or artefacts. The findings reviewed are compatible with theories of category-specificity that assume a widely distributed conceptual system not organized by category....

  2. Real-time visual tracking of less textured three-dimensional objects on mobile platforms

    Science.gov (United States)

    Seo, Byung-Kuk; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2012-12-01

    Natural feature-based approaches are still challenging for mobile applications (e.g., mobile augmented reality), because they are feasible only in limited environments such as highly textured and planar scenes/objects, and they need powerful mobile hardware for fast and reliable tracking. In many cases where conventional approaches are not effective, three-dimensional (3-D) knowledge of target scenes would be beneficial. We present a well-established framework for real-time visual tracking of less textured 3-D objects on mobile platforms. Our framework is based on model-based tracking that efficiently exploits partially known 3-D scene knowledge such as object models and a background's distinctive geometric or photometric knowledge. Moreover, we elaborate on implementation in order to make it suitable for real-time vision processing on mobile hardware. The performance of the framework is tested and evaluated on recent commercially available smartphones, and its feasibility is shown by real-time demonstrations.

  3. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, motivated by making the outcome of objective assessment as close as possible to that of subjective evaluation. We believe that image regions with different degrees of visual saliency should not receive the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions of strong, general and weak saliency. In addition, local feature information such as blockiness, zero-crossing and depth is extracted and combined in a mathematical model to calculate a quality assessment score. Regions with different degrees of saliency are assigned different weights in the mathematical model. Experimental results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
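    The saliency-weighted pooling idea described above can be sketched as follows: a local quality feature map is pooled with larger weights for strongly salient regions. The percentile thresholds, weights and linear combination below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def pooled_quality(feature_map, saliency_map, w_strong=0.6, w_general=0.3, w_weak=0.1):
    # Split the frame into strong / general / weak saliency regions and pool
    # the local quality feature with region-dependent weights.
    strong  = saliency_map >= np.percentile(saliency_map, 80)
    weak    = saliency_map <= np.percentile(saliency_map, 20)
    general = ~strong & ~weak
    return (w_strong * feature_map[strong].mean()
            + w_general * feature_map[general].mean()
            + w_weak * feature_map[weak].mean())

rng = np.random.default_rng(0)
quality  = rng.uniform(0, 1, (64, 64))    # stand-in for a per-block quality feature (e.g. blockiness)
saliency = rng.uniform(0, 1, (64, 64))    # stand-in for a GBVS saliency map
print("pooled score:", round(pooled_quality(quality, saliency), 3))
```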

  4. The Improved SVM Multi Objects' Identification For the Uncalibrated Visual Servoing

    Directory of Open Access Journals (Sweden)

    Min Wang

    2009-03-01

    Full Text Available For the assembly of multiple micro objects in micromanipulation, the first task is to identify the multiple micro parts. We present an improved support vector machine algorithm, which employs invariant-moment-based edge extraction to obtain feature attributes, applies a heuristic attribute reduction algorithm based on the rough set's discernibility matrix to reduce the attributes, and then uses the support vector machine to identify and classify the targets. Visual servoing is the second task. To avoid the complicated calibration of the camera's intrinsic parameters, we apply an improved Broyden's method to estimate the image Jacobian matrix online, which employs a Chebyshev polynomial to construct a cost function approximating the optimal value, achieving fast convergence for the online estimation. Finally, a two-DOF visual controller based on a fuzzy adaptive PD control law for micromanipulation is presented. Experiments on the micro-assembly of micro parts under microscopes confirm that the proposed methods are effective and feasible.
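    The online image-Jacobian estimation mentioned above can be illustrated with a generic Broyden-type rank-one update. This is a minimal sketch under that assumption; the Chebyshev-polynomial cost function the authors add is not reproduced here, and the numbers are placeholders.

```python
import numpy as np

def broyden_update(J, dq, ds):
    # Rank-one Broyden update: J_new = J + ((ds - J dq) dq^T) / (dq^T dq),
    # where dq is the joint-space step and ds the observed image-feature change.
    dq = np.asarray(dq, dtype=float)
    ds = np.asarray(ds, dtype=float)
    return J + np.outer(ds - J @ dq, dq) / np.dot(dq, dq)

J  = np.eye(2)                      # initial guess of the image Jacobian (assumed)
dq = np.array([0.01, -0.02])        # small joint motion (placeholder)
ds = np.array([0.008, -0.011])      # measured feature displacement in the image (placeholder)
print(broyden_update(J, dq, ds))
```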

  5. The Improved SVM Multi Objects' Identification for the Uncalibrated Visual Servoing

    Directory of Open Access Journals (Sweden)

    Xiangjin Zeng

    2009-03-01

    Full Text Available For the assembly of multiple micro objects in micromanipulation, the first task is to identify the multiple micro parts. We present an improved support vector machine algorithm, which employs invariant-moment-based edge extraction to obtain feature attributes, applies a heuristic attribute reduction algorithm based on the rough set's discernibility matrix to reduce the attributes, and then uses the support vector machine to identify and classify the targets. Visual servoing is the second task. To avoid the complicated calibration of the camera's intrinsic parameters, we apply an improved Broyden's method to estimate the image Jacobian matrix online, which employs a Chebyshev polynomial to construct a cost function approximating the optimal value, achieving fast convergence for the online estimation. Finally, a two-DOF visual controller based on a fuzzy adaptive PD control law for micromanipulation is presented. Experiments on the micro-assembly of micro parts under microscopes confirm that the proposed methods are effective and feasible.

  6. Effect of Colour of Object on Simple Visual Reaction Time in Normal Subjects

    Directory of Open Access Journals (Sweden)

    Sunita B. Kalyanshetti

    2014-01-01

    Full Text Available The measure of simple reaction time has been used to evaluate the processing speed of the CNS and the coordination between the sensory and motor systems. As reaction time is influenced by different factors, the impact of the colour of objects in modulating reaction time was investigated in this study. 200 healthy volunteers (100 female and 100 male) in the age group 18-25 years were included as subjects. The subjects were presented with two visual stimuli, viz. red and green light, using an electronic response analyzer. A paired 't' test comparing visual reaction times for red and green colour showed p < 0.05 in males and p < 0.001 in females. It was observed that the response latency for red was shorter than that for green, which can be explained on the basis of the trichromatic theory.
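    As a hedged illustration of the paired comparison reported above (within-subject red versus green reaction times), the sketch below runs a paired t-test on made-up numbers; the data are purely for demonstration and do not reproduce the study's measurements.

```python
from scipy.stats import ttest_rel

rt_red   = [212, 198, 225, 240, 205]    # ms, hypothetical per-subject mean reaction times to red
rt_green = [230, 210, 241, 255, 220]    # ms, hypothetical per-subject mean reaction times to green
result = ttest_rel(rt_red, rt_green)    # paired test: same subjects contribute both conditions
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```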

  7. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    Science.gov (United States)

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects have triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated repeatedly by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable for any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of

  8. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    Science.gov (United States)

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) The data conversion part is designed to convert binary raw data to and from NetCDF data. It can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) The visualization part is designed for displaying grid map series (playing forward or backward) with simple map legend, and displaying temporal trend curves for data on individual map pixels; and 3) The modeling interface is designed for environmental model development by which a set of integrated NetCDF functions is provided for processing NetCDF data. To demonstrate that the NCWin can easily extend the functions of some current GIS software and the Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint for showing NetCDF map animations are given.
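    NCWin itself exposes its NetCDF read/write/visualize functions as a Windows COM component; as a rough, platform-independent illustration of the same kind of read-and-inspect workflow, the sketch below uses the widely available Python netCDF4 library. This is not NCWin's API, and the file name and variable name are placeholders.

```python
from netCDF4 import Dataset

with Dataset("example.nc") as nc:                # placeholder file name for an existing NetCDF file
    print(list(nc.dimensions), list(nc.variables))
    data = nc.variables["temperature"][:]        # placeholder variable name; returns a masked array
    print(data.shape, float(data.mean()))
```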

  9. Shape configuration and category-specificity

    DEFF Research Database (Denmark)

    Gerlach, Christian; Law, I; Paulson, Olaf B.

    2006-01-01

    and fragmented drawings. We also examined whether fragmentation had different impact on the recognition of natural objects and artefacts and found that recognition of artefacts was more affected by fragmentation than recognition of natural objects. Thus, the usual finding of an advantage for artefacts...... in difficult object decision tasks, which is also found in the present experiments with outlines, is reversed when the stimuli are fragmented. This interaction between category (natural versus artefacts) and stimulus type (outlines versus fragmented forms) is in accordance with predictions derived from...... a recent account of category-specificity and lends support to the notion that category-specific impairments can occur for both natural objects and artefacts following damage to pre-semantic stages in visual object recognition. The implications of the present findings are discussed in relation to theories...

  10. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Lei Qin

    2014-05-01

    Full Text Available We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in the case of tracking mistakes. We conducted comparative experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences.
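
    As background, the following Python sketch computes a basic region covariance descriptor from a fixed, hand-picked feature set (pixel coordinates, intensity, and gradient magnitudes). It illustrates the descriptor the paper builds on, not the paper's adaptive feature selection or its clustering-based model update.

```python
# Illustrative sketch of a basic region covariance descriptor (fixed feature set,
# not the adaptive feature selection or clustering-based updating of the paper).
import numpy as np

def region_covariance(gray_region):
    """gray_region: 2-D float array of pixel intensities for the tracked region."""
    h, w = gray_region.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(gray_region)          # first-order derivatives
    # Feature vector per pixel: [x, y, I, |Ix|, |Iy|]
    feats = np.stack([xs.ravel(), ys.ravel(), gray_region.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    return np.cov(feats)                       # 5 x 5 covariance descriptor

# Example: descriptor of a random 30 x 40 patch
patch = np.random.rand(30, 40)
C = region_covariance(patch)
print(C.shape)   # (5, 5)
```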

  11. Attribute-based classification for zero-shot visual object categorization.

    Science.gov (United States)

    Lampert, Christoph H; Nickisch, Hannes; Harmeling, Stefan

    2014-03-01

    We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.
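
    The core idea of attribute-based zero-shot classification can be sketched in a few lines: per-attribute probabilities predicted for an image are scored against each unseen class's attribute signature. The sketch below assumes binary attribute signatures and independently trained attribute classifiers whose outputs are already available; the class names and attribute values are invented for illustration.

```python
# Minimal sketch of direct attribute prediction for zero-shot classification:
# an unseen class is chosen by how well the predicted attribute probabilities of an
# image match that class's binary attribute signature. Attribute classifiers are
# assumed to be trained elsewhere; here their outputs are given directly.
import numpy as np

def dap_predict(attr_probs, class_signatures, eps=1e-6):
    """
    attr_probs:       (n_attributes,) predicted p(attribute present | image)
    class_signatures: dict {class_name: (n_attributes,) 0/1 vector}
    Returns the unseen class with the highest log-likelihood under independence.
    """
    p = np.clip(attr_probs, eps, 1.0 - eps)
    scores = {}
    for name, sig in class_signatures.items():
        sig = np.asarray(sig, dtype=float)
        scores[name] = np.sum(sig * np.log(p) + (1.0 - sig) * np.log(1.0 - p))
    return max(scores, key=scores.get)

# Hypothetical example with three attributes (striped, aquatic, has_hooves):
signatures = {"zebra": [1, 0, 1], "dolphin": [0, 1, 0]}
print(dap_predict(np.array([0.9, 0.1, 0.7]), signatures))   # -> "zebra"
```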

  12. Holding an object one is looking at : Kinesthetic information on the object's distance does not improve visual judgments of its size

    NARCIS (Netherlands)

    Brenner, Eli; Van Damme, Wim J.M.; Smeets, Jeroen B.J.

    1997-01-01

    Visual judgments of distance are often inaccurate. Nevertheless, information on distance must be procured if retinal image size is to be used to judge an object's dimensions. In the present study, we examined whether kinesthetic information about an object's distance - based on the posture of the

  13. Spike synchrony reveals emergence of proto-objects in visual cortex.

    Science.gov (United States)

    Martin, Anne B; von der Heydt, Rüdiger

    2015-04-29

    Neurons at early stages of the visual cortex signal elemental features, such as pieces of contour, but how these signals are organized into perceptual objects is unclear. Theories have proposed that spiking synchrony between these neurons encodes how features are grouped (binding-by-synchrony), but recent studies did not find the predicted increase in synchrony with binding. Here we propose that features are grouped to "proto-objects" by intrinsic feedback circuits that enhance the responses of the participating feature neurons. This hypothesis predicts synchrony exclusively between feature neurons that receive feedback from the same grouping circuit. We recorded from neurons in macaque visual cortex and used border-ownership selectivity, an intrinsic property of the neurons, to infer whether or not two neurons are part of the same grouping circuit. We found that binding produced synchrony between same-circuit neurons, but not between other pairs of neurons, as predicted by the grouping hypothesis. In a selective attention task, synchrony emerged with ignored as well as attended objects, and higher synchrony was associated with faster behavioral responses, as would be expected from early grouping mechanisms that provide the structure for object-based processing. Thus, synchrony could be produced by automatic activation of intrinsic grouping circuits. However, the binding-related elevation of synchrony was weak compared with its random fluctuations, arguing against synchrony as a code for binding. In contrast, feedback grouping circuits encode binding by modulating the response strength of related feature neurons. Thus, our results suggest a novel coding mechanism that might underlie the proto-objects of perception. Copyright © 2015 the authors 0270-6474/15/356860-11$15.00/0.
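
    For readers unfamiliar with how spiking synchrony is typically quantified, the sketch below computes a simple spike-train cross-correlogram on synthetic data; a peak near zero lag indicates near-coincident firing. This is a generic illustration, not the authors' analysis pipeline.

```python
# Illustrative sketch (not the authors' pipeline): a spike-train cross-correlogram,
# a standard way to quantify synchrony between two simultaneously recorded neurons.
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=0.05, bin_size=0.001):
    """spikes_a, spikes_b: spike times in seconds; returns (lags, counts)."""
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    diffs = []
    for t in spikes_a:
        d = spikes_b - t
        diffs.extend(d[np.abs(d) <= max_lag])
    counts, _ = np.histogram(diffs, bins=edges)
    lags = edges[:-1] + bin_size / 2.0
    return lags, counts

# Synthetic example: the second train is a jittered copy of the first
a = np.sort(np.random.uniform(0, 10, 200))
b = np.sort(a + np.random.normal(0, 0.002, a.size))
lags, counts = cross_correlogram(a, b)
print(lags[np.argmax(counts)])   # peak near zero lag indicates synchrony
```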

  14. Contested Categories

    DEFF Research Database (Denmark)

    Drawing on social science perspectives, Contested Categories presents a series of empirical studies that engage with the often shifting and day-to-day realities of life sciences categories. In doing so, it shows how such categories remain contested and dynamic, and that the boundaries they create...

  15. DEEP-SEE: Joint Object Detection, Tracking and Recognition with Application to Visually Impaired Navigational Assistance

    Directory of Open Access Journals (Sweden)

    Ruxandra Tapu

    2017-10-01

    Full Text Available In this paper, we introduce the so-called DEEP-SEE framework that jointly exploits computer vision algorithms and deep convolutional neural networks (CNNs) to detect, track and recognize in real time objects encountered during navigation in the outdoor environment. A first feature concerns an object detection technique designed to localize both static and dynamic objects without any a priori knowledge about their position, type or shape. The methodological core of the proposed approach relies on a novel object tracking method based on two convolutional neural networks trained offline. The key principle consists of alternating between tracking using motion information and predicting the object location in time based on visual similarity. The validation of the tracking technique is performed on standard benchmark VOT datasets, and shows that the proposed approach returns state-of-the-art results while minimizing the computational complexity. Then, the DEEP-SEE framework is integrated into a novel assistive device, designed to improve the cognition of visually impaired (VI) people and to increase their safety when navigating in crowded urban scenes. The validation of our assistive device is performed on a video dataset with 30 elements acquired with the help of VI users. The proposed system shows high accuracy (>90%) and robustness (>90%) scores regardless of the scene dynamics.

  16. Virtual-reality techniques resolve the visual cues used by fruit flies to evaluate object distances.

    Science.gov (United States)

    Schuster, Stefan; Strauss, Roland; Götz, Karl G

    2002-09-17

    Insects can estimate distance or time-to-contact of surrounding objects from locomotion-induced changes in their retinal position and/or size. Freely walking fruit flies (Drosophila melanogaster) use the received mixture of different distance cues to select the nearest objects for subsequent visits. Conventional methods of behavioral analysis fail to elucidate the underlying data extraction. Here we demonstrate first comprehensive solutions of this problem by substituting virtual for real objects; a tracker-controlled 360 degrees panorama converts a fruit fly's changing coordinates into object illusions that require the perception of specific cues to appear at preselected distances up to infinity. An application reveals the following: (1) en-route sampling of retinal-image changes accounts for distance discrimination within a surprising range of at least 8-80 body lengths (20-200 mm). Stereopsis and peering are not involved. (2) Distance from image translation in the expected direction (motion parallax) outweighs distance from image expansion, which accounts for impact-avoiding flight reactions to looming objects. (3) The ability to discriminate distances is robust to artificially delayed updating of image translation. Fruit flies appear to interrelate self-motion and its visual feedback within a surprisingly long time window of about 2 s. The comparative distance inspection practiced in the small fruit fly deserves utilization in self-moving robots.

  17. Development of the Object-Oriented Dynamic Simulation Models Using Visual C++ Freeware

    Directory of Open Access Journals (Sweden)

    Alexander I. Kozynchenko

    2016-01-01

    Full Text Available The paper mostly focuses on the methodological and programming aspects of developing a versatile desktop framework to provide the available basis for the high-performance simulation of dynamical models of different kinds and for diverse applications. So the paper gives some basic structure for creating a dynamical simulation model in C++ which is built on the Win32 platform with an interactive multiwindow interface and uses the lightweight Visual C++ Express as a free integrated development environment. The resultant simulation framework could be a more acceptable alternative to other solutions developed on the basis of commercial tools like Borland C++ or Visual C++ Professional, not to mention the domain specific languages and more specialized ready-made software such as Matlab, Simulink, and Modelica. This approach seems to be justified in the case of complex research object-oriented dynamical models having nonstandard structure, relationships, algorithms, and solvers, as it allows developing solutions of high flexibility. The essence of the model framework is shown using a case study of simulation of moving charged particles in the electrostatic field. The simulation model possesses the necessary visualization and control features such as an interactive input, real time graphical and text output, start, stop, and rate control.
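
    The case study (charged particles in an electrostatic field) can be conveyed with a much smaller sketch than the paper's C++ framework. The Python fragment below integrates the motion of particles around a fixed point charge with simple Euler-type time steps; all parameter values are arbitrary and the code is illustrative only.

```python
# Illustrative Python version of the kind of model the framework targets: charged
# particles moving in the electrostatic field of a fixed point charge at the origin,
# advanced with simple Euler-type time steps. Parameter values are arbitrary.
import numpy as np

K = 8.9875e9  # Coulomb constant, N m^2 / C^2

def simulate(pos, vel, charge, mass, source_q, dt=1e-6, steps=1000):
    """pos, vel: (n, 2) arrays of positions (m) and velocities (m/s)."""
    pos, vel = pos.astype(float), vel.astype(float)
    for _ in range(steps):
        dist = np.linalg.norm(pos, axis=1, keepdims=True)
        force = K * source_q * charge * pos / dist**3   # Coulomb's law, radial
        vel += (force / mass) * dt
        pos += vel * dt
    return pos, vel

p, v = simulate(np.array([[0.1, 0.0]]), np.array([[0.0, 5.0]]),
                charge=1e-9, mass=1e-6, source_q=1e-9)
print(p, v)
```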

  18. Development of a vocabulary of object shapes in a child with a very-early-acquired visual agnosia: a unique case.

    Science.gov (United States)

    Funnell, Elaine; Wilding, John

    2011-02-01

    We report a longitudinal study of an exceptional child (S.R.) whose early-acquired visual agnosia, following encephalitis at 8 weeks of age, did not prevent her from learning to construct an increasing vocabulary of visual object forms (drawn from different categories), albeit slowly. S.R. had problems perceiving subtle differences in shape; she was unable to segment local letters within global displays; and she would bring complex scenes close to her eyes: a symptom suggestive of an attempt to reduce visual crowding. Investigations revealed a robust ability to use the gestalt grouping factors of proximity and collinearity to detect fragmented forms in noisy backgrounds, compared with a very weak ability to segment fragmented forms on the basis of contrasts of shape. When contrasts in spatial grouping and shape were pitted against each other, shape made little contribution, consistent with problems in perceiving complex scenes, but when shape contrast was varied, and spatial grouping was held constant, S.R. showed the same hierarchy of difficulty as the controls, although her responses were slowed. This is the first report of a child's visual-perceptual development following very early neurological impairments to the visual cortex. Her ability to learn to perceive visual shape following damage at a rudimentary stage of perceptual development contrasts starkly with the loss of such ability in childhood cases of acquired visual agnosia that follow damage to the established perceptual system. Clearly, there is a critical period during which neurological damage to the highly active, early developing visual-perceptual system does not prevent but only impairs further learning.

  19. Quantifying the Time Course of Visual Object Processing Using ERPs: It's Time to Up the Game

    Science.gov (United States)

    Rousselet, Guillaume A.; Pernet, Cyril R.

    2011-01-01

    Hundreds of studies have investigated the early ERPs to faces and objects using scalp and intracranial recordings. The vast majority of these studies have used uncontrolled stimuli, inappropriate designs, peak measurements, poor figures, and poor inferential and descriptive group statistics. These problems, together with a tendency to discuss any effect at p < .05 as showing that condition A > condition B, have limited what can be concluded from this literature. Here we describe the main limitations of face and object ERP research and suggest alternative strategies to move forward. The problems plague intracranial and surface ERP studies, but also studies using more advanced techniques – e.g., source space analyses and measurements of network dynamics, as well as many behavioral, fMRI, TMS, and LFP studies. In essence, it is time to stop amassing binary results and start using single-trial analyses to build models of visual perception. PMID:21779262

  20. Foundations of computer vision computational geometry, visual image structures and object shape detection

    CERN Document Server

    Peters, James F

    2017-01-01

    This book introduces the fundamentals of computer vision (CV), with a focus on extracting useful information from digital images and videos. Including a wealth of methods used in detecting and classifying image objects and their shapes, it is the first book to apply a trio of tools (computational geometry, topology and algorithms) in solving CV problems, shape tracking in image object recognition and detecting the repetition of shapes in single images and video frames. Computational geometry provides a visualization of topological structures such as neighborhoods of points embedded in images, while image topology supplies us with structures useful in the analysis and classification of image regions. Algorithms provide a practical, step-by-step means of viewing image structures. The implementations of CV methods in Matlab and Mathematica, classification of chapter problems with the symbols (easily solved) and (challenging) and its extensive glossary of key words, examples and connections with the fabric of C...

  1. Iterative Object Localization Algorithm Using Visual Images with a Reference Coordinate

    Directory of Open Access Journals (Sweden)

    We-Duke Cho

    2008-09-01

    Full Text Available We present a simplified algorithm for localizing an object using multiple visual images that are obtained from widely used digital imaging devices. We use a parallel projection model which supports both zooming and panning of the imaging devices. Our proposed algorithm is based on a virtual viewable plane for creating a relationship between an object position and a reference coordinate. The reference point is obtained from a rough estimate which may be obtained from a pre-estimation process. The algorithm minimizes localization error through an iterative process with relatively low computational complexity. In addition, nonlinear distortion of the digital imaging devices is compensated for during the iterative process. Finally, the performances of several scenarios are evaluated and analyzed in both indoor and outdoor environments.

  2. Object selection costs in visual working memory: A diffusion model analysis of the focus of attention.

    Science.gov (United States)

    Sewell, David K; Lilburn, Simon D; Smith, Philip L

    2016-11-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can occur. The need to orient the focus of attention implies that single-object accounts typically predict response time costs associated with object selection even when working memory is not full (i.e., memory load is less than 4 items). For other theories that assume storage of multiple items in the focus of attention, predictions depend on specific assumptions about the way resources are allocated among items held in the focus, and how this affects the time course of retrieval of items from the focus. These broad theoretical accounts have been difficult to distinguish because conventional analyses fail to separate components of empirical response times related to decision-making from components related to selection and retrieval processes associated with accessing information in working memory. To better distinguish these response time components from one another, we analyze data from a probed visual working memory task using extensions of the diffusion decision model. Analysis of model parameters revealed that increases in memory load resulted in (a) reductions in the quality of the underlying stimulus representations in a manner consistent with a sample size model of visual working memory capacity and (b) systematic increases in the time needed to selectively access a probed representation in memory. The results are consistent with single-object theories of the focus of attention. The results are also consistent with a subset of theories that assume a multiobject focus of attention in which resource allocation diminishes both the quality and accessibility of the underlying representations. (PsycINFO Database Record (c) 2016
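
    As background for the modeling approach, the sketch below simulates single trials of a standard two-boundary diffusion decision model (drift rate, boundary separation, starting point, and non-decision time); the authors' extensions for probed visual working memory are not reproduced, and all parameter values are arbitrary.

```python
# Minimal sketch of a standard two-boundary diffusion decision model (not the
# authors' extended model): simulate single-trial evidence accumulation and
# return the choice and response time.
import numpy as np

def ddm_trial(v=0.2, a=1.0, z=0.5, s=1.0, dt=0.001, t_nd=0.3, rng=None):
    """v: drift rate, a: boundary separation, z: starting point (0 < z < a),
    s: diffusion noise, t_nd: non-decision time. Returns (choice, RT in s)."""
    rng = rng or np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= a else 0          # upper vs. lower boundary
    return choice, t + t_nd

rng = np.random.default_rng(0)
trials = [ddm_trial(rng=rng) for _ in range(1000)]
print(np.mean([c for c, _ in trials]), np.mean([rt for _, rt in trials]))
```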

  3. Distributed dendritic processing facilitates object detection: a computational analysis on the visual system of the fly.

    Science.gov (United States)

    Hennig, Patrick; Möller, Ralf; Egelhaaf, Martin

    2008-08-28

    Detecting objects is an important task when moving through a natural environment. Flies, for example, may land on salient objects or may avoid collisions with them. The neuronal ensemble of Figure Detection cells (FD-cells) in the visual system of the fly is likely to be involved in controlling these behaviours, as these cells are more sensitive to objects than to extended background structures. Until now, the computations in the presynaptic neuronal network of FD-cells and, in particular, the functional significance of the experimentally established distributed dendritic processing of excitatory and inhibitory inputs have not been understood. We use model simulations to analyse the neuronal computations responsible for the preference of FD-cells for small objects. We employed a new modelling approach which allowed us to account for the spatial spread of electrical signals in the dendrites while avoiding detailed compartmental modelling. The models are based on available physiological and anatomical data. Three models were tested, each implementing an inhibitory neural circuit but differing in the spatial arrangement of the inhibitory interaction. Parameter optimisation with an evolutionary algorithm revealed that only distributed dendritic processing satisfies the constraints arising from electrophysiological experiments. In contrast to a direct dendro-dendritic inhibition of the FD-cell (Direct Distributed Inhibition model), an inhibition of its presynaptic retinotopic elements (Indirect Distributed Inhibition model) requires smaller changes in input resistance in the inhibited neurons during visual stimulation. Distributed dendritic inhibition of retinotopic elements as implemented in our Indirect Distributed Inhibition model is the most plausible wiring scheme for the neuronal circuit of FD-cells. This microcircuit is computationally similar to lateral inhibition between the retinotopic elements. Hence, distributed inhibition might be an alternative explanation of

  4. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    Science.gov (United States)

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model of visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well-circumscribed brain lesion of stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral to the normal flow of shape and of contour information into the ventral stream system that allows objects to be recognized.

  5. A bio-inspired method and system for visual object-based attention and segmentation

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak

    2010-04-01

    This paper describes a method and system of human-like attention and object segmentation in visual scenes that (1) attends to regions in a scene in their rank of saliency in the image, (2) extracts the boundary of an attended proto-object based on feature contours, and (3) can be biased to boost the attention paid to specific features in a scene, such as those of a desired target object in static and video imagery. The purpose of the system is to identify regions of a scene of potential importance and extract the region data for processing by an object recognition and classification algorithm. The attention process can be performed in a default, bottom-up manner or a directed, top-down manner which will assign a preference to certain features over others. One can apply this system to any static scene, whether that is a still photograph or imagery captured from video. We employ algorithms that are motivated by findings in neuroscience, psychology, and cognitive science to construct a system that is novel in its modular and stepwise approach to the problems of attention and region extraction, its application of a flooding algorithm to break apart an image into smaller proto-objects based on feature density, and its ability to join smaller regions of similar features into larger proto-objects. This approach allows many complicated operations to be carried out by the system in a very short time, approaching real-time. A researcher can use this system as a robust front-end to a larger system that includes object recognition and scene understanding modules; it is engineered to function over a broad range of situations and can be applied to any scene with minimal tuning from the user.
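
    A rough sense of the bottom-up part of such a system can be given by a center-surround saliency sketch on a single intensity channel, shown below. It is loosely in the spirit of classical bottom-up saliency models and is not the method described in the paper; the image and filter scales are arbitrary.

```python
# Center-surround saliency on a single intensity channel (illustrative only,
# not the system described in the paper).
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(img, center_sigma=2, surround_sigma=8):
    center = gaussian_filter(img.astype(float), center_sigma)
    surround = gaussian_filter(img.astype(float), surround_sigma)
    saliency = np.abs(center - surround)              # center-surround contrast
    return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)

img = np.zeros((100, 100))
img[40:60, 40:60] = 1.0                               # one bright "proto-object"
sal = intensity_saliency(img)
print(np.unravel_index(np.argmax(sal), sal.shape))    # most salient pixel
```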

  6. Nicotine deprivation elevates neural representation of smoking-related cues in object-sensitive visual cortex: a proof of concept study.

    Science.gov (United States)

    Havermans, Anne; van Schayck, Onno C P; Vuurman, Eric F P M; Riedel, Wim J; van den Hurk, Job

    2017-08-01

    In the current study, we use functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to investigate whether tobacco addiction biases basic visual processing in favour of smoking-related images. We hypothesize that the neural representation of smoking-related stimuli in the lateral occipital complex (LOC) is elevated after a period of nicotine deprivation compared to a satiated state, but that this is not the case for object categories unrelated to smoking. Current smokers (≥10 cigarettes a day) underwent two fMRI scanning sessions: one after 10 h of nicotine abstinence and the other one after smoking ad libitum. Regional blood oxygenated level-dependent (BOLD) response was measured while participants were presented with 24 blocks of 8 colour-matched pictures of cigarettes, pencils or chairs. The functional data of 10 participants were analysed through a pattern classification approach. In bilateral LOC clusters, the classifier was able to discriminate between patterns of activity elicited by visually similar smoking-related (cigarettes) and neutral objects (pencils) above empirically estimated chance levels only during deprivation (mean = 61.0%, chance (permutations) = 50.0%, p = .01) but not during satiation (mean = 53.5%, chance (permutations) = 49.9%, ns.). For all other stimulus contrasts, there was no difference in discriminability between the deprived and satiated conditions. The discriminability between smoking and non-smoking visual objects was elevated in object-selective brain region LOC after a period of nicotine abstinence. This indicates that attention bias likely affects basic visual object processing.
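
    The decoding logic behind this kind of result follows the standard MVPA recipe: cross-validated linear classification of voxel patterns, compared against chance. The sketch below runs that recipe on simulated data with scikit-learn; it is a generic illustration, not the study's preprocessing or region-of-interest pipeline, and the data are synthetic.

```python
# Generic MVPA recipe on synthetic data: cross-validated linear classification of
# trial-wise voxel patterns from two stimulus categories, compared against chance.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 48, 200
labels = np.repeat([0, 1], n_trials // 2)          # e.g. cigarettes vs. pencils
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5                  # weak category signal in 20 voxels

acc = cross_val_score(LinearSVC(), patterns, labels, cv=6)
print("mean decoding accuracy:", acc.mean())       # chance level is 0.5
```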

  7. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference-effect: We show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  8. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  9. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    Science.gov (United States)

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.
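
    For readers unfamiliar with how a CDA is derived, the sketch below computes a contralateral-minus-ipsilateral difference wave from segmented EEG epochs. Array shapes, channel pairings, and the synthetic data are illustrative only and do not reflect the study's recording setup.

```python
# Generic CDA computation: contralateral minus ipsilateral activity at homologous
# posterior electrodes, averaged over trials. Shapes and channel indices are
# illustrative; the data here are synthetic noise.
import numpy as np

def cda_wave(epochs, memorized_side, left_chans, right_chans):
    """
    epochs:         (n_trials, n_channels, n_times) segmented EEG
    memorized_side: (n_trials,) array of 'left' / 'right' memory-array sides
    """
    left = epochs[:, left_chans, :].mean(axis=1)     # left-hemisphere electrodes
    right = epochs[:, right_chans, :].mean(axis=1)   # right-hemisphere electrodes
    is_left = (memorized_side == "left")[:, None]
    contra = np.where(is_left, right, left)          # opposite to memorized side
    ipsi = np.where(is_left, left, right)
    return (contra - ipsi).mean(axis=0)              # CDA difference wave

rng = np.random.default_rng(3)
epochs = rng.normal(size=(100, 4, 250))
side = rng.choice(["left", "right"], size=100)
print(cda_wave(epochs, side, left_chans=[0, 1], right_chans=[2, 3]).shape)  # (250,)
```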

  10. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors 0270-6474/14/3414170-11$15.00/0.

  11. Object-centered representations support flexible exogenous visual attention across translation and reflection.

    Science.gov (United States)

    Lin, Zhicheng

    2013-11-01

    Visual attention can be deployed to stimuli based on our willful, top-down goal (endogenous attention) or on their intrinsic saliency against the background (exogenous attention). Flexibility is thought to be a hallmark of endogenous attention, whereas decades of research show that exogenous attention is attracted to the retinotopic locations of the salient stimuli. However, to the extent that salient stimuli in the natural environment usually form specific spatial relations with the surrounding context and are dynamic, exogenous attention, to be adaptive, should embrace these structural regularities. Here we test a non-retinotopic, object-centered mechanism in exogenous attention, in which exogenous attention is dynamically attracted to a relative, object-centered location. Using a moving frame configuration, we presented two frames in succession, forming either apparent translational motion or mirror reflection, with a completely uninformative, transient cue presented at one of the item locations in the first frame. Even though the cue is presented in a spatially separate frame, in both translation and mirror reflection, behavioral performance in visual search is enhanced when the target in the second frame appears at the same relative location as the cue rather than at other locations. These results provide unambiguous evidence for non-retinotopic exogenous attention and further reveal an object-centered mechanism supporting flexible exogenous attention. Moreover, attentional generalization across mirror reflection may constitute an attentional correlate of perceptual generalization across lateral mirror images, supporting an adaptive, functional account of mirror-image confusion. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    Science.gov (United States)

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    Science.gov (United States)

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low
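
    Two of the simpler statistics discussed (single-unit kurtosis and a population sparseness index) can be computed directly from a neuron-by-stimulus response matrix, as sketched below on synthetic data; the Pareto tail index and intrinsic-dimensionality estimates used in the paper require more elaborate fitting and are not reproduced here.

```python
# Two of the simpler response statistics on a synthetic neuron-by-stimulus matrix:
# excess kurtosis of each unit's responses (selectivity) and a Treves-Rolls
# population sparseness index per stimulus.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
responses = rng.lognormal(mean=0.0, sigma=1.0, size=(674, 806))  # neurons x stimuli

unit_kurtosis = kurtosis(responses, axis=1)      # high values = sparse, selective units

def treves_rolls_sparseness(pop_response):
    r = np.asarray(pop_response, dtype=float)
    return (r.mean() ** 2) / np.mean(r ** 2)     # near 1 = dense, near 0 = sparse

pop_sparseness = np.array([treves_rolls_sparseness(responses[:, s])
                           for s in range(responses.shape[1])])
print(unit_kurtosis.mean(), pop_sparseness.mean())
```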

  14. Metacognition of visual short-term memory: Dissociation between objective and subjective components of VSTM

    Directory of Open Access Journals (Sweden)

    Silvia eBona

    2013-02-01

    Full Text Available The relationship between the objective accuracy of visual short-term memory (VSTM) representations and their subjective conscious experience is unknown. We investigated this issue by assessing how the objective and subjective components of VSTM in a delayed cue-target orientation discrimination task are affected by intervening distractors. On each trial, participants were shown a memory cue (a grating), the orientation of which they were asked to hold in memory. On approximately half of the trials, a distractor grating appeared during the maintenance interval; its orientation was either identical to that of the memory cue, or it differed by 10 or 40 degrees. The distractors were masked and presented briefly, so they were only consciously perceived on a subset of trials. At the end of the delay period, a memory test probe was presented, and participants were asked to indicate whether it was tilted to the left or right relative to the memory cue (VSTM accuracy; objective performance). In order to assess subjective metacognition, participants were asked to indicate the vividness of their memory for the original memory cue. Finally, participants were asked to rate their awareness of the distractor. Results showed that objective VSTM performance was impaired by distractors only when the distractors were very different from the cue, and that this occurred with both subjectively visible and invisible distractors. Subjective metacognition, however, was impaired by distractors of all orientations, but only when these distractors were subjectively invisible. Our results thus indicate that the objective and subjective components of VSTM are to some extent dissociable.

  15. Contralateral delay activity tracks object identity information in visual short term memory.

    Science.gov (United States)

    Gao, Zaifeng; Xu, Xiaotian; Chen, Zhibo; Yin, Jun; Shen, Mowei; Shui, Rende

    2011-08-11

    Previous studies suggested that the ERP component contralateral delay activity (CDA) tracks the number of objects containing identity information stored in visual short-term memory (VSTM). Later MEG and fMRI studies implied that its neural source lies in the superior IPS. However, since the memorized stimuli in previous studies were displayed in distinct spatial locations, the CDA may instead track object-location information. Moreover, a recent study implied that the activation in the superior IPS reflects the location load. The current research thus explored whether the CDA tracks the object-location load or the object-identity load, and its neural sources. Participants were asked to remember one color, four identical colors or four distinct colors. The four-identical-color condition was the critical one because it contains the same amount of identity information as one color but the same amount of location information as four distinct colors. To ensure that the participants indeed selected four colors in the four-identical-color condition, we also split the participants into two groups (low- vs. high-capacity), analyzed the late positive component (LPC) in the prefrontal area, and collected the participants' subjective reports. Our results revealed that most of the participants selected four identical colors. Moreover, regardless of capacity group, there was no difference in CDA between one color and four identical colors, yet both were lower than for four distinct colors. Besides, the source of the CDA was located in the superior parietal lobule, which is very close to the superior IPS. These results support the claim that the CDA tracks object-identity information in VSTM. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. DOCUMENTATION OF HISTORICAL UNDERGROUND OBJECT IN SKORKOV VILLAGE WITH SELECTED MEASURING METHODS, DATA ANALYSIS AND VISUALIZATION

    Directory of Open Access Journals (Sweden)

    A. Dlesk

    2016-06-01

    Full Text Available The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km away from Prague. The area is known as a former military camp from the Thirty Years’ War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m under the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. Then the 3D model of the object has been created and combined with cadastral data, an orthophoto, historical maps and a digital surface model, which was made by a photogrammetric method using a remotely piloted aircraft system. Measurement has then been carried out with ground-penetrating radar. The data have been analyzed and the results compared with the real status. All the data have been combined and visualized in one 3D model. Finally, a discussion of the advantages and disadvantages of the measuring methods used has been opened. The tested methodology has also been used for the documentation of other historical objects in this area. This project has been created as part of research at the company EuroGV s.r.o., led by Ing. Karel Vach, CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from Czech Technical University in Prague and Miloš Gavenda, the renovator.

  17. Documentation of Historical Underground Object in Skorkov Village with Selected Measuring Methods, Data Analysis and Visualization

    Science.gov (United States)

    Dlesk, A.

    2016-06-01

    The author analyzes current methods of 3D documentation of historical tunnels in Skorkov village, which lies on the Jizera river, approximately 30 km away from Prague. The area is known as a former military camp from the Thirty Years' War in the 17th century. There is an extensive underground compound with one entrance corridor and two transverse corridors, situated approximately 2 to 5 m under the local development. The object has been partly documented by the geodetic polar method, intersection photogrammetry, image-based modelling and laser scanning. The data have been analyzed and the methods compared. Then the 3D model of the object has been created and combined with cadastral data, an orthophoto, historical maps and a digital surface model, which was made by a photogrammetric method using a remotely piloted aircraft system. Measurement has then been carried out with ground-penetrating radar. The data have been analyzed and the results compared with the real status. All the data have been combined and visualized in one 3D model. Finally, a discussion of the advantages and disadvantages of the measuring methods used has been opened. The tested methodology has also been used for the documentation of other historical objects in this area. This project has been created as part of research at the company EuroGV s.r.o., led by Ing. Karel Vach, CSc., in cooperation with prof. Dr. Ing. Karel Pavelka from Czech Technical University in Prague and Miloš Gavenda, the renovator.

  18. Many-objective optimization and visual analytics reveal key trade-offs for London's water supply

    Science.gov (United States)

    Matrosov, Evgenii S.; Huskova, Ivana; Kasprzyk, Joseph R.; Harou, Julien J.; Lambert, Chris; Reed, Patrick M.

    2015-12-01

    In this study, we link a water resource management simulator to multi-objective search to reveal the key trade-offs inherent in planning a real-world water resource system. We consider new supplies and demand management (conservation) options while seeking to elucidate the trade-offs between the best portfolios of schemes to satisfy projected water demands. Alternative system designs are evaluated using performance measures that minimize capital and operating costs and energy use while maximizing resilience, engineering and environmental metrics, subject to supply reliability constraints. Our analysis shows that many-objective evolutionary optimization coupled with state-of-the-art visual analytics can help planners discover more diverse water supply system designs and better understand their inherent trade-offs. The approach is used to explore future water supply options for the Thames water resource system (including London's water supply). New supply options include a new reservoir, water transfers, artificial recharge, wastewater reuse and brackish groundwater desalination. Demand management options include leakage reduction, compulsory metering and seasonal tariffs. The Thames system's Pareto approximate portfolios cluster into distinct groups of water supply options; for example, implementing a pipe refurbishment program leads to higher capital costs but greater reliability. This study highlights that traditional least-cost, reliability-constrained design of water supply systems masks asset combinations whose benefits only become apparent when more planning objectives are considered.
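
    The Pareto-sorting idea underlying such many-objective analyses can be illustrated compactly: keep only the candidate portfolios that no other portfolio beats on every objective. The sketch below does this for two invented minimization objectives; the actual study couples a water-resource simulator with a many-objective evolutionary algorithm, which is not reproduced here.

```python
# Toy Pareto-dominance filter: keep portfolios not beaten on every objective.
# Both objectives are invented (e.g. capital cost and unreliability) and minimized.
import numpy as np

def pareto_front(objectives):
    """objectives: (n_solutions, n_objectives) array, all objectives minimized."""
    obj = np.asarray(objectives, dtype=float)
    front = []
    for i in range(len(obj)):
        # Solution i is dominated if some other row is <= everywhere and < somewhere.
        dominated = np.any(np.all(obj <= obj[i], axis=1) &
                           np.any(obj < obj[i], axis=1))
        if not dominated:
            front.append(i)
    return np.array(front)

rng = np.random.default_rng(2)
portfolios = rng.random((50, 2))        # columns: cost, unreliability
print(pareto_front(portfolios))         # indices of non-dominated portfolios
```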

  19. Visual Stability of Objects and Environments Viewed through Head-Mounted Displays

    Science.gov (United States)

    Ellis, Stephen R.; Adelstein, Bernard D.

    2015-01-01

    Virtual Environments (aka Virtual Reality) is again catching the public imagination, and a number of startups (e.g. Oculus) and even not-so-startup companies (e.g. Microsoft) are trying to develop display systems to capitalize on this renewed interest. All acknowledge that this time they will get it right by providing the required dynamic fidelity, visual quality, and interesting content for the concept of VR to take off and change the world in ways it failed to do in past incarnations. Some of the surprisingly long historical background of the form of direct simulation that underlies virtual environment and augmented reality displays will be briefly reviewed. An example of a mid-1990s augmented reality display system with good dynamic performance from our lab will be used to illustrate some of the underlying phenomena and technology concerning visual stability of virtual environments and objects during movement. In conclusion, some idealized performance characteristics for a reference system will be proposed. Interestingly, many systems more or less on the market now may actually meet many of these proposed technical requirements. This observation leads to the conclusion that the current success of the IT firms trying to commercialize the technology will depend on the hidden costs of using the systems as well as the development of interesting and compelling content.

  20. BUILDING A BILLION SPATIO-TEMPORAL OBJECT SEARCH AND VISUALIZATION PLATFORM

    Directory of Open Access Journals (Sweden)

    D. Kakkar

    2017-10-01

    Full Text Available With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC, an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.

  1. Building a Billion Spatio-Temporal Object Search and Visualization Platform

    Science.gov (United States)

    Kakkar, D.; Lewis, B.

    2017-10-01

    With funding from the Sloan Foundation and Harvard Dataverse, the Harvard Center for Geographic Analysis (CGA) has developed a prototype spatio-temporal visualization platform called the Billion Object Platform or BOP. The goal of the project is to lower barriers for scholars who wish to access large, streaming, spatio-temporal datasets. The BOP is now loaded with the latest billion geo-tweets, and is fed a real-time stream of about 1 million tweets per day. The geo-tweets are enriched with sentiment and census/admin boundary codes when they enter the system. The system is open source and is currently hosted on Massachusetts Open Cloud (MOC), an OpenStack environment with all components deployed in Docker orchestrated by Kontena. This paper will provide an overview of the BOP architecture, which is built on an open source stack consisting of Apache Lucene, Solr, Kafka, Zookeeper, Swagger, scikit-learn, OpenLayers, and AngularJS. The paper will further discuss the approach used for harvesting, enriching, streaming, storing, indexing, visualizing and querying a billion streaming geo-tweets.
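
    To give a concrete sense of the kind of query such a platform serves, the sketch below issues a spatio-temporal filter query to a Solr core over HTTP. The endpoint URL and field names are invented for illustration and are not the BOP's actual schema.

```python
# Hypothetical spatio-temporal query against a Solr core over HTTP. The URL and
# field names ("location", "created_at") are invented for illustration; they are
# not the BOP's actual schema.
import requests

params = {
    "q": "*:*",
    "fq": [
        "location:[42.2,-71.2 TO 42.4,-70.9]",                        # lat,lon box
        "created_at:[2017-01-01T00:00:00Z TO 2017-01-02T00:00:00Z]",  # time range
    ],
    "rows": 10,
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/geotweets/select", params=params)
print(resp.json()["response"]["numFound"])
```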

  2. Subjective and objective measurements of visual fatigue induced by excessive disparities in stereoscopic images

    Science.gov (United States)

    Jung, Yong Ju; Kim, Dongchan; Sohn, Hosik; Lee, Seong-il; Park, Hyun Wook; Ro, Yong Man

    2013-03-01

    As stereoscopic displays have spread, it is important to know what really causes the visual fatigue and discomfort and what happens in the visual system in the brain behind the retina while viewing stereoscopic 3D images on the displays. In this study, functional magnetic resonance imaging (fMRI) was used for the objective measurement to assess the human brain regions involved in the processing of the stereoscopic stimuli with excessive disparities. Based on the subjective measurement results, we selected two subsets of comfort videos and discomfort videos in our dataset. Then, a fMRI experiment was conducted with the subsets of comfort and discomfort videos in order to identify which brain regions activated while viewing the discomfort videos in a stereoscopic display. We found that, when viewing a stereoscopic display, the right middle frontal gyrus, the right inferior frontal gyrus, the right intraparietal lobule, the right middle temporal gyrus, and the bilateral cuneus were significantly activated during the processing of excessive disparities, compared to those of small disparities (< 1 degree).

  3. Efficient data exchange: Integrating a vector GIS with an object-oriented, 3-D visualization system

    International Nuclear Information System (INIS)

    Kuiper, J.; Ayers, A.; Johnson, R.; Tolbert-Smith, M.

    1996-01-01

    A common problem encountered in Geographic Information System (GIS) modeling is the exchange of data between different software packages to best utilize the unique features of each package. This paper describes a project to integrate two systems through efficient data exchange. The first is a widely used GIS based on a relational data model. This system has a broad set of data input, processing, and output capabilities, but lacks three-dimensional (3-D) visualization and certain modeling functions. The second system is a specialized object-oriented package designed for 3-D visualization and modeling. Although this second system is useful for subsurface modeling and hazardous waste site characterization, it does not provide many of the capabilities of a complete GIS. The system-integration project resulted in an easy-to-use program to transfer information between the systems, making many of the more complex conversion issues transparent to the user. The strengths of both systems are accessible, allowing the scientist more time to focus on analysis. This paper details the capabilities of the two systems, explains the technical issues associated with data exchange and how they were solved, and outlines an example analysis project that used the integrated systems.

  4. The influence of print exposure on the body-object interaction effect in visual word recognition.

    Science.gov (United States)

    Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M

    2012-01-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  5. The Influence of Print Exposure on the Body-Object Interaction Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Dana eHansen

    2012-05-01

    Full Text Available We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger facilitatory BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that a facilitatory BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  6. Neural dynamics of object-based multifocal visual spatial attention and priming: object cueing, useful-field-of-view, and crowding.

    Science.gov (United States)

    Foley, Nicholas C; Grossberg, Stephen; Mingolla, Ennio

    2012-08-01

    How are spatial and object attention coordinated to achieve rapid object learning and recognition during eye movement search? How do prefrontal priming and parietal spatial mechanisms interact to determine the reaction time costs of intra-object attention shifts, inter-object attention shifts, and shifts between visible objects and covertly cued locations? What factors underlie individual differences in the timing and frequency of such attentional shifts? How do transient and sustained spatial attentional mechanisms work and interact? How can volition, mediated via the basal ganglia, influence the span of spatial attention? A neural model is developed of how spatial attention in the where cortical stream coordinates view-invariant object category learning in the what cortical stream under free viewing conditions. The model simulates psychological data about the dynamics of covert attention priming and switching requiring multifocal attention without eye movements. The model predicts how "attentional shrouds" are formed when surface representations in cortical area V4 resonate with spatial attention in posterior parietal cortex (PPC) and prefrontal cortex (PFC), while shrouds compete among themselves for dominance. Winning shrouds support invariant object category learning, and active surface-shroud resonances support conscious surface perception and recognition. Attentive competition between multiple objects and cues simulates reaction-time data from the two-object cueing paradigm. The relative strength of sustained surface-driven and fast-transient motion-driven spatial attention controls individual differences in reaction time for invalid cues. Competition between surface-driven attentional shrouds controls individual differences in detection rate of peripheral targets in useful-field-of-view tasks. The model proposes how the strength of competition can be mediated, through learning or momentary changes in volition, by the basal ganglia. A new explanation of

  7. 3D geospatial visualizations: Animation and motion effects on spatial objects

    Science.gov (United States)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an impressive navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step further, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be applied to 3D models. However, major GIS-based functionalities combined with the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) or motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.

  8. Visual object naming in patients with small lesions centered at the left temporopolar region.

    Science.gov (United States)

    Campo, Pablo; Poch, Claudia; Toledano, Rafael; Igoa, José Manuel; Belinchón, Mercedes; García-Morales, Irene; Gil-Nagel, Antonio

    2016-01-01

    Naming is considered a left hemisphere function that operates according to a posterior-anterior specificity gradient, with more fine-grained information processed in most anterior regions of the temporal lobe (ATL), including the temporal pole (TP). Word finding difficulties are typically assessed using visual confrontation naming tasks, and have been associated with selective damage to ATL resulting from different aetiologies. Nonetheless, the role of the ATL and, more specifically, of the TP in the naming network is not completely established. Most of the accumulated evidence is based on studies on patients with extensive lesions, often bilateral. Furthermore, there is a considerable variability in the anatomical definition of ATL. To better understand the specific involvement of the left TP in visual object naming, we assessed a group of patients with an epileptogenic lesion centered at the TP, and compared their performance with that of a strictly matched control group. We also administered a battery of verbal and non-verbal semantic tasks that was used as a semantic memory baseline. Patients showed an impaired naming ability, manifesting in a certain degree of anomia and semantically related naming errors, which was influenced by concept familiarity. This pattern took place in a context of mild semantic dysfunction that was evident in different types and modalities of semantic tasks. Therefore, current findings demonstrate that a restricted lesion to the left TP can cause a significant deficit in object naming. Of importance, the observed semantic impairment was far from the devastating degradation observed in semantic dementia and other bilateral conditions.

  9. Rapid and Objective Assessment of Neural Function in Autism Spectrum Disorder Using Transient Visual Evoked Potentials.

    Directory of Open Access Journals (Sweden)

    Paige M Siper

    Full Text Available There is a critical need to identify biomarkers and objective outcome measures that can be used to understand underlying neural mechanisms in autism spectrum disorder (ASD). Visual evoked potentials (VEPs) offer a noninvasive technique to evaluate the functional integrity of neural mechanisms, specifically visual pathways, while probing for disease pathophysiology. Transient VEPs (tVEPs) were obtained from 96 unmedicated children, including 37 children with ASD, 36 typically developing (TD) children, and 23 unaffected siblings (SIBS). A conventional contrast-reversing checkerboard condition was compared to a novel short-duration condition, which was developed to enable objective data collection from severely affected populations who are often excluded from electroencephalographic (EEG) studies. Children with ASD showed significantly smaller amplitudes compared to TD children at two of the earliest critical VEP components, P60-N75 and N75-P100. SIBS showed intermediate responses relative to ASD and TD groups. There were no group differences in response latency. Frequency band analyses indicated significantly weaker responses for the ASD group in bands encompassing gamma-wave activity. Ninety-two percent of children with ASD were able to complete the short-duration condition compared to 68% for the standard condition. The current study establishes the utility of a short-duration tVEP test for use in children at varying levels of functioning and describes neural abnormalities in children with idiopathic ASD. Implications for excitatory/inhibitory balance as well as the potential application of VEP for use in clinical trials are discussed.

  10. Development of Tool Representations in the Dorsal and Ventral Visual Object Processing Pathways

    Science.gov (United States)

    Kersey, Alyssa J.; Clark, Tyia S.; Lussier, Courtney A.; Mahon, Bradford Z.; Cantlon, Jessica F.

    2016-01-01

    Tools represent a special class of objects, because they are processed across both the dorsal and ventral visual object processing pathways. Three core regions are known to be involved in tool processing: the left posterior middle temporal gyrus, the medial fusiform gyrus (bilaterally), and the left inferior parietal lobule. A critical and relatively unexplored issue concerns whether, in development, tool preferences emerge at the same time and to a similar degree across all regions of the tool-processing network. To test this issue, we used functional magnetic resonance imaging to measure the neural amplitude, peak location, and the dispersion of tool-related neural responses in the youngest sample of children tested to date in this domain (ages 4–8 years). We show that children recruit overlapping regions of the adult tool-processing network and also exhibit similar patterns of co-activation across the network to adults. The amplitude and co-activation data show that the core components of the tool-processing network are established by age 4. Our findings on the distributions of peak location and dispersion of activation indicate that the tool network undergoes refinement between ages 4 and 8 years. PMID:26108614

  11. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal, and additional sensory information, have limited effect on this.

  12. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    Science.gov (United States)

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  13. Mobile visual object identification: from SIFT-BoF-RANSAC to Sketchprint

    Science.gov (United States)

    Voloshynovskiy, Sviatoslav; Diephuis, Maurits; Holotyak, Taras

    2015-03-01

    Mobile object identification based on visual features finds many applications in interaction with physical objects and in security. Discriminative and robust content representation plays a central role in object and content identification. Complex post-processing methods are used to compress descriptors and their geometric information, aggregate them into more compact and discriminative representations, and finally re-rank the results based on the similarity geometries of descriptors. Unfortunately, most existing descriptors are not very robust or discriminative once applied to varied content such as real images, text or noise-like microstructures, and they require at least 500-1,000 descriptors per image for reliable identification. At the same time, geometric re-ranking procedures are still too complex to be applied to the numerous candidates obtained from feature-similarity-based search alone. This restricts the list of candidates to fewer than 1,000, which in turn increases the probability of a miss. In addition, the security and privacy of content representation has become a hot research topic in the multimedia and security communities. In this paper, we introduce a new framework for non-local content representation based on SketchPrint descriptors. It extends the properties of local descriptors to a more informative and discriminative, yet geometrically invariant, content representation. In particular, it allows images to be compactly represented by 100 SketchPrint descriptors without being fully dependent on re-ranking methods. We consider several use cases, applying SketchPrint descriptors to natural images, text documents, packages and micro-structures, and compare them with traditional local descriptors.
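    For context, a rough Python sketch of the conventional SIFT-BoF-RANSAC baseline named in the title is shown below (SketchPrint itself is not specified here in enough detail to implement). It uses OpenCV and scikit-learn; the vocabulary size, ratio-test threshold, and inlier threshold are illustrative assumptions.

    import cv2                      # SIFT is in the main module in recent opencv-python builds
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    sift = cv2.SIFT_create()

    def descriptors(path):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        return desc

    def build_vocabulary(image_paths, k=1000):
        # Cluster all local descriptors into a k-word visual vocabulary.
        stacked = np.vstack([d for d in (descriptors(p) for p in image_paths) if d is not None])
        return MiniBatchKMeans(n_clusters=k).fit(stacked)

    def bof_histogram(desc, vocabulary):
        # Quantize descriptors and build an L1-normalized bag-of-features histogram.
        words = vocabulary.predict(desc)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    def ransac_inliers(path_a, path_b, ratio=0.75):
        # Geometric verification step: count RANSAC homography inliers between two images.
        kp_a, d_a = sift.detectAndCompute(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), None)
        kp_b, d_b = sift.detectAndCompute(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), None)
        if d_a is None or d_b is None:
            return 0
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d_a, d_b, k=2)
        good = []
        for pair in matches:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        if len(good) < 4:
            return 0
        src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        _, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return int(inlier_mask.sum()) if inlier_mask is not None else 0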

  14. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  15. Category I structures program

    International Nuclear Information System (INIS)

    Endebrock, E.G.; Dove, R.C.

    1981-01-01

    The objective of the Category I Structures Program is to supply experimental and analytical information needed to assess the structural capacity of Category I structures (excluding the reactor containment building). Because the shear wall is a principal element of a Category I structure, and because relatively little experimental information is available on shear walls, it was selected as the test element for the experimental program. The large load capacities of shear walls in Category I structures dictate that the experimental tests be conducted on small-size shear wall structures that incorporate the general construction details and characteristics of as-built shear walls.

  16. The Effect of Inversion on 3- to 5-Year-Olds' Recognition of Face and Nonface Visual Objects

    Science.gov (United States)

    Picozzi, Marta; Cassia, Viola Macchi; Turati, Chiara; Vescovo, Elena

    2009-01-01

    This study compared the effect of stimulus inversion on 3- to 5-year-olds' recognition of faces and two nonface object categories matched with faces for a number of attributes: shoes (Experiment 1) and frontal images of cars (Experiments 2 and 3). The inversion effect was present for faces but not shoes at 3 years of age (Experiment 1). Analogous…

  17. Objectivity

    CERN Document Server

    Daston, Lorraine

    2010-01-01

    Objectivity has a history, and it is full of surprises. In Objectivity, Lorraine Daston and Peter Galison chart the emergence of objectivity in the mid-nineteenth-century sciences--and show how the concept differs from its alternatives, truth-to-nature and trained judgment. This is a story of lofty epistemic ideals fused with workaday practices in the making of scientific images. From the eighteenth through the early twenty-first centuries, the images that reveal the deepest commitments of the empirical sciences--from anatomy to crystallography--are those featured in scientific atlases, the compendia that teach practitioners what is worth looking at and how to look at it. Galison and Daston use atlas images to uncover a hidden history of scientific objectivity and its rivals. Whether an atlas maker idealizes an image to capture the essentials in the name of truth-to-nature or refuses to erase even the most incidental detail in the name of objectivity or highlights patterns in the name of trained judgment is a...

  18. Remembering the Specific Visual Details of Presented Objects: Neuroimaging Evidence for Effects of Emotion

    Science.gov (United States)

    Kensinger, Elizabeth A.; Schacter, Daniel L.

    2007-01-01

    Memories can be retrieved with varied amounts of visual detail, and the emotional content of information can influence the likelihood that visual detail is remembered. In the present fMRI experiment (conducted with 19 adults scanned using a 3T magnet), we examined the neural processes that correspond with recognition of the visual details of…

  19. Fragile visual short-term memory is an object-based and location-specific store

    NARCIS (Netherlands)

    Pinto, Y.; Sligte, I.G.; Shapiro, K.L.; Lamme, V.A.F.

    2013-01-01

    Fragile visual short-term memory (FM) is a recently discovered form of visual short-term memory. Evidence suggests that it provides rich and high-capacity storage, like iconic memory, yet it exists, without interference, almost as long as visual working memory. In the present study, we sought to

  20. Hyper-Fractal Analysis: A visual tool for estimating the fractal dimension of 4D objects

    Science.gov (United States)

    Grossu, I. V.; Grossu, I.; Felea, D.; Besliu, C.; Jipa, Al.; Esanu, T.; Bordeianu, C. C.; Stan, E.

    2013-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images and 3D objects (Grossu et al. (2010) [1]). The program was extended for working with four-dimensional objects stored in comma separated values files. This might be of interest in biomedicine, for analyzing the evolution in time of three-dimensional images. New version program summary: Program title: Hyper-Fractal Analysis (Fractal Analysis v03); Catalogue identifier: AEEG_v3_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v3_0.html; Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland; Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 745761; No. of bytes in distributed program, including test data, etc.: 12544491; Distribution format: tar.gz; Programming language: MS Visual Basic 6.0; Computer: PC; Operating system: MS Windows 98 or later; RAM: 100M; Classification: 14; Catalogue identifier of previous version: AEEG_v2_0; Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 831-832; Does the new version supersede the previous version? Yes. Nature of problem: Estimating the fractal dimension of 4D images. Solution method: Optimized implementation of the 4D box-counting algorithm. Reasons for new version: Inspired by existing applications of 3D fractals in biomedicine [3], we extended the optimized version of the box-counting algorithm [1, 2] to the four-dimensional case. This might be of interest in analyzing the evolution in time of 3D images. The box-counting algorithm was extended in order to support 4D objects, stored in comma separated values files. A new form was added for generating 2D, 3D, and 4D test data. The application was tested on 4D objects with known dimension, e.g. the Sierpinski hypertetrahedron gasket, Df=ln(5)/ln(2) (Fig. 1). The algorithm could be extended, with minimum effort, to
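    The original tool is written in Visual Basic 6.0; the following is only a minimal NumPy re-expression of the 4D box-counting idea, not the authors' optimized implementation. It assumes a comma separated values file whose rows are x, y, z, t coordinates of occupied points; the file name and scale list are illustrative.

    import numpy as np

    def box_count(points, n_boxes):
        # Number of occupied boxes when each of the 4 axes is split into n_boxes bins.
        mins = points.min(axis=0)
        spans = np.ptp(points, axis=0) + 1e-12       # avoid division by zero
        idx = np.floor((points - mins) / spans * n_boxes).astype(int)
        idx = np.clip(idx, 0, n_boxes - 1)
        return len(np.unique(idx, axis=0))

    def fractal_dimension(points, scales=(2, 4, 8, 16, 32)):
        # Slope of log(count) versus log(scale) estimates the box-counting dimension.
        counts = [box_count(points, s) for s in scales]
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    if __name__ == "__main__":
        pts = np.loadtxt("object4d.csv", delimiter=",")   # hypothetical file of x,y,z,t rows
        print("Estimated box-counting dimension:", fractal_dimension(pts))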

  1. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  2. Spatial and object-based attention modulates broadband high-frequency responses across the human visual cortical hierarchy.

    Science.gov (United States)

    Davidesco, Ido; Harel, Michal; Ramot, Michal; Kramer, Uri; Kipervasser, Svetlana; Andelman, Fani; Neufeld, Miri Y; Goelman, Gadi; Fried, Itzhak; Malach, Rafael

    2013-01-16

    One of the puzzling aspects in the visual attention literature is the discrepancy between electrophysiological and fMRI findings: whereas fMRI studies reveal strong attentional modulation in the earliest visual areas, single-unit and local field potential studies yielded mixed results. In addition, it is not clear to what extent spatial attention effects extend from early to high-order visual areas. Here we addressed these issues using electrocorticography recordings in epileptic patients. The patients performed a task that allowed simultaneous manipulation of both spatial and object-based attention. They were presented with composite stimuli, consisting of a small object (face or house) superimposed on a large one, and in separate blocks, were instructed to attend one of the objects. We found a consistent increase in broadband high-frequency (30-90 Hz) power, but not in visual evoked potentials, associated with spatial attention starting with V1/V2 and continuing throughout the visual hierarchy. The magnitude of the attentional modulation was correlated with the spatial selectivity of each electrode and its distance from the occipital pole. Interestingly, the latency of the attentional modulation showed a significant decrease along the visual hierarchy. In addition, electrodes placed over high-order visual areas (e.g., fusiform gyrus) showed both effects of spatial and object-based attention. Overall, our results help to reconcile previous observations of discrepancy between fMRI and electrophysiology. They also imply that spatial attention effects can be found both in early and high-order visual cortical areas, in parallel with their stimulus tuning properties.

  3. Object-based encoding in visual working memory: a life span study.

    Science.gov (United States)

    Zhang, Qiong; Shen, Mowei; Tang, Ning; Zhao, Guohua; Gao, Zaifeng

    2013-08-20

    Recent studies on development of visual working memory (VWM) predominantly focus on VWM capacity and spatial-based information filtering in VWM. Here we explored another new aspect of VWM development: object-based encoding (OBE), which refers to the fact that even if one feature dimension is required to be selected into VWM, the other irrelevant dimensions are also extracted. We explored the OBE in children, young adults, and old adults, by probing an "irrelevant-change distracting effect" in which a change of stored irrelevant feature dramatically affects the performance of task-relevant features in a change-detection task. Participants were required to remember two or four simple colored shapes, while color was used as the relevant dimension. We found that changes to irrelevant shapes led to a significant distracting effect across the three age groups in both load conditions; however, children showed a greater degree of OBE than did young and old adults. These results suggest that OBE exists in VWM over the life span (6-67 years), yet continues to develop along with VWM.

  4. Object-based attention benefits reveal selective abnormalities of visual integration in autism.

    Science.gov (United States)

    Falter, Christine M; Grant, Kate C Plaisted; Davis, Greg

    2010-06-01

    A pervasive integration deficit could provide a powerful and elegant account of cognitive processing in autism spectrum disorders (ASD). However, in the case of visual Gestalt grouping, typically assessed by tasks that require participants explicitly to introspect on their own grouping perception, clear evidence for such a deficit remains elusive. To resolve this issue, we adopt an index of Gestalt grouping from the object-based attention literature that does not require participants to assess their own grouping perception. Children with ASD and mental- and chronological-age matched typically developing children (TD) performed speeded orientation discriminations of two diagonal lines. The lines were superimposed on circles that were either grouped together or segmented on the basis of color, proximity or these two dimensions in competition. The magnitude of performance benefits evident for grouped circles, relative to ungrouped circles, provided an index of grouping under various conditions. Children with ASD showed comparable grouping by proximity to the TD group, but reduced grouping by similarity. ASD seems characterized by a selective bias away from grouping by similarity combined with typical levels of grouping by proximity, rather than by a pervasive integration deficit.

  5. Airport object extraction based on visual attention mechanism and parallel line detection

    Science.gov (United States)

    Lv, Jing; Lv, Wen; Zhang, Libao

    2017-10-01

    Target extraction is one of the important aspects of remote sensing image analysis and processing, with wide applications in image compression, target tracking, target recognition and change detection. Among different targets, airports have attracted more and more attention due to their significance in military and civilian applications. In this paper, we propose a novel and reliable airport object extraction model combining a visual attention mechanism and a parallel line detection algorithm. First, a novel saliency analysis model for remote sensing images containing airport regions is proposed to complete statistical saliency feature analysis. The proposed model can precisely extract the most salient region and effectively suppress background interference. Then, prior geometric knowledge is analyzed, and airport runways, which contain two parallel lines of similar length, are detected efficiently. Finally, we use an improved Otsu threshold segmentation method to segment and extract the airport regions from the saliency map of remote sensing images. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in airport detection.
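    The following sketch, using OpenCV in Python, illustrates only the two generic building blocks named above (Otsu threshold segmentation and detection of near-parallel line pairs with the probabilistic Hough transform); it is not the authors' saliency analysis model, and all thresholds are illustrative assumptions.

    import cv2
    import numpy as np

    def candidate_runway_pairs(gray, angle_tol_deg=3.0):
        # Otsu threshold segmentation of an 8-bit grayscale patch (assumed salient region).
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # Edge map plus probabilistic Hough transform for long straight segments.
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=60, maxLineGap=10)
        if lines is None:
            return []

        segs = lines[:, 0, :]                       # each row: x1, y1, x2, y2
        angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1], segs[:, 2] - segs[:, 0]))
        lengths = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])

        pairs = []
        for i in range(len(segs)):
            for j in range(i + 1, len(segs)):
                similar_angle = abs(angles[i] - angles[j]) < angle_tol_deg
                similar_length = abs(lengths[i] - lengths[j]) < 0.2 * max(lengths[i], lengths[j])
                if similar_angle and similar_length:
                    pairs.append((segs[i], segs[j]))   # candidate pair of runway edges
        return pairs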

  6. Visual perception and interception of falling objects: a review of evidence for an internal model of gravity.

    Science.gov (United States)

    Zago, Myrka; Lacquaniti, Francesco

    2005-09-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. However, there are limitations in the visual system that raise questions about the general validity of these theories. Most notably, vision is poorly sensitive to arbitrary accelerations. How then does the brain deal with the motion of objects accelerated by Earth's gravity? Here we review evidence in favor of the view that the brain makes the best estimate about target motion based on visually measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from the expected kinetics in the Earth's gravitational field.
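    A small numerical illustration (not taken from the review) of why an internal model of gravity matters: for a target accelerated by gravity, a first-order time-to-contact estimate based only on the current distance and speed differs noticeably from the true arrival time obtained by assuming constant acceleration. The numbers below are illustrative.

    import math

    g = 9.81    # gravitational acceleration, m/s^2
    z = 2.0     # remaining distance to the interception point, m
    v = 3.0     # current speed toward the interception point, m/s

    # First-order estimate: assumes the target keeps its current velocity.
    ttc_first_order = z / v

    # Gravity-aware estimate: solve z = v*t + 0.5*g*t**2 for t.
    ttc_true = (-v + math.sqrt(v * v + 2.0 * g * z)) / g

    print(f"first-order TTC:   {ttc_first_order:.3f} s")   # about 0.667 s
    print(f"gravity-aware TTC: {ttc_true:.3f} s")          # about 0.402 s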

  7. Cortical activation patterns during long-term memory retrieval of visually or haptically encoded objects and locations.

    Science.gov (United States)

    Stock, Oliver; Röder, Brigitte; Burke, Michael; Bien, Siegfried; Rösler, Frank

    2009-01-01

    The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n=10) or haptically (haptic encoding group, n=10) had to be retrieved from long-term memory. Participants learned associations between auditorily presented words and either meaningless objects or locations in a 3-D space. During the retrieval phase one day later, participants had to decide whether two auditorily presented words shared an association with a common object or location. Thus, perceptual stimulation during retrieval was always equivalent, whereas either visually or haptically encoded object or location associations had to be reactivated. Moreover, the number of associations fanning out from each word varied systematically, enabling a parametric increase of the number of reactivated representations. Recall of visual objects predominantly activated the left superior frontal gyrus and the intraparietal cortex, whereas visually learned locations activated the superior parietal cortex of both hemispheres. Retrieval of haptically encoded material activated the left medial frontal gyrus and the intraparietal cortex in the object condition, and the bilateral superior parietal cortex in the location condition. A direct test for modality-specific effects showed that visually encoded material activated more vision-related areas (BA 18/19) and haptically encoded material more motor and somatosensory-related areas. A conjunction analysis identified supramodal and material-unspecific activations within the medial and superior frontal gyrus and the superior parietal lobe including the intraparietal sulcus. These activation patterns strongly support the idea that code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that comprise sensory and motor processing systems.

  8. The 5-HT2A/1A agonist psilocybin disrupts modal object completion associated with visual hallucinations.

    Science.gov (United States)

    Kometer, Michael; Cahn, B Rael; Andel, David; Carter, Olivia L; Vollenweider, Franz X

    2011-03-01

    Recent findings suggest that the serotonergic system and particularly the 5-HT2A/1A receptors are implicated in visual processing and possibly the pathophysiology of visual disturbances including hallucinations in schizophrenia and Parkinson's disease. To investigate the role of 5-HT2A/1A receptors in visual processing the effect of the hallucinogenic 5-HT2A/1A agonist psilocybin (125 and 250 μg/kg vs. placebo) on the spatiotemporal dynamics of modal object completion was assessed in normal volunteers (n = 17) using visual evoked potential recordings in conjunction with topographic-mapping and source analysis. These effects were then considered in relation to the subjective intensity of psilocybin-induced visual hallucinations quantified by psychometric measurement. Psilocybin dose-dependently decreased the N170 and, in contrast, slightly enhanced the P1 component selectively over occipital electrode sites. The decrease of the N170 was most apparent during the processing of incomplete object figures. Moreover, during the time period of the N170, the overall reduction of the activation in the right extrastriate and posterior parietal areas correlated positively with the intensity of visual hallucinations. These results suggest a central role of the 5-HT2A/1A-receptors in the modulation of visual processing. Specifically, a reduced N170 component was identified as potentially reflecting a key process of 5-HT2A/1A receptor-mediated visual hallucinations and aberrant modal object completion potential. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  9. Memory for Complex Visual Objects but Not for Allocentric Locations during the First Year of Life

    Science.gov (United States)

    Dupierrix, Eve; Hillairet de Boisferon, Anne; Barbeau, Emmanuel; Pascalis, Olivier

    2015-01-01

    Although human infants demonstrate early competence to retain visual information, memory capacities during infancy remain largely undocumented. In three experiments, we used a Visual Paired Comparison (VPC) task to examine abilities to encode identity (Experiment 1) and spatial properties (Experiments 2a and 2b) of unfamiliar complex visual…

  10. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A configural effect in visual short-term memory for features from different parts of an object.

    Science.gov (United States)

    Delvenne, Jean-François; Bruyer, Raymond

    2006-09-01

    Previous studies have shown that change detection performance is improved when the visual display holds features (e.g., a colour and an orientation) that are grouped into different parts of the same object compared to when they are all spatially separated (Xu, 2002a, 2002b). These findings indicate that visual short-term memory (VSTM) encoding can be "object based". Recently, however, it has been demonstrated that changing the orientation of an item could affect the spatial configuration of the display (Jiang, Chun, & Olson, 2004), which may have an important influence on change detection. The perceptual grouping of features into an object obviously reduces the amount of distinct spatial relations in a display and hence the complexity of the spatial configuration. In the present study, we ask whether the object-based encoding benefit observed in previous studies may reflect the use of configural coding rather than the outcome of a true object-based effect. The results show that when configural cues are removed, the object-based encoding benefit remains for features (i.e., colour and orientation) from different parts of an object, but is significantly reduced. These findings support the view that memory for features from different parts of an object can benefit from object-based encoding, but the use of configural coding significantly helps enlarge this effect.

  12. A new 2-dimensional method for constructing visualized treatment objectives for distraction osteogenesis of the short mandible

    NARCIS (Netherlands)

    van Beek, H.

    2010-01-01

    Open bite development during distraction of the mandible is common and partly due to inaccurate planning of the treatment. Conflicting guidelines exist in the literature. A method for Visualized Treatment Objective (VTO) construction is presented as an aid for determining the correct orientation of

  13. Object-Spatial Visualization and Verbal Cognitive Styles, and Their Relation to Cognitive Abilities and Mathematical Performance

    Science.gov (United States)

    Haciomeroglu, Erhan Selcuk

    2016-01-01

    The present study investigated the object-spatial visualization and verbal cognitive styles among high school students and related differences in spatial ability, verbal-logical reasoning ability, and mathematical performance of those students. Data were collected from 348 students enrolled in Advanced Placement calculus courses at six high…

  14. Multiscale aspects of the visual system and their use for scale invariant object recognition

    NARCIS (Netherlands)

    Petkov, N; vanDeemter, J; Karsch, F; Monien, B; Satz, H

    1997-01-01

    Psychophysical, neuroanatomical and neurophysiological evidence for multiscale aspects of the visual system is considered. The stack model and its relation to the image pyramid are discussed. The results of a straightforward implementation on a parallel supercomputer are presented. The high

  15. A bilateral advantage for maintaining objects in visual short term memory

    OpenAIRE

    Holt, JL; Delvenne, JFCM

    2015-01-01

    Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754–763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling acc...

  16. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    Science.gov (United States)

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  17. Organizational Categories as Viewing Categories

    OpenAIRE

    Mik-Meyer, Nanna

    2005-01-01

    This paper explores how two Danish rehabilitation organizations' textual guidelines for assessment of clients’ personality traits influence the actual evaluation of clients. The analysis will show how staff members produce institutional identities corresponding to organizational categories, which very often have little or no relevance for the clients evaluated. The goal of the article is to demonstrate how the institutional complex that frames the work of the organizations produces the client ...

  18. Detecting changes in real-world objects: The relationship between visual long-term memory and change blindness.

    Science.gov (United States)

    Brady, Timothy F; Konkle, Talia; Oliva, Aude; Alvarez, George A

    2009-01-01

    A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear.1,2 These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low.1 However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item.3 In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).

  19. Visual agnosia for line drawings and silhouettes without apparent impairment of real-object recognition: a case report.

    Science.gov (United States)

    Hiraoka, Kotaro; Suzuki, Kyoko; Hirayama, Kazumi; Mori, Etsuro

    2009-01-01

    We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  20. Visual Agnosia for Line Drawings and Silhouettes without Apparent Impairment of Real-Object Recognition: A Case Report

    Directory of Open Access Journals (Sweden)

    Kotaro Hiraoka

    2009-01-01

    Full Text Available We report on a patient with visual agnosia for line drawings and silhouette pictures following cerebral infarction in the region of the right posterior cerebral artery. The patient retained the ability to recognize real objects and their photographs, and could precisely copy line drawings of objects that she could not name. This case report highlights the importance of clinicians and researchers paying special attention to avoid overlooking agnosia in such cases. The factors that lead to problems in the identification of stimuli other than real objects in agnosic cases are discussed.

  1. VRP09 Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment

    Science.gov (United States)

    2014-10-01

    cortex in response to visual stimuli in the central and peripheral ... defined damage to the retina, optic nerve, visual radiations or visual cortex will be used to study ... tooth to the portable processor or also to a nearby computer. The optical head can be

  2. Discrete capacity limits and neuroanatomical correlates of visual short-term memory for objects and spatial locations.

    Science.gov (United States)

    Konstantinou, Nikos; Constantinidou, Fofi; Kanai, Ryota

    2017-02-01

    Working memory is responsible for keeping information in mind when it is no longer in view, linking perception with higher cognitive functions. Despite such a crucial role, short-term maintenance of visual information is severely limited. Research suggests that capacity limits in visual short-term memory (VSTM) are correlated with sustained activity in distinct brain areas. Here, we investigated whether variability in the structure of the brain is reflected in individual differences of behavioral capacity estimates for spatial and object VSTM. Behavioral capacity estimates were calculated separately for spatial and object information using a novel adaptive staircase procedure and were found to be unrelated, supporting domain-specific VSTM capacity limits. Voxel-based morphometry (VBM) analyses revealed dissociable neuroanatomical correlates of spatial versus object VSTM. Interindividual variability in spatial VSTM was reflected in the gray matter density of the inferior parietal lobule. In contrast, object VSTM was reflected in the gray matter density of the left insula. These dissociable findings highlight the importance of considering domain-specific estimates of VSTM capacity and point to the crucial brain regions that limit VSTM capacity for different types of visual information. Hum Brain Mapp 38:767-778, 2017. © 2016 Wiley Periodicals, Inc.
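    For reference, a capacity estimate commonly used in change-detection studies of VSTM is Cowan's K, computed as set size times the difference between hit rate and false-alarm rate. The paper derives its spatial and object capacity estimates from an adaptive staircase procedure, which this simple formula is not claimed to reproduce; the example values below are illustrative.

    def cowans_k(set_size, hit_rate, false_alarm_rate):
        # Cowan's K: estimated number of items held in visual short-term memory.
        return set_size * (hit_rate - false_alarm_rate)

    # Example: 4 items, 80% hits on change trials, 15% false alarms on no-change trials.
    print(cowans_k(4, 0.80, 0.15))   # -> 2.6 items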

  3. Indoor objects and outdoor urban scenes recognition by 3D visual primitives

    DEFF Research Database (Denmark)

    Fu, Junsheng; Kämäräinen, Joni-Kristian; Buch, Anders Glent

    2014-01-01

    , we propose an alternative appearance-driven approach which first extracts 2D primitives justified by Marr's primal sketch, which are "accumulated" over multiple views and the most stable ones are "promoted" to 3D visual primitives. The 3D promoted primitives represent both structure and appearance

  4. Massive Memory Revisited: Limitations on Storage Capacity for Object Details in Visual Long-Term Memory

    Science.gov (United States)

    Cunningham, Corbin A.; Yassa, Michael A.; Egeth, Howard E.

    2015-01-01

    Previous work suggests that visual long-term memory (VLTM) is highly detailed and has a massive capacity. However, memory performance is subject to the effects of the type of testing procedure used. The current study examines detail memory performance by probing the same memories within the same subjects, but using divergent probing methods. The…

  5. Real-world spatial regularities affect visual working memory for objects

    NARCIS (Netherlands)

    Kaiser, D.; Stein, T.; Peelen, M.V.

    2015-01-01

    Traditional memory research has focused on measuring and modeling the capacity of visual working memory for simple stimuli such as geometric shapes or colored disks. Although these studies have provided important insights, it is unclear how their findings apply to memory for more naturalistic

  6. Neural basis for dynamic updating of object representation in visual working memory.

    Science.gov (United States)

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance that comprised the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with a so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.

  7. Design and implementation of visual object-oriented LOGO using Prograph

    OpenAIRE

    Black, Emily M.; Fall, Thierno

    1994-01-01

    This thesis addresses the problem of how best to teach beginning programmers the necessary skills of object oriented programming. There is no established method of introducing object oriented concepts such as encapsulation, inheritance, and polymorphism, or providing an intuitive progression from simple programs to complex problem solving. The approach was to use two commercially available programming languages which we consider exemplify good object oriented programming techniques, to teach ...

  8. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    Science.gov (United States)

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
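    A simplified sketch of the dictionary structure described above is given below: one dictionary per category plus one dictionary shared across all categories, learned here with ordinary sparse dictionary learning in scikit-learn. The incoherence and self-incoherence constraints of the paper's objective are not implemented in this sketch; function names, parameters, and dictionary sizes are illustrative.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning

    def learn_dictionaries(features_by_category, n_specific=64, n_shared=128):
        # features_by_category: dict mapping category name -> (n_samples, n_dims) array.
        shared = MiniBatchDictionaryLearning(n_components=n_shared,
                                             transform_algorithm="lasso_lars")
        shared.fit(np.vstack(list(features_by_category.values())))   # common visual patterns

        specific = {}
        for cat, X in features_by_category.items():
            dl = MiniBatchDictionaryLearning(n_components=n_specific,
                                             transform_algorithm="lasso_lars")
            specific[cat] = dl.fit(X)                                # subtle per-category patterns
        return shared, specific

    def encode(x, shared, specific_for_cat):
        # Code a feature vector against the concatenated [shared | category-specific] dictionaries.
        code_shared = shared.transform(x.reshape(1, -1))
        code_specific = specific_for_cat.transform(x.reshape(1, -1))
        return np.hstack([code_shared, code_specific]).ravel()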

  9. Massive memory revisited: Limitations on storage capacity for object details in visual long-term memory

    OpenAIRE

    Cunningham, Corbin A.; Yassa, Michael A.; Egeth, Howard E.

    2015-01-01

    Previous work suggests that visual long-term memory (VLTM) is highly detailed and has a massive capacity. However, memory performance is subject to the effects of the type of testing procedure used. The current study examines detail memory performance by probing the same memories within the same subjects, but using divergent probing methods. The results reveal that while VLTM representations are typically sufficient to support performance when the procedure probes gist-based information, they...

  10. Ubiquitous Computing: Using everyday object as ambient visualization tools for persuasive design

    OpenAIRE

    Cahier, Jenny; Gullberg, Eric

    2008-01-01

    In order for companies to survive and advance in today’s competitive society, a massive amount of personal information from citizens is gathered. This thesis investigates how these digital footprints can be obtained and visualized to create awareness about personal actions and encourage change in behavior. In order to decide which data would be interesting and accessible, a map of possible application fields was generated and one single field was chosen for further study. The result is a bus...

  11. Visual Debugging of Object-Oriented Systems With the Unified Modeling Language

    Science.gov (United States)

    2004-03-01

    to be “the systematic and imaginative use of the technology of interactive computer graphics and the disciplines of graphic design, typography ... Traditional debugging involves the user creating a mental image of the structure and execution path based on source code. According to Miller, the 7 ± 2 ... of each FigClass (the class that represents the image of a class), the DOI and LOD for each, and finally calls a method to apply the visual

  12. Brain dynamics of upstream perceptual processes leading to visual object recognition: a high density ERP topographic mapping study.

    Science.gov (United States)

    Schettino, Antonio; Loeys, Tom; Delplanque, Sylvain; Pourtois, Gilles

    2011-04-01

    Recent studies suggest that visual object recognition is a proactive process through which perceptual evidence accumulates over time before a decision can be made about the object. However, the exact electrophysiological correlates and time-course of this complex process remain unclear. In addition, the potential influence of emotion on this process has not been investigated yet. We recorded high density EEG in healthy adult participants performing a novel perceptual recognition task. For each trial, an initial blurred visual scene was first shown, before the actual content of the stimulus was gradually revealed by progressively adding diagnostic high spatial frequency information. Participants were asked to stop this stimulus sequence as soon as they could correctly perform an animacy judgment task. Behavioral results showed that participants reliably gathered perceptual evidence before recognition. Furthermore, prolonged exploration times were observed for pleasant, relative to either neutral or unpleasant scenes. ERP results showed distinct effects starting at 280 ms post-stimulus onset in distant brain regions during stimulus processing, mainly characterized by: (i) a monotonic accumulation of evidence, involving regions of the posterior cingulate cortex/parahippocampal gyrus, and (ii) true categorical recognition effects in medial frontal regions, including the dorsal anterior cingulate cortex. These findings provide evidence for the early involvement, following stimulus onset, of non-overlapping brain networks during proactive processes eventually leading to visual object recognition. Copyright © 2011 Elsevier Inc. All rights reserved.

  13. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  14. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  15. Object-Based Attention on Social Units: Visual Selection of Hands Performing a Social Interaction.

    Science.gov (United States)

    Yin, Jun; Xu, Haokui; Duan, Jipeng; Shen, Mowei

    2018-05-01

    Traditionally, objects of attention are characterized either as full-fledged entities or as elements grouped by Gestalt principles. Because humans appear to use social groups as units to explain social activities, we proposed that a socially defined group, according to social interaction information, would also be a possible object of attentional selection. This hypothesis was examined using displays with and without handshaking interactions. Results demonstrated that object-based attention, which was measured by an object-specific attentional advantage (i.e., shorter response times to targets on a single object), was extended to two hands performing a handshake but not to hands that did not perform meaningful social interactions, even when they did perform handshake-like actions. This finding cannot be attributed to the familiarity of the frequent co-occurrence of two handshaking hands. Hence, object-based attention can select a grouped object whose parts are connected within a meaningful social interaction. This finding implies that object-based attention is constrained by top-down information.

  16. Object Manipulation and Motion Perception: Evidence of an Influence of Action Planning on Visual Processing

    NARCIS (Netherlands)

    Lindemann, O.; Bekkering, H.

    2009-01-01

    In 3 experiments, the authors investigated the bidirectional coupling of perception and action in the context of object manipulations and motion perception. Participants prepared to grasp an X-shaped object along one of its 2 diagonals and to rotate it in a clockwise or a counterclockwise direction.

  17. Convergence semigroup categories

    Directory of Open Access Journals (Sweden)

    Gary Richardson

    2013-09-01

    Full Text Available Properties of the category consisting of all objects of the form (X, S, λ) are investigated, where X is a convergence space, S is a commutative semigroup, and λ: X × S → X is a continuous action. A “generalized quotient” of each object is defined without making the usual assumption that for each fixed g ∈ S, λ(·, g): X → X is an injection.
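
    For readers less familiar with the notation: under the usual convention for an action of a commutative semigroup, which the abstract leaves implicit, the map λ would be expected to satisfy the compatibility condition below, with λ jointly continuous for the convergence structures involved. This is a standard reading, not a statement taken from the paper itself.

        \lambda(\lambda(x, g), h) = \lambda(x, gh) \qquad \text{for all } x \in X, \; g, h \in S .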

  18. Integration of Distinct Objects in Visual Working Memory Depends on Strong Objecthood Cues Even for Different-Dimension Conjunctions.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-05-01

    What makes an integrated object in visual working memory (WM)? Past evidence suggested that WM holds all features of multidimensional objects together, but struggles to integrate color-color conjunctions. This difficulty was previously attributed to a challenge in same-dimension integration, but here we argue that it arises from the integration of 2 distinct objects. To test this, we examined the integration of distinct different-dimension features (a colored square and a tilted bar). We monitored the contralateral delay activity, an event-related potential component sensitive to the number of objects in WM. The results indicated that color and orientation belonging to distinct objects in a shared location were not integrated in WM (Experiment 1), even following a common fate Gestalt cue (Experiment 2). These conjunctions were better integrated in a less demanding task (Experiment 3), and in the original WM task, but with a less individuating version of the original stimuli (Experiment 4). Our results identify the critical factor in WM integration at same- versus separate-objects, rather than at same- versus different-dimensions. Compared with the perfect integration of an object's features, the integration of several objects is demanding, and depends on an interaction between the grouping cues and task demands, among other factors. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. A bilateral advantage for maintaining objects in visual short term memory.

    Science.gov (United States)

    Holt, Jessica L; Delvenne, Jean-François

    2015-01-01

    Research has shown that attentional pre-cues can subsequently influence the transfer of information into visual short term memory (VSTM) (Schmidt, B., Vogel, E., Woodman, G., & Luck, S. (2002). Voluntary and automatic attentional control of visual working memory. Perception & Psychophysics, 64(5), 754-763). However, studies also suggest that those effects are constrained by the hemifield alignment of the pre-cues (Holt, J. L., & Delvenne, J.-F. (2014). A bilateral advantage in controlling access to visual short-term memory. Experimental Psychology, 61(2), 127-133), revealing better recall when distributed across hemifields relative to within a single hemifield (otherwise known as a bilateral field advantage). By manipulating the duration of the retention interval in a colour change detection task (1s, 3s), we investigated whether selective pre-cues can also influence how information is later maintained in VSTM. The results revealed that the pre-cues influenced the maintenance of the colours in VSTM, promoting consistent performance across retention intervals (Experiments 1 & 4). However, those effects were only shown when the pre-cues were directed to stimuli displayed across hemifields relative to stimuli within a single hemifield. Importantly, the results were not replicated when participants were required to memorise colours (Experiment 2) or locations (Experiment 3) in the absence of spatial pre-cues. Those findings strongly suggest that attentional pre-cues have a strong influence on both the transfer of information in VSTM and its subsequent maintenance, allowing bilateral items to better survive decay. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Attention and perceptual implicit memory: effects of selective versus divided attention and number of visual objects.

    Science.gov (United States)

    Mulligan, Neil W

    2002-08-01

    Extant research presents conflicting results on whether manipulations of attention during encoding affect perceptual priming. Two suggested mediating factors are type of manipulation (selective vs divided) and whether attention is manipulated across multiple objects or within a single object. Words printed in different colors (Experiment 1) or flanked by colored blocks (Experiment 2) were presented at encoding. In the full-attention condition, participants always read the word, in the unattended condition they always identified the color, and in the divided-attention conditions, participants attended to both word identity and color. Perceptual priming was assessed with perceptual identification and explicit memory with recognition. Relative to the full-attention condition, attending to color always reduced priming. Dividing attention between word identity and color, however, only disrupted priming when these attributes were presented as multiple objects (Experiment 2) but not when they were dimensions of a common object (Experiment 1). On the explicit test, manipulations of attention always affected recognition accuracy.

  1. Visual Debugging of Object-Oriented Systems With the Unified Modeling Language

    National Research Council Canada - National Science Library

    Fox, Wendell

    2004-01-01

    .... Debugging and analysis tools are required to aid in this process. Debugging of large object-oriented systems is a difficult cognitive process that requires understanding of both the overall and detailed behavior of the application...

  2. Glucose improves object-location binding in visual-spatial working memory.

    Science.gov (United States)

    Stollery, Brian; Christian, Leonie

    2016-02-01

    There is evidence that glucose temporarily enhances cognition and that processes dependent on the hippocampus may be particularly sensitive. As the hippocampus plays a key role in binding processes, we examined the influence of glucose on memory for object-location bindings. Specifically, we examined how glucose modifies performance on an object-location memory task, a task that draws heavily on hippocampal function. Thirty-one participants received 30 g glucose or placebo in a single 1-h session. After seeing between 3 and 10 objects (words or shapes) at different locations in a 9 × 9 matrix, participants attempted to immediately reproduce the display on a blank 9 × 9 matrix. Blood glucose was measured before drink ingestion, mid-way through the session, and at the end of the session. Glucose significantly improves object-location binding (d = 1.08) and location memory (d = 0.83), but not object memory (d = 0.51). Increasing working memory load impairs object memory and object-location binding, and word-location binding is more successful than shape-location binding, but the glucose improvement is robust across all difficulty manipulations. Within the glucose group, higher levels of circulating glucose are correlated with better binding memory and remembering the locations of successfully recalled objects. The glucose improvements identified are consistent with a facilitative impact on hippocampal function. The findings are discussed in the context of the relationship between cognitive processes, hippocampal function, and the implications for glucose's mode of action.

  3. The role of hemifield sector analysis in multifocal visual evoked potential objective perimetry in the early detection of glaucomatous visual field defects

    Directory of Open Access Journals (Sweden)

    Mousa MF

    2013-05-01

    Full Text Available Mohammad F Mousa,1 Robert P Cubbidge,2 Fatima Al-Mansouri,1 Abdulbari Bener3,4 1Department of Ophthalmology, Hamad Medical Corporation, Doha, Qatar; 2School of Life and Health Sciences, Aston University, Birmingham, UK; 3Department of Medical Statistics and Epidemiology, Hamad Medical Corporation, Department of Public Health, Weill Cornell Medical College, Doha, Qatar; 4Department Evidence for Population Health Unit, School of Epidemiology and Health Sciences, University of Manchester, Manchester, UK. Objective: The purpose of this study was to examine the effectiveness of a new analysis method of mfVEP objective perimetry in the early detection of glaucomatous visual field defects compared to the gold standard technique. Methods and patients: Three groups were tested in this study; normal controls (38 eyes), glaucoma patients (36 eyes), and glaucoma suspect patients (38 eyes). All subjects underwent two standard 24-2 visual field tests: one with the Humphrey Field Analyzer and a single mfVEP test in one session. Analysis of the mfVEP results was carried out using the new analysis protocol: the hemifield sector analysis protocol. Results: Analysis of the mfVEP showed that the signal to noise ratio (SNR) difference between superior and inferior hemifields was statistically significant between the three groups (analysis of variance, P < 0.001), with a 95% confidence interval of 2.82, 2.89 for the normal group; 2.25, 2.29 for the glaucoma suspect group; and 1.67, 1.73 for the glaucoma group. The difference between superior and inferior hemifield sectors and hemi-rings was statistically significant in 11/11 pairs of sectors and hemi-rings in the glaucoma patients group (t-test, P < 0.001), statistically significant in 5/11 pairs of sectors and hemi-rings in the glaucoma suspect group (t-test, P < 0.01), and only 1/11 pair was statistically significant (t-test, P < 0.9). The sensitivity and specificity of the hemifield sector analysis protocol in detecting glaucoma was 97% and 86

  4. Visual long-term memory has a massive storage capacity for object details

    OpenAIRE

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.; Oliva, Aude

    2008-01-01

    One of the major lessons of memory research has been that human memory is fallible, imprecise, and subject to interference. Thus, although observers can remember thousands of images, it is widely assumed that these memories lack detail. Contrary to this assumption, here we show that long-term memory is capable of storing a massive number of objects with details from the image. Participants viewed pictures of 2,500 objects over the course of 5.5 h. Afterward, they were shown pairs of images an...

  5. Sleep deprivation impairs object-selective attention: a view from the ventral visual cortex.

    Science.gov (United States)

    Lim, Julian; Tan, Jiat Chow; Parimal, Sarayu; Dinges, David F; Chee, Michael W L

    2010-02-05

    Most prior studies on selective attention in the setting of total sleep deprivation (SD) have focused on behavior or activation within fronto-parietal cognitive control areas. Here, we evaluated the effects of SD on the top-down biasing of activation of ventral visual cortex and on functional connectivity between cognitive control and other brain regions. Twenty-three healthy young adult volunteers underwent fMRI after a normal night of sleep (RW) and after sleep deprivation in a counterbalanced manner while performing a selective attention task. During this task, pictures of houses or faces were randomly interleaved among scrambled images. Across different blocks, volunteers responded to house but not face pictures, face but not house pictures, or passively viewed pictures without responding. The appearance of task-relevant pictures was unpredictable in this paradigm. SD resulted in less accurate detection of target pictures without affecting the mean false alarm rate or response time. In addition to a reduction of fronto-parietal activation, attending to houses strongly modulated parahippocampal place area (PPA) activation during RW, but this attention-driven biasing of PPA activation was abolished following SD. Additionally, SD resulted in a significant decrement in functional connectivity between the PPA and two cognitive control areas, the left intraparietal sulcus and the left inferior frontal lobe. SD impairs selective attention as evidenced by reduced selectivity in PPA activation. Further, reduction in fronto-parietal and ventral visual task-related activation suggests that it also affects sustained attention. Reductions in functional connectivity may be an important additional imaging parameter to consider in characterizing the effects of sleep deprivation on cognition.

  6. Sleep deprivation impairs object-selective attention: a view from the ventral visual cortex.

    Directory of Open Access Journals (Sweden)

    Julian Lim

    Full Text Available BACKGROUND: Most prior studies on selective attention in the setting of total sleep deprivation (SD) have focused on behavior or activation within fronto-parietal cognitive control areas. Here, we evaluated the effects of SD on the top-down biasing of activation of ventral visual cortex and on functional connectivity between cognitive control and other brain regions. METHODOLOGY/PRINCIPAL FINDINGS: Twenty-three healthy young adult volunteers underwent fMRI after a normal night of sleep (RW) and after sleep deprivation in a counterbalanced manner while performing a selective attention task. During this task, pictures of houses or faces were randomly interleaved among scrambled images. Across different blocks, volunteers responded to house but not face pictures, face but not house pictures, or passively viewed pictures without responding. The appearance of task-relevant pictures was unpredictable in this paradigm. SD resulted in less accurate detection of target pictures without affecting the mean false alarm rate or response time. In addition to a reduction of fronto-parietal activation, attending to houses strongly modulated parahippocampal place area (PPA) activation during RW, but this attention-driven biasing of PPA activation was abolished following SD. Additionally, SD resulted in a significant decrement in functional connectivity between the PPA and two cognitive control areas, the left intraparietal sulcus and the left inferior frontal lobe. CONCLUSIONS/SIGNIFICANCE: SD impairs selective attention as evidenced by reduced selectivity in PPA activation. Further, reduction in fronto-parietal and ventral visual task-related activation suggests that it also affects sustained attention. Reductions in functional connectivity may be an important additional imaging parameter to consider in characterizing the effects of sleep deprivation on cognition.

  7. Interference of spoken word recognition through phonological priming from visual objects and printed words.

    Science.gov (United States)

    McQueen, James M; Huettig, Falk

    2014-01-01

    Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures, and even though strategic naming would have interfered with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.

  8. Integrating spherical panoramas and maps for visualization of cultural heritage objects using virtual reality technology

    NARCIS (Netherlands)

    Koeva, M.N.; Luleva, M.I.; Maldjanski, P.

    2017-01-01

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using

  9. Reach on sound: a key to object permanence in visually impaired children.

    Science.gov (United States)

    Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto

    2011-04-01

    The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and provides information about his/her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem"--i.e., they acquired the ability to attribute identity and substance to an external object even when it manifested its presence through sound only--and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities presented poor performances on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem". Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Are Categorical Spatial Relations Encoded by Shifting Visual Attention between Objects?

    Science.gov (United States)

    Yuan, Lei; Uttal, David; Franconeri, Steven

    2016-01-01

    Perceiving not just values, but relations between values, is critical to human cognition. We tested the predictions of a proposed mechanism for processing categorical spatial relations between two objects--the "shift account" of relation processing--which states that relations such as "above" or "below" are extracted…

  11. An integrated approach for visual analysis of a multisource moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; van Hage, W.R.; de Vries, G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  12. An Integrated Approach for Visual Analysis of a Multi-Source Moving Objects Knowledge Base

    NARCIS (Netherlands)

    Willems, C.M.E.; van Hage, W.R.; de Vries, G.K.D.; Janssens, J.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  13. An integrated approach for visual analysis of a multi-source moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; Hage, van W.R.; Vries, de G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  14. Visualization of the ROOT 3D class objects with openInventor-like viewers

    CERN Document Server

    Fine, V; Kulikova, A; Panebrattsev, M

    2004-01-01

    A class library for converting ROOT 3D class objects to the iv format used by 3D image viewers is described in this paper. So far, the library has been tested with the STAR and ATLAS detector geometries without any changes or revisions for a specific detector.

  15. Object Selection Costs in Visual Working Memory: A Diffusion Model Analysis of the Focus of Attention

    Science.gov (United States)

    Sewell, David K.; Lilburn, Simon D.; Smith, Philip L.

    2016-01-01

    A central question in working memory research concerns the degree to which information in working memory is accessible to other cognitive processes (e.g., decision-making). Theories assuming that the focus of attention can only store a single object at a time require the focus to orient to a target representation before further processing can…

  16. Prior knowledge about objects determines neural color representation in human visual cortex

    NARCIS (Netherlands)

    Vandenbroucke, A.R.E.; Fahrenfort, J.J.; Meuwese, J.D.I.; Scholte, H.S.; Lamme, V.A.F.

    2016-01-01

    To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects ( Hansen et al. 2006; Mitterer and

  17. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
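
    The reported prediction can be illustrated with a small regression sketch. Everything below is an illustrative assumption rather than the authors' fitted model: the similarity matrix and response times are random placeholders, and a simple linear form relates 1/RT to the mean similarity of an item to members within and outside its category.

        import numpy as np

        rng = np.random.default_rng(1)
        n_items = 20
        labels = np.array([0] * 10 + [1] * 10)                 # two toy categories

        sim = rng.uniform(0.1, 1.0, size=(n_items, n_items))   # stand-in for visual-search similarities
        sim = (sim + sim.T) / 2

        def predictors(i):
            same = labels == labels[i]
            same[i] = False                                    # exclude the item itself
            within = sim[i, same].mean()                       # mean similarity to own-category items
            between = sim[i, labels != labels[i]].mean()       # mean similarity to other-category items
            return within, between

        X = np.array([predictors(i) for i in range(n_items)])
        rt = rng.uniform(0.4, 0.9, size=n_items)               # placeholder categorization times (s)

        # Linear model relating 1/RT to within- and between-category similarity.
        A = np.column_stack([X, np.ones(n_items)])
        coef, *_ = np.linalg.lstsq(A, 1.0 / rt, rcond=None)
        print(coef)                                            # [w_within, w_between, intercept]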

  18. Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.

    Science.gov (United States)

    Saiki, Jun; Miyatsuji, Hirofumi

    2009-03-23

    Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.

  19. Objectively Measured Patterns of Activities of Different Intensity Categories and Steps Taken Among Working Adults in a Multi-ethnic Asian Population.

    Science.gov (United States)

    Müller-Riemenschneider, Falk; Ng, Sheryl Hui Xian; Koh, David; Chu, Anne Hin Yee

    2016-06-01

    To objectively assess sedentary behavior (SB), light-intensity and moderate-to-vigorous intensity physical activity (MVPA), and steps among Singaporean office-based workers across days of the week. A convenience sample of office-based employees of a public university was recruited. Time spent in SB, light-intensity activity, and MVPA was determined using different validated accelerometry counts-per-minute (CPM) cut-points, together with step counts. Depending on the CPM cut-point applied for SB (less than 100, less than 150, and less than 200 CPM), 107 working adults spent between 69.2% and 76.4% of their daily wakeful time in SB. Time spent in SB and MVPA was higher on weekdays than on weekends. The hourly analysis highlights patterns of greater SB during usual working hours on weekdays but not on weekends. SB at work contributes greatly toward total daily sitting time. Low PA levels and high SB levels were found on weekends.
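
    As a rough illustration of how the cut-points enter such an analysis, the sketch below bins toy counts-per-minute values into intensity categories. The three SB cut-points are the ones compared in the study; the light/MVPA boundary used here (1952 CPM) is an assumed illustrative value, not a figure taken from this paper.

        # Minute-by-minute counts are binned into intensity categories by CPM cut-points.
        def classify_minutes(cpm_values, sb_cut, mvpa_cut=1952):   # mvpa_cut is an assumed value
            categories = []
            for cpm in cpm_values:
                if cpm < sb_cut:
                    categories.append("sedentary")
                elif cpm < mvpa_cut:
                    categories.append("light")
                else:
                    categories.append("mvpa")
            return categories

        minutes = [30, 90, 160, 450, 2500]          # toy counts-per-minute values
        for sb_cut in (100, 150, 200):              # cut-points examined in the study
            counts = classify_minutes(minutes, sb_cut)
            print(sb_cut, counts.count("sedentary"), "of", len(minutes), "minutes sedentary")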

  20. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.
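
    To make the notion of 2.5D or "3D-aware" features concrete, the sketch below builds a toy descriptor by concatenating gradient-orientation histograms computed from an intensity patch and from the matching disparity patch. It is only meant to illustrate the idea of combining appearance and depth cues; it is not the HOG/DPM feature pipeline used in the paper, and the patch data are random placeholders.

        import numpy as np

        def orientation_histogram(channel, bins=9):
            """Gradient-magnitude-weighted histogram of unsigned gradient orientations."""
            gy, gx = np.gradient(channel.astype(float))
            mag = np.hypot(gx, gy)
            ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
            edges = np.linspace(0, np.pi, bins + 1)
            hist = np.histogram(ang, bins=edges, weights=mag)[0]
            return hist / (hist.sum() + 1e-8)

        rng = np.random.default_rng(2)
        intensity = rng.random((64, 64))                   # stand-in for a grayscale patch
        disparity = rng.random((64, 64))                   # stand-in for the matching disparity patch

        # "2.5D" descriptor: appearance gradients concatenated with depth (disparity) gradients.
        feature = np.concatenate([orientation_histogram(intensity),
                                  orientation_histogram(disparity)])
        print(feature.shape)                               # (18,)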

  1. Separate channels for processing form, texture, and color: evidence from FMRI adaptation and visual object agnosia.

    Science.gov (United States)

    Cavina-Pratesi, C; Kentridge, R W; Heywood, C A; Milner, A D

    2010-10-01

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.

  2. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  3. Object representations in visual working memory change according to the task context.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-08-01

    This study investigated whether an item's representation in visual working memory (VWM) can be updated according to changes in the global task context. We used a modified change detection paradigm, in which the items moved before the retention interval. In all of the experiments, we presented identical color-color conjunction items that were arranged to provide a common fate Gestalt grouping cue during their movement. Task context was manipulated by adding a condition highlighting either the integrated interpretation of the conjunction items or their individuated interpretation. We monitored the contralateral delay activity (CDA) as an online marker of VWM. Experiment 1 employed only a minimal global context; the conjunction items were integrated during their movement, but then were partially individuated, at a late stage of the retention interval. The same conjunction items were perfectly integrated in an integration context (Experiment 2). An individuation context successfully produced strong individuation, already during the movement, overriding Gestalt grouping cues (Experiment 3). In Experiment 4, a short priming of the individuation context managed to individuate the conjunction items immediately after the Gestalt cue was no longer available. Thus, the representations of identical items changed according to the task context, suggesting that VWM interprets incoming input according to global factors which can override perceptual cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Steady-state multifocal visual evoked potential (ssmfVEP) using dartboard stimulation as a possible tool for objective visual field assessment.

    Science.gov (United States)

    Horn, Folkert K; Selle, Franziska; Hohberger, Bettina; Kremers, Jan

    2016-02-01

    To investigate whether a conventional, monitor-based multifocal visual evoked potential (mfVEP) system can be used to record steady-state mfVEP (ssmfVEP) in healthy subjects and to study the effects of temporal frequency, electrode configuration and alpha waves. Multifocal pattern reversal VEP measurements were performed at 58 dartboard fields using VEP recording equipment. The responses were measured using m-sequences with four pattern reversals per m-step. Temporal frequencies were varied between 6 and 15 Hz. Recordings were obtained from nine normal subjects with a cross-shaped, four-electrode device (two additional channels were derived). Spectral analyses were performed on the responses at all locations. The signal to noise ratio (SNR) was computed for each response using the signal amplitude at the reversal frequency and the noise at the neighbouring frequencies. Most responses in the ssmfVEP were significantly above noise. The SNR was largest for an 8.6-Hz reversal frequency. The individual alpha electroencephalogram (EEG) did not strongly influence the results. The percentage of the records in which each of the 6 channels had the largest SNR was between 10.0 and 25.2 %. Our results in normal subjects indicate that reliable mfVEP responses can be achieved by steady-state stimulation using a conventional dartboard stimulator and multi-channel electrode device. The ssmfVEP may be useful for objective visual field assessment as spectrum analysis can be used for automated evaluation of responses. The optimal reversal frequency is 8.6 Hz. Alpha waves have only a minor influence on the analysis. Future studies must include comparisons with conventional mfVEP and psychophysical visual field tests.
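
    The SNR definition used for automated evaluation (signal amplitude at the reversal frequency divided by the noise at neighbouring frequencies) can be illustrated on synthetic data. In the sketch below the sampling rate, epoch length, signal amplitude and noise level are all assumptions; only the 8.6 Hz reversal frequency comes from the study.

        import numpy as np

        fs = 600.0                                  # assumed sampling rate (Hz)
        f_rev = 8.6                                 # pattern-reversal frequency from the study
        t = np.arange(0, 5.0, 1.0 / fs)
        rng = np.random.default_rng(3)
        response = 2.0 * np.sin(2 * np.pi * f_rev * t) + rng.standard_normal(t.size)  # synthetic ssVEP

        spec = np.abs(np.fft.rfft(response)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        sig_bin = int(np.argmin(np.abs(freqs - f_rev)))                 # bin at the reversal frequency
        neighbours = np.r_[sig_bin - 3:sig_bin - 1, sig_bin + 2:sig_bin + 4]  # nearby noise bins
        snr = spec[sig_bin] / spec[neighbours].mean()
        print(round(snr, 1))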

  5. Multi-objective evolutionary optimization for constructing neural networks for virtual reality visual data mining: application to geophysical prospecting.

    Science.gov (United States)

    Valdés, Julio J; Barton, Alan J

    2007-05-01

    A method for the construction of virtual reality spaces for visual data mining using multi-objective optimization with genetic algorithms on nonlinear discriminant (NDA) neural networks is presented. Two neural network layers (the output and the last hidden) are used for the construction of simultaneous solutions for: (i) a supervised classification of data patterns and (ii) an unsupervised similarity structure preservation between the original data matrix and its image in the new space. A set of spaces are constructed from selected solutions along the Pareto front. This strategy represents a conceptual improvement over spaces computed by single-objective optimization. In addition, genetic programming (in particular gene expression programming) is used for finding analytic representations of the complex mappings generating the spaces (a composition of NDA and orthogonal principal components). The presented approach is domain independent and is illustrated via application to the geophysical prospecting of caves.
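
    A toy stand-in for the two-objective evaluation is sketched below: a nearest-centroid error plays the role of the supervised classification objective, a Sammon-like stress plays the role of similarity-structure preservation, and random linear projections with a non-dominated filter replace the NDA network and the genetic algorithm. None of this is the authors' exact formulation; it only shows how a Pareto set of candidate spaces can be scored on the two criteria.

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.standard_normal((200, 10))                                # toy high-dimensional data
        y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0).astype(int)    # toy two-class labels

        def dists(A, B):
            return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

        D_orig = dists(X, X)

        def objectives(W):
            Z = X @ W                                  # candidate 3-D "virtual reality" space
            centroids = np.stack([Z[y == c].mean(0) for c in (0, 1)])
            err = np.mean(np.argmin(dists(Z, centroids), axis=1) != y)          # (i) classification error
            stress = np.sum((D_orig - dists(Z, Z)) ** 2) / np.sum(D_orig ** 2)  # (ii) distance distortion
            return err, stress

        # Random search stands in for the genetic algorithm; keep the non-dominated solutions.
        candidates = [rng.standard_normal((10, 3)) * 0.5 for _ in range(60)]
        scores = np.array([objectives(W) for W in candidates])
        pareto = [i for i, s in enumerate(scores)
                  if not np.any(np.all(scores <= s, axis=1) & np.any(scores < s, axis=1))]
        print(len(pareto), "non-dominated projections out of", len(candidates))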

  6. Objective assessment of chromatic and achromatic pattern adaptation reveals the temporal response properties of different visual pathways.

    Science.gov (United States)

    Robson, Anthony G; Kulikowski, Janus J

    2012-11-01

    The aim was to investigate the temporal response properties of magnocellular, parvocellular, and koniocellular visual pathways using increment/decrement changes in contrast to elicit visual evoked potentials (VEPs). Static achromatic and isoluminant chromatic gratings were generated on a monitor. Chromatic gratings were modulated along red/green (R/G) or subject-specific tritanopic confusion axes, established using a minimum distinct border criterion. Isoluminance was determined using minimum flicker photometry. Achromatic and chromatic VEPs were recorded to contrast increments and decrements of 0.1 or 0.2 superimposed on the static gratings (masking contrast 0-0.6). Achromatic increment/decrement changes in contrast evoked a percept of apparent motion when the spatial frequency was low; VEPs to such stimuli were positive in polarity and largely unaffected by high levels of static contrast, consistent with transient response mechanisms. VEPs to finer achromatic gratings showed marked attenuation as static contrast was increased. Chromatic VEPs to R/G or tritan chromatic contrast increments were of negative polarity and showed progressive attenuation as static contrast was increased, in keeping with increasing desensitization of the sustained responses of the color-opponent visual pathways. Chromatic contrast decrement VEPs were of positive polarity and less sensitive to pattern adaptation. The relative contribution of sustained/transient mechanisms to achromatic processing is spatial frequency dependent. Chromatic contrast increment VEPs reflect the sustained temporal response properties of parvocellular and koniocellular pathways. Cortical VEPs can provide an objective measure of pattern adaptation and can be used to probe the temporal response characteristics of different visual pathways.

  7. How Fast Do Objects Fall in Visual Memory? Uncovering the Temporal and Spatial Features of Representational Gravity.

    Science.gov (United States)

    De Sá Teixeira, Nuno

    2016-01-01

    Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object's offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth's gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects' location.
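
    One way to read the reported figure of about 3 pixels per two-fold increase in time is as a logarithmic drift, as in the small sketch below; the reference interval t0 and the logarithmic form itself are assumptions made for illustration, not the authors' fitted function.

        import math

        # A possible reading of the reported drift: roughly 3 px of downward displacement
        # for every doubling of the time elapsed until the localization response.
        def downward_drift(t_ms, t0_ms=250.0, px_per_doubling=3.0):
            return px_per_doubling * math.log2(t_ms / t0_ms)

        for t in (250, 500, 1000, 2000):
            print(t, "ms ->", round(downward_drift(t), 1), "px")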

  8. Position Based Visual Servoing control of a Wheelchair Mounter Robotic Arm using Parallel Tracking and Mapping of task objects

    Directory of Open Access Journals (Sweden)

    Alessandro Palla

    2017-05-01

    Full Text Available In the last few years, power wheelchairs have become the only device able to provide autonomy and independence to people with motor skill impairments. In particular, many power wheelchairs feature robotic arms for gesture emulation, such as interacting with objects. However, complex robotic arms often require a joystick to be controlled; this makes the arm hard to control for impaired users. Paradoxically, if the user were able to proficiently control such devices, he would not need them. For that reason, this paper presents a highly autonomous robotic arm, designed to minimize the effort necessary to control the arm. To this end, the arm features an easy-to-use human-machine interface and is controlled by a computer vision algorithm implementing Position Based Visual Servoing (PBVS) control. This was realized by extracting features from the camera and fusing them with the distance from the target, obtained from a proximity sensor. The Parallel Tracking and Mapping (PTAM) algorithm was used to find the 3D position of the task object in the camera reference system. The visual servoing algorithm was implemented on an embedded platform in real time. Each part of the control loop was developed in the Robot Operating System (ROS) environment, which allows the previous algorithms to be implemented as different nodes. Theoretical analysis, simulations and in-system measurements proved the effectiveness of the proposed solution.
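
    The core of a PBVS loop of the kind described can be sketched in a few lines: the estimated 3D position of the task object is compared with the current gripper position and a proportional velocity command is issued. The function below is a simplified illustration with hypothetical gains, frames and limits; it is not the authors' ROS implementation, and the fused PTAM/proximity estimate is assumed to be available as a plain 3D point.

        import numpy as np

        # One proportional position-based visual servoing step: the vision pipeline's
        # 3D target estimate is compared with the gripper position, and a clamped
        # velocity command for the end effector is returned.
        def pbvs_step(target_pos, gripper_pos, gain=0.8, v_max=0.05):
            error = np.asarray(target_pos) - np.asarray(gripper_pos)   # metres, common frame
            v_cmd = gain * error                                       # proportional control law
            speed = np.linalg.norm(v_cmd)
            if speed > v_max:                                          # clamp for safety
                v_cmd *= v_max / speed
            return v_cmd

        print(pbvs_step([0.40, 0.10, 0.25], [0.30, 0.05, 0.20]))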

  9. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    Science.gov (United States)

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  10. Perceptual grouping and attention in visual search for features and for objects.

    Science.gov (United States)

    Treisman, A

    1982-04-01

    This article explores the effects of perceptual grouping on search for targets defined by separate features or by conjunction of features. Treisman and Gelade proposed a feature-integration theory of attention, which claims that in the absence of prior knowledge, the separable features of objects are correctly combined only when focused attention is directed to each item in turn. If items are preattentively grouped, however, attention may be directed to groups rather than to single items whenever no recombination of features within a group could generate an illusory target. This prediction is confirmed: In search for conjunctions, subjects appear to scan serially between groups rather than items. The scanning rate shows little effect of the spatial density of distractors, suggesting that it reflects serial fixations of attention rather than eye movements. Search for features, on the other hand, appears to be independent of perceptual grouping, suggesting that features are detected preattentively. A conjunction target can be camouflaged at the preattentive level by placing it at the boundary between two adjacent groups, each of which shares one of its features. This suggests that preattentive grouping creates separate feature maps within each separable dimension rather than one global configuration.

  11. Ultraviolet continuum variability and visual flickering in the peculiar object MWC 560

    Science.gov (United States)

    Michalitsianos, A. G.; Perez, M.; Shore, S. N.; Maran, S. P.; Karovska, M.; Sonneborn, G.; Webb, J. R.; Barnes, Thomas G., III; Frueh, Marian L.; Oliversen, R. J.

    1993-01-01

    High-speed U-band photometry of the peculiar emission object MWC 560 obtained with ground-based instrumentation, together with V-band photometry obtained with the International Ultraviolet Explorer Fine Error Sensor, indicates that the irregular brightness variations are quasi-periodic. Multiple peaks of relative brightness power indicate statistically significant quasi-periods in the range of 3-35 minutes, superposed on more slowly varying hourly components. We present a preliminary model that explains the minute and hourly time-scale variations in MWC 560 in terms of a velocity-shear instability that arises because a white dwarf magnetosphere impinges on an accretion disk. We also find evidence for Fe II multiplet pseudocontinuum absorption opacity in far-UV spectra of CH Cygni which is also present in MWC 560. Both CH Cyg and MWC 560 may be in an evolutionary stage that is characterized by strong UV continuum opacity which changes significantly during outburst, occurring before they permanently enter the symbiotic nebular emission phase.

  12. Accelerating object detection via a visual-feature-directed search cascade: algorithm and field programmable gate array implementation

    Science.gov (United States)

    Kyrkou, Christos; Theocharides, Theocharis

    2016-07-01

    Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We show how a visual-feature-directed search cascade, composed of motion detection, depth computation, and edge detection, can significantly reduce the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan 6 FPGA platform for face detection indicate a data search reduction of up to 95%, which results in the system being able to process up to 50 images of 1024×768 pixels per second with a significantly reduced number of false positives.
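
    The search-space reduction idea can be imitated in software: cheap gating stages decide which image windows are worth passing to the expensive classifier. The sketch below uses only a motion test and an edge-density test on synthetic frames (the depth stage is omitted, and the classifier is a placeholder); thresholds and window sizes are made-up values, not those of the FPGA design.

        import numpy as np

        rng = np.random.default_rng(5)
        prev = rng.random((768, 1024))                   # previous grayscale frame
        curr = prev.copy()
        curr[300:400, 500:640] = rng.random((100, 140))  # simulated moving region

        def classify(window):
            return False                                 # placeholder for the expensive classifier

        def cascade(prev, curr, win=64, motion_thr=0.05, edge_thr=0.02):
            """Cheap gating stages (motion, edge density) decide which windows reach the
            classifier; the depth stage of the original cascade is omitted here."""
            detections, examined = [], 0
            motion = np.abs(curr - prev)
            gy, gx = np.gradient(curr)
            edges = np.hypot(gx, gy)
            for y in range(0, curr.shape[0] - win + 1, win):
                for x in range(0, curr.shape[1] - win + 1, win):
                    if motion[y:y+win, x:x+win].mean() < motion_thr:
                        continue                         # stage 1: no motion in this window
                    if edges[y:y+win, x:x+win].mean() < edge_thr:
                        continue                         # stage 2: too little structure
                    examined += 1
                    if classify(curr[y:y+win, x:x+win]):
                        detections.append((x, y))
            return detections, examined

        _, examined = cascade(prev, curr)
        print(examined, "of", (768 // 64) * (1024 // 64), "windows reached the classifier")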

  13. Electrical Resistance Tomography for Visualization of Moving Objects Using a Spatiotemporal Total Variation Regularization Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Chen

    2018-05-01

    Full Text Available Electrical resistance tomography (ERT) has been considered as a data collection and image reconstruction method in many multi-phase flow application areas due to its advantages of high speed, low cost and being non-invasive. In order to improve the quality of the reconstructed images, the Total Variation algorithm has attracted considerable attention due to its ability to solve large piecewise and discontinuous conductivity distributions. In industrial processing tomography (IPT), techniques such as ERT have been used to extract important flow measurement information. For a moving object inside a pipe, a velocity profile can be calculated from the cross correlation between signals generated from ERT sensors. Many previous studies have used two sets of 2D ERT measurements based on pixel-pixel cross correlation, which requires two ERT systems. In this paper, a method for carrying out flow velocity measurement using a single ERT system is proposed. A novel spatiotemporal total variation regularization approach is utilised to exploit sparsity both in space and time in 4D, and a voxel-voxel cross correlation method is adopted for measurement of the flow profile. Results show that the velocity profile can be calculated with a single ERT system and that the volume fraction and movement can be monitored using the proposed method. Both semi-dynamic experimental and static simulation studies verify the suitability of the proposed method. For the in-plane velocity profile, a 3D image based on temporal 2D images produces a velocity profile with an error of less than 1%, and a 4D image for 3D velocity profiling shows an error of 4%.
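
    The transit-time principle behind cross-correlation velocimetry can be shown with two synthetic signals: the lag that maximizes their cross-correlation gives the transit time, and velocity follows from the known plane spacing. The frame rate, plane spacing, delay and noise level below are assumptions, and the sketch correlates two 1D signals rather than performing the voxel-voxel correlation on a 4D reconstruction described in the paper.

        import numpy as np

        fs = 100.0          # frame rate of the reconstructed tomograms (Hz) -- assumed
        spacing = 0.05      # axial distance between the two correlated planes (m) -- assumed
        n = 1000
        rng = np.random.default_rng(6)

        upstream = rng.standard_normal(n)                       # conductivity fluctuation at plane 1
        true_delay = 0.12                                       # transit time to recover (s)
        shift = int(true_delay * fs)
        downstream = np.roll(upstream, shift) + 0.1 * rng.standard_normal(n)   # delayed, noisy copy

        # Transit time = lag that maximizes the cross-correlation; velocity = spacing / lag.
        lags = np.arange(-n + 1, n)
        xcorr = np.correlate(downstream, upstream, mode="full")
        tau = lags[np.argmax(xcorr)] / fs
        print("estimated velocity:", round(spacing / tau, 3), "m/s")   # expect about 0.417 m/s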

  14. A Visual Description Based on Concurrent Objects

    Institute of Scientific and Technical Information of China (English)

    黄永忠; 李国巨; 郭金庚

    2001-01-01

    This paper puts forward a visual concurrent programming model based on concurrent objects, which draws on the basic ideas of UML. Class diagrams are used to describe the concurrent classes, shared classes and general classes in SPC++, as well as the relationships among these classes. Through this visual description, the system can generate the code framework automatically.

  15. Neural Networks for Segregation of Multiple Objects: Visual Figure-Ground Separation and Auditory Pitch Perception.

    Science.gov (United States)

    Wyse, Lonce

    An important component of perceptual object recognition is the segmentation into coherent perceptual units of the "blooming buzzing confusion" that bombards the senses. The work presented herein develops neural network models of some key processes of pre-attentive vision and audition that serve this goal. A neural network model, called an FBF (Feature -Boundary-Feature) network, is proposed for automatic parallel separation of multiple figures from each other and their backgrounds in noisy images. Figure-ground separation is accomplished by iterating operations of a Boundary Contour System (BCS) that generates a boundary segmentation of a scene, and a Feature Contour System (FCS) that compensates for variable illumination and fills-in surface properties using boundary signals. A key new feature is the use of the FBF filling-in process for the figure-ground separation of connected regions, which are subsequently more easily recognized. The new CORT-X 2 model is a feed-forward version of the BCS that is designed to detect, regularize, and complete boundaries in up to 50 percent noise. It also exploits the complementary properties of on-cells and off -cells to generate boundary segmentations and to compensate for boundary gaps during filling-in. In the realm of audition, many sounds are dominated by energy at integer multiples, or "harmonics", of a fundamental frequency. For such sounds (e.g., vowels in speech), the individual frequency components fuse, so that they are perceived as one sound source with a pitch at the fundamental frequency. Pitch is integral to separating auditory sources, as well as to speaker identification and speech understanding. A neural network model of pitch perception called SPINET (SPatial PItch NETwork) is developed and used to simulate a broader range of perceptual data than previous spectral models. The model employs a bank of narrowband filters as a simple model of basilar membrane mechanics, spectral on-center off-surround competitive

  16. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    Science.gov (United States)

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. The non-target error rate was higher than the non-memory error rate in both age groups, indicating that VSTM may more often than not have been populated with partial traces of previously presented items. At high memory load, the non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items that have a categorical relationship to presented items interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
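
    The scoring of recall responses into the two error types can be summarized with a small, hypothetical helper (not the authors' analysis code): a response naming a non-cued item from the memory display counts as a non-target error, and a response naming an item absent from the display counts as a non-memory error.

      def score_response(response, cued_object, display_objects):
          """Classify one object-recall response.

          response        : name the participant reported
          cued_object     : object that actually appeared at the cued location
          display_objects : all objects shown in the memory display
          """
          if response == cued_object:
              return "correct"
          if response in display_objects:
              return "non-target error"   # presented, but not at the cued location
          return "non-memory error"       # never presented in this display

      # Hypothetical trial: four objects shown, 'clock' cued, participant says 'shoe'.
      print(score_response("shoe", "clock", ["clock", "shoe", "apple", "key"]))
      # -> non-target error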

  17. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    Science.gov (United States)

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  18. A new 2-dimensional method for constructing visualized treatment objectives for distraction osteogenesis of the short mandible.

    Science.gov (United States)

    van Beek, H

    2010-01-01

    Open bite development during distraction of the mandible is common and partly due to inaccurate planning of the treatment. Conflicting guidelines exist in the literature. A method for Visualized Treatment Objective (VTO) construction is presented as an aid for determining the correct orientation of monodirectional and multidirectional distractors. Distraction on the left and on the right side of the mandible takes place in a parallel manner in order to maintain intercondylar width. It follows that, in the absence of marked asymmetry, the amount of mandibular body distraction, the amount of ramus distraction and (should it apply) the amount of closure of the gonial angle can be derived from a simple 2-dimensional plan. After presurgical orthodontic treatment, a cephalogram is taken and a VTO is constructed that aims at a good occlusion with the enhanced mandible in centric relation, with little or no change of the original position of the rami.

  19. Auditory and phonetic category formation

    NARCIS (Netherlands)

    Goudbeek, Martijn; Cutler, A.; Smits, R.; Swingley, D.; Cohen, Henri; Lefebvre, Claire

    2017-01-01

    Among infants' first steps in language acquisition is learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Research in the visual modality has shown that for adults, some kinds

  20. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. CHURCH, Category, and Speciation

    Directory of Open Access Journals (Sweden)

    Rinderknecht Jakob Karl

    2018-01-01

    The Roman Catholic definition of “church”, especially as applied to groups of Protestant Christians, creates a number of well-known difficulties. The similarly complex category, “species,” provides a model for applying this term so as to neither lose the centrality of certain examples nor draw a hard boundary that rules out border cases. In this way, it can help us to more adequately apply the complex ecclesiology of the Second Vatican Council. This article draws parallels between the understanding of speciation and categorization and the definition of Church since the council. In doing so, it applies the work of cognitive linguists, including George Lakoff, Zoltan Kovecses, Giles Fauconnier and Mark Turner, on categorization. We tend to think of categories as containers into which we sort objects according to essential criteria. However, categories are actually built inductively by making associations between objects. This means that natural categories, including species, are more porous than we assume, but nevertheless bear real meaning about the natural world. Taxonomists dispute the border between “zebras” and “wild asses,” but this distinction arises out of genetic and evolutionary reality; it is not merely arbitrary. Genetic descriptions of species have also led recently to the conviction that there are four species of giraffe, not one. This engagement will ground a vantage point from which the Council's complex ecclesiology can be more easily described so as to authentically integrate its noncompetitive vision vis-a-vis other Christians with its sense of the unique place held by the Catholic Church.

  2. Recurrent processing during object recognition

    Directory of Open Access Journals (Sweden)

    Randall C. O'Reilly

    2013-04-01

    How does the brain learn to recognize objects visually, and perform this difficult feat robustly in the face of many sources of ambiguity and variability? We present a computational model based on the biology of the relevant visual pathways that learns to reliably recognize 100 different object categories in the face of naturally occurring variability in location, rotation, size, and lighting. The model exhibits robustness to highly ambiguous, partially occluded inputs. Both the unified, biologically plausible learning mechanism and the robustness to occlusion derive from the role that recurrent connectivity and recurrent processing mechanisms play in the model. Furthermore, this interaction of recurrent connectivity and learning predicts that high-level visual representations should be shaped by error signals from nearby, associated brain areas over the course of visual learning. Consistent with this prediction, we show how semantic knowledge about object categories changes the nature of their learned visual representations, as well as how this representational shift supports the mapping between perceptual and conceptual knowledge. Altogether, these findings support the potential importance of ongoing recurrent processing throughout the brain's visual system and suggest ways in which object recognition can be understood in terms of interactions within and between processes over time.

  3. Value is in the eye of the beholder: early visual cortex codes monetary value of objects during a diverted attention task.

    Science.gov (United States)

    Persichetti, Andrew S; Aguirre, Geoffrey K; Thompson-Schill, Sharon L

    2015-05-01

    A central concern in the study of learning and decision-making is the identification of neural signals associated with the values of choice alternatives. An important factor in understanding the neural correlates of value is the representation of the object itself, separate from the act of choosing. Is it the case that the representation of an object within visual areas will change if it is associated with a particular value? We used fMRI adaptation to measure the neural similarity of a set of novel objects before and after participants learned to associate monetary values with the objects. We used a range of both positive and negative values to allow us to distinguish effects of behavioral salience (i.e., large vs. small values) from effects of valence (i.e., positive vs. negative values). During the scanning session, participants made a perceptual judgment unrelated to value. Crucially, the similarity of the visual features of any pair of objects did not predict the similarity of their value, so we could distinguish adaptation effects due to each dimension of similarity. Within early visual areas, we found that value similarity modulated the neural response to the objects after training. These results show that an abstract dimension, in this case, monetary value, modulates neural response to an object in visual areas of the brain even when attention is diverted.

  4. Accuracy of Dolphin visual treatment objective (VTO) prediction software on class III patients treated with maxillary advancement and mandibular setback

    Directory of Open Access Journals (Sweden)

    Robert J. Peterman

    2016-06-01

    Background: Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. Methods: This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Results: Dolphin Imaging's software was determined to be accurate within an error range of +/- 2 mm in the X-axis at most landmarks. The lower lip predictions were the most inaccurate. Conclusions: Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be relied upon for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.

  5. The effects of short-term and long-term learning on the responses of lateral intraparietal neurons to visually presented objects.

    Science.gov (United States)

    Sigurdardottir, Heida M; Sheinberg, David L

    2015-07-01

    The lateral intraparietal area (LIP) is thought to play an important role in the guidance of where to look and pay attention. LIP can also respond selectively to differently shaped objects. We sought to understand to what extent short-term and long-term experience with visual orienting determines the responses of LIP to objects of different shapes. We taught monkeys to arbitrarily associate centrally presented objects of various shapes with orienting either toward or away from a preferred spatial location of a neuron. The training could last for less than a single day or for several months. We found that neural responses to objects are affected by such experience, but that the length of the learning period determines how this neural plasticity manifests. Short-term learning affects neural responses to objects, but these effects are only seen relatively late after visual onset; at this time, the responses to newly learned objects resemble those of familiar objects that share their meaning or arbitrary association. Long-term learning affects the earliest bottom-up responses to visual objects. These responses tend to be greater for objects that have been associated with looking toward, rather than away from, LIP neurons' preferred spatial locations. Responses to objects can nonetheless be distinct, although they have been similarly acted on in the past and will lead to the same orienting behavior in the future. Our results therefore indicate that a complete experience-driven override of LIP object responses may be difficult or impossible. We relate these results to behavioral work on visual attention.

  6. Multi-Label Object Categorization Using Histograms of Global Relations

    DEFF Research Database (Denmark)

    Mustafa, Wail; Xiong, Hanchen; Kraft, Dirk

    2015-01-01

    In this paper, we present an object categorization system capable of assigning multiple and related categories for novel objects using multi-label learning. In this system, objects are described using global geometric relations of 3D features. We propose using the Joint SVM method for learning. The experiments are carried out on a dataset of 100 objects belonging to 13 visual and action-related categories. The results indicate that multi-label methods are able to identify the relation between the dependent categories and hence perform categorization accordingly. It is also found that extracting...

  7. Methods and means for building a system of visual images forming in gis of critical important objects protection

    Directory of Open Access Journals (Sweden)

    Mykhailo Vasiukhin

    2013-12-01

    Requirements for the visualization of dynamic scenes in security systems have been increasing in recent years. This calls for the development of methods and tools for visualizing dynamic scenes for monitoring and management in human-operator security systems. The paper presents a map data model that serves as a basis for building real-time map data, together with methods for real-time visualization of moving symbols in the air.

  8. Visual agnosia and focal brain injury.

    Science.gov (United States)

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  9. Contrasting effects of feature-based statistics on the categorisation and basic-level identification of visual objects.

    Science.gov (United States)

    Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K

    2012-03-01

    Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. Copyright © 2011 Elsevier B.V. All rights reserved.
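
    The two feature statistics discussed above can be made concrete with a toy sketch over a binary concept-by-feature matrix (illustrative only, not the norms or measures used in the study): a feature held by few concepts is treated as distinctive, and correlational strength is approximated by how often a concept's features co-occur across the matrix. The cut-off of three concepts is an arbitrary assumption.

      import numpy as np

      def feature_statistics(matrix, distinct_cutoff=3):
          """Per-concept proportion of distinctive features and mean feature co-occurrence.

          matrix          : binary concepts x features array (1 = concept has feature)
          distinct_cutoff : a feature held by fewer than this many concepts counts
                            as distinctive (arbitrary, illustrative threshold)
          """
          matrix = np.asarray(matrix)
          feature_freq = matrix.sum(axis=0)            # number of concepts per feature
          distinctive = feature_freq < distinct_cutoff
          cooc = matrix.T @ matrix                     # pairwise feature co-occurrence counts
          stats = []
          for row in matrix:
              feats = np.flatnonzero(row)
              prop_distinctive = float(distinctive[feats].mean())
              pairs = [cooc[i, j] for i in feats for j in feats if i < j]
              mean_cooc = float(np.mean(pairs)) if pairs else 0.0
              stats.append((prop_distinctive, mean_cooc))
          return stats

      # Toy matrix: 4 concepts x 5 features.
      toy = [[1, 1, 0, 0, 1],
             [1, 1, 0, 1, 0],
             [1, 0, 1, 0, 0],
             [1, 0, 1, 1, 0]]
      print(feature_statistics(toy))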

  10. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    Science.gov (United States)

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    We studied the brain activation patterns in two visual image processing tasks requiring judgements on object construction (FIT task) or object sameness (SAME task). Eight right-handed healthy human subjects (four women and four men) performed the two tasks in a randomized block design while 5-mm, multislice functional images of the whole brain were acquired on a 4-tesla system using blood oxygenation level-dependent (BOLD) contrast. Pairs of objects were picked randomly from a set of 25 oriented fragments of a square and presented to the subjects approximately every 5 sec. In the FIT task, subjects had to indicate, by pushing one of two buttons, whether the two fragments could match to form a perfect square, whereas in the SAME task they had to decide whether they were the same or not. In a control task, preceding and following each of the two tasks above, a single square was presented at the same rate and subjects pushed either of the two keys at random. Functional activation maps were constructed based on a combination of conservative criteria. The areas with activated pixels were identified using Talairach coordinates and anatomical landmarks, and the number of activated pixels was determined for each area. Altogether, 379 pixels were activated. The counts of activated pixels did not differ significantly between the two tasks or between the two genders. However, there were significantly more activated pixels in the left (n = 218) than the right side of the brain (n = 161). Of the 379 activated pixels, 371 were located in the cerebral cortex. The Talairach coordinates of these pixels were analyzed with respect to their overall distribution in the two tasks. These distributions differed significantly between the two tasks. With respect to individual dimensions, the two tasks differed significantly in the anterior-posterior and superior-inferior distributions but not in the left-right (including mediolateral, within the left or right side) distribution. Specifically

  11. Supervised and Unsupervised Learning of Multidimensional Acoustic Categories

    Science.gov (United States)

    Goudbeek, Martijn; Swingley, Daniel; Smits, Roel

    2009-01-01

    Learning to recognize the contrasts of a language-specific phonemic repertoire can be viewed as forming categories in a multidimensional psychophysical space. Research on the learning of distributionally defined visual categories has shown that categories defined over 1 dimension are easy to learn and that learning multidimensional categories is…

  12. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    Science.gov (United States)

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  13. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    Science.gov (United States)

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  14. The Impairing Effect of Mental Fatigue on Visual Sustained Attention under Monotonous Multi-Object Visual Attention Task in Long Durations: An Event-Related Potential Based Study.

    Directory of Open Access Journals (Sweden)

    Zizheng Guo

    The impairing effects of mental fatigue on visual sustained attention were assessed by event-related potentials (ERPs). Subjects performed a dual visual task, which included a continuous tracking task (primary task) and a random signal detection task (secondary task), for 63 minutes nonstop in order to elicit ERPs. During this period, subjective ratings of mental fatigue, behavioral performance measures, and electroencephalograms were recorded for each subject. Comparing data from the first interval (0-25 min) to those of the second, the following phenomena were observed: subjective fatigue ratings increased with time, indicating that performing the tasks led to an increase in mental fatigue levels; reaction times lengthened and accuracy rates decreased in the second interval, indicating that subjects' sustained attention decreased; and in the ERP data, the P3 amplitudes elicited by the random signals decreased while the P3 latencies increased in the second interval. These results suggest that mental fatigue can modulate higher-level cognitive processes, in the sense that fewer attentional resources are allocated to the random stimuli, which slows the evaluation of information and decision making about the stimuli. These findings provide new insights into how mental fatigue affects visual sustained attention and can therefore help in designing countermeasures to prevent accidents caused by low visual sustained attention.
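
    The P3 measures reported above can be illustrated with a short sketch that extracts peak amplitude and latency from an averaged ERP waveform (the 250-600 ms search window and the sampling rate are assumptions, not parameters taken from the study).

      import numpy as np

      def p3_peak(erp_uv, srate_hz, window_ms=(250, 600)):
          """Peak amplitude (microvolts) and latency (ms) of the largest positive
          deflection within the search window of a stimulus-locked average ERP.
          The window and sampling rate are illustrative assumptions."""
          erp = np.asarray(erp_uv, dtype=float)
          start = int(window_ms[0] / 1000 * srate_hz)
          stop = int(window_ms[1] / 1000 * srate_hz)
          idx = start + int(np.argmax(erp[start:stop]))
          return erp[idx], idx / srate_hz * 1000.0

      # Synthetic ERP sampled at 250 Hz with a positive peak near 400 ms.
      t = np.arange(0, 0.8, 1 / 250)
      erp = 6.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
      print(p3_peak(erp, srate_hz=250))   # ~ (6.0, 400.0)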

  15. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    OpenAIRE

    Matusz, P.J.; Thelen, A.; Amrein, S.; Geiser, E.; Anken, J.; Murray, M.M.

    2015-01-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a ...

  16. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper is based on an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) EM-61 data in a 3D volume. This method was used to locate and identify near-surface buried old industrial remains, determining their shape, depth and type (metallic/non-metallic), in a brownfield site. The aim of the study is to illustrate a new approach to integrating the two data sets in a 3D image for monitoring and interpretation of buried remains, and the paper methodically indicates the appropriate amplitude-colour and opacity function constructions for highlighting buried remains in a transparent 3D view. The results showed that the interactive interpretation of the integrated 3D visualization was carried out using generated transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in their true locations. Colour assignments and the formulation of opacity for the data sets were the keys to the integrated 3D visualization and interpretation. This new visualization provided an optimum visual comparison and interpretation of the complex data sets, making it possible to identify and differentiate the metallic and non-metallic remains and to verify the interpretation at exact locations and depths. Therefore, the integrated 3D visualization of the two data sets allowed more successful identification of the buried remains.
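
    The amplitude-to-opacity mapping described above can be illustrated with a minimal transfer-function sketch (assumed ramp shape and thresholds, not the actual settings used in the paper): low-amplitude background is made fully transparent so that only strong reflections, the candidate buried objects, remain visible in the 3D volume.

      import numpy as np

      def opacity_transfer(amplitude, lo=0.3, hi=0.7):
          """Map normalized GPR amplitudes (0..1) to opacity values (0..1).

          Samples below `lo` are fully transparent, samples above `hi` fully
          opaque, with a linear ramp in between. Thresholds are illustrative.
          """
          amp = np.clip(np.asarray(amplitude, dtype=float), 0.0, 1.0)
          return np.clip((amp - lo) / (hi - lo), 0.0, 1.0)

      # A weak background sample stays invisible; a strong anomaly becomes opaque.
      print(opacity_transfer([0.1, 0.5, 0.9]))   # -> [0.  0.5 1. ]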

  17. Categories from scratch

    NARCIS (Netherlands)

    Poss, R.

    2014-01-01

    The concept of category from mathematics happens to be useful to computer programmers in many ways. Unfortunately, all "good" explanations of categories so far have been designed by mathematicians, or at least theoreticians with a strong background in mathematics, and this makes categories

  18. Working memory capacity accounts for the ability to switch between object-based and location-based allocation of visual attention.

    Science.gov (United States)

    Bleckley, M Kathryn; Foster, Jeffrey L; Engle, Randall W

    2015-04-01

    Bleckley, Durso, Crutchfield, Engle, and Khanna (Psychonomic Bulletin & Review, 10, 884-889, 2003) found that visual attention allocation differed between groups high or low in working memory capacity (WMC). High-span, but not low-span, subjects showed an invalid-cue cost during a letter localization task when the letter appeared closer to fixation than the cue, but not when the letter appeared farther from fixation than the cue. This suggests that low-spans allocated attention as a spotlight, whereas high-spans allocated their attention to objects. In this study, we tested whether utilizing object-based visual attention is a resource-limited process that is difficult for low-span individuals. In the first experiment, we tested the use of object-based versus location-based attention in high- and low-span subjects, with half of the subjects completing a demanding secondary load task. Under load, high-spans were no longer able to use object-based visual attention. A second experiment supported the hypothesis that these differences in allocation were due to high-spans using object-based allocation, whereas low-spans used location-based allocation.

  19. The Precategorical Nature of Visual Short-Term Memory

    Science.gov (United States)

    Quinlan, Philip T.; Cohen, Dale J.

    2016-01-01

    We conducted a series of recognition experiments that assessed whether visual short-term memory (VSTM) is sensitive to shared category membership of to-be-remembered (tbr) images of common objects. In Experiment 1 some of the tbr items shared the same basic level category (e.g., hand axe): Such items were no better retained than others. In the…

  20. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  1. Bundles of C*-categories and duality

    OpenAIRE

    Vasselli, Ezio

    2005-01-01

    We introduce the notions of multiplier C*-category and continuous bundle of C*-categories, as the categorical analogues of the corresponding C*-algebraic notions. Every symmetric tensor C*-category with conjugates is a continuous bundle of C*-categories, with base space the spectrum of the C*-algebra associated with the identity object. We classify tensor C*-categories with fibre the dual of a compact Lie group in terms of suitable principal bundles. This also provides a classification for ce...

  2. [Objective assessment of disorders of visual perception following unilateral vestibular loss. Studies of the so-called Dandy symptom].

    Science.gov (United States)

    Stoll, W; Werner, F; Kauffmann, G

    1991-02-01

    Visual ability and compensatory eye movements during defined vertical oscillation were investigated in 20 patients with unilateral lesions of labyrinthine function and in 20 normal subjects. Oscillations were performed at 1 to 1.5 Hz with an amplitude of 5 cm, comparable to the head movements of a running person. In synchrony with this, visual function was tested with Landolt rings. Patients complaining of subjective visual disturbance during walking and running also presented a measurable blurring of vision under test conditions. In addition, eye movements were recorded and classified into three types; however, these eye movements showed no relation to gaze function. Our results suggest that the otolith-ocular reflex may participate in adjusting vertical eye position during vertical stimulation at low frequencies. The visual disturbances in patients with labyrinthine lesions are explained by the "efference copy" first described by von Holst. The efference copy is responsible for the neutralisation of provoked retinal perceptions.

  3. Blocking in Category Learning

    OpenAIRE

    Bott, Lewis; Hoffman, Aaron B.; Murphy, Gregory L.

    2007-01-01

    Many theories of category learning assume that learning is driven by a need to minimize classification error. When there is no classification error, therefore, learning of individual features should be negligible. We tested this hypothesis by conducting three category learning experiments adapted from an associative learning blocking paradigm. Contrary to an error-driven account of learning, participants learned a wide range of information when they learned about categories, and blocking effe...

  4. Category Learning Research in the Interactive Online Environment Second Life

    Science.gov (United States)

    Andrews, Jan; Livingston, Ken; Sturm, Joshua; Bliss, Daniel; Hawthorne, Daniel

    2011-01-01

    The interactive online environment Second Life allows users to create novel three-dimensional stimuli that can be manipulated in a meaningful yet controlled environment. These features suggest Second Life's utility as a powerful tool for investigating how people learn concepts for unfamiliar objects. The first of two studies was designed to establish that cognitive processes elicited in this virtual world are comparable to those tapped in conventional settings, by attempting to replicate the established finding that category learning systematically influences perceived similarity. From the perspective of an avatar, participants navigated a course of unfamiliar three-dimensional stimuli and were trained to classify them into two labeled categories based on two visual features. Participants then gave similarity ratings for pairs of stimuli, and their responses were compared to those of control participants who did not learn the categories. Results indicated significant compression, whereby objects classified together were judged to be more similar by learning participants than by control participants, thus supporting the validity of using Second Life as a laboratory for studying human cognition. A second study used Second Life to test the novel hypothesis that effects of learning on perceived similarity do not depend on the presence of verbal labels for categories. We presented the same stimuli, but participants classified them by selecting between two complex visual patterns designed to be extremely difficult to label. While learning was more challenging in this condition, those who did learn without labels showed a compression effect identical to that found in the first study using verbal labels. Together these studies establish that at least some forms of human learning in Second Life parallel learning in the actual world and thus open the door to future studies that will make greater use of the enriched variety of objects and interactions possible in simulated environments.

  5. Visual object-oriented technology and case-tools of developing the Internet / Intranet-oriented training courses

    Directory of Open Access Journals (Sweden)

    Salaimeh S. A.

    2017-12-01

    New information technologies, modern computers, and LAN/WAN networks enable us to modernize the whole education system. One of the most promising directions for the development of the modern educational system is online education. This paper considers the development of the visual instrumental system PIECE, designed to automate the creation of cross-platform hypermedia training and controlling courses (HTCC).

  6. Visual classification of emphysema heterogeneity compared with objective measurements: HRCT vs spiral CT in candidates for lung volume reduction surgery

    International Nuclear Information System (INIS)

    Cederlund, K.; Hoegberg, S.; Rasmussen, E.; Svane, B.; Bergstrand, L.; Tylen, U.; Aspelin, P.

    2002-01-01

    The aim of this study was to investigate whether spiral CT is superior to high-resolution computed tomography (HRCT) in evaluating the radiological morphology of emphysema, and whether the combination of both CT techniques improves the evaluation in patients undergoing lung volume reduction surgery (LVRS). The material consisted of HRCT (with 2-mm slice thickness) and spiral CT (with 10-mm slice thickness) of 94 candidates for LVRS. Selected image pairs from these examinations were evaluated. Each image pair consisted of one image from the cranial part of the lung and one image from the caudal part. The degree of emphysema in the two images was calculated by computer. The difference between the images determined the degree of heterogeneity. Five classes of heterogeneity were defined. The study was performed by visual classification of 95 image pairs (spiral CT) and 95 image pairs (HRCT) into one of five different classes of emphysema heterogeneity. This visual classification was compared with the computer-based classification. Spiral CT was superior to HRCT with 47% correct classifications of emphysema heterogeneity compared with 40% for HRCT-based classification (p<0.05). The combination of the techniques did not improve the evaluation (42%). Spiral CT is superior to HRCT in determining heterogeneity of emphysema visually, and should be included in the pre-operative CT evaluation of LVRS candidates. (orig.)

  7. Visual classification of emphysema heterogeneity compared with objective measurements: HRCT vs spiral CT in candidates for lung volume reduction surgery

    Energy Technology Data Exchange (ETDEWEB)

    Cederlund, K.; Hoegberg, S.; Rasmussen, E.; Svane, B. [Department of Thoracic Radiology, Karolinska Hospital, Stockholm (Sweden)]; Bergstrand, L. [Department of Radiology, Danderyds Hospital, Danderyd (Sweden)]; Tylen, U. [Department of Radiology, Sahlgrenska University Hospital, Gothenburg (Sweden)]; Aspelin, P. [Department of Radiology, Huddinge University Hospital, Huddinge (Sweden)]

    2002-05-01

    The aim of this study was to investigate whether spiral CT is superior to high-resolution computed tomography (HRCT) in evaluating the radiological morphology of emphysema, and whether the combination of both CT techniques improves the evaluation in patients undergoing lung volume reduction surgery (LVRS). The material consisted of HRCT (with 2-mm slice thickness) and spiral CT (with 10-mm slice thickness) of 94 candidates for LVRS. Selected image pairs from these examinations were evaluated. Each image pair consisted of one image from the cranial part of the lung and one image from the caudal part. The degree of emphysema in the two images was calculated by computer. The difference between the images determined the degree of heterogeneity. Five classes of heterogeneity were defined. The study was performed by visual classification of 95 image pairs (spiral CT) and 95 image pairs (HRCT) into one of five different classes of emphysema heterogeneity. This visual classification was compared with the computer-based classification. Spiral CT was superior to HRCT with 47% correct classifications of emphysema heterogeneity compared with 40% for HRCT-based classification (p<0.05). The combination of the techniques did not improve the evaluation (42%). Spiral CT is superior to HRCT in determining heterogeneity of emphysema visually, and should be included in the pre-operative CT evaluation of LVRS candidates. (orig.)
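
    The slice-wise emphysema and heterogeneity scoring described in these two records can be sketched as follows (a hedged illustration: the density-mask threshold and the per-slice difference measure are assumptions, not the study's actual computer method).

      import numpy as np

      def emphysema_heterogeneity(cranial_hu, caudal_hu, threshold_hu=-950):
          """Score emphysema heterogeneity from one cranial and one caudal CT slice.

          cranial_hu, caudal_hu : 2-D arrays of Hounsfield units (lung pixels only)
          threshold_hu          : density-mask cut-off; -950 HU is a commonly used
                                  value and an assumption here, not the study's setting
          Returns per-slice emphysema extent (%) and their absolute difference,
          taken here as the cranial-caudal heterogeneity index.
          """
          extent_cranial = 100.0 * np.mean(np.asarray(cranial_hu) < threshold_hu)
          extent_caudal = 100.0 * np.mean(np.asarray(caudal_hu) < threshold_hu)
          return extent_cranial, extent_caudal, abs(extent_cranial - extent_caudal)

      # Hypothetical slices: mildly emphysematous cranial lung, more affected caudal lung.
      cranial = np.full((10, 10), -850.0); cranial[0, :3] = -980.0
      caudal = np.full((10, 10), -850.0); caudal[:3, :] = -980.0
      print(emphysema_heterogeneity(cranial, caudal))   # (3.0, 30.0, 27.0)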

  8. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    Science.gov (United States)

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  9. Temporal Limitations in the Effective Binding of Attended Target Attributes in the Mutual Masking of Visual Objects

    Science.gov (United States)

    Hommuk, Karita; Bachmann, Talis

    2009-01-01

    The problem of feature binding has been examined under conditions of distributed attention or with spatially dispersed stimuli. We studied binding by asking whether selective attention to a feature of a masked object enables perceptual access to the other features of that object using conditions in which spatial attention was directed at a single…

  10. Categories and logical syntax

    NARCIS (Netherlands)

    Klev, Ansten Morch

    2014-01-01

    The notions of category and type are here studied through the lens of logical syntax: Aristotle's as well as Kant's categories through the traditional form of proposition `S is P', and modern doctrines of type through the Fregean form of proposition `F(a)', function applied to argument. Topics

  11. Computing color categories

    NARCIS (Netherlands)

    Yendrikhovskij, S.N.; Rogowitz, B.E.; Pappas, T.N.

    2000-01-01

    This paper is an attempt to develop a coherent framework for understanding, modeling, and computing color categories. The main assumption is that the structure of color category systems originates from the statistical structure of the perceived color environment. This environment can be modeled as

  12. Creation and validation of a visual macroscopic hematuria scale for optimal communication and an objective hematuria index.

    Science.gov (United States)

    Wong, Lih-Ming; Chum, Jia-Min; Maddy, Peter; Chan, Steven T F; Travis, Douglas; Lawrentschuk, Nathan

    2010-07-01

    Macroscopic hematuria is a common symptom and sign that is challenging to quantify and describe. The degree of hematuria communicated is variable due to differences in health worker experience combined with the lack of a reliable grading tool. We produced a reliable, standardized visual scale to describe hematuria severity. Our secondary aim was to validate a new laboratory test to quantify hemoglobin in hematuria specimens. Nurses were surveyed to ascertain current hematuria descriptions. Blood and urine were titrated at varying concentrations and digitally photographed in catheter bag tubing. Photos were processed and printed on transparency paper to create a prototype swatch or card showing light, medium, heavy and old hematuria. Using the swatch, 60 samples were rated by nurses and laymen. Interobserver variability was reported using the generalized kappa coefficient of agreement. Specimens were analyzed for hemolysis by measuring optical density at oxyhemoglobin absorption peaks. Interobserver agreement between nurses and laymen was good (kappa = 0.51). A visual scale to grade and communicate hematuria with adequate interobserver agreement is feasible. The test of optical density at oxyhemoglobin absorption peaks is a new method, validated in our study, to quantify hemoglobin in a hematuria specimen. Copyright (c) 2010 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
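
    The optical-density measurement can be related to hemoglobin concentration through the Beer-Lambert law, c = A / (epsilon * l). The helper below is a hedged sketch, not the authors' assay; the extinction coefficient must be taken from published tables, and the value used in the example is only illustrative.

      def hemoglobin_concentration(od_peak, od_baseline, epsilon_l_per_mmol_cm, path_cm=1.0):
          """Estimate hemoglobin concentration (mmol/L) from optical density using
          the Beer-Lambert law, c = A / (epsilon * l).

          od_peak               : absorbance at an oxyhemoglobin absorption peak
          od_baseline           : absorbance at a nearby non-absorbing wavelength
                                  (rough turbidity correction)
          epsilon_l_per_mmol_cm : millimolar extinction coefficient at the chosen
                                  peak, to be taken from published tables
          path_cm               : cuvette path length
          """
          return max(od_peak - od_baseline, 0.0) / (epsilon_l_per_mmol_cm * path_cm)

      # Illustrative numbers only: an absorbance of 0.80 with a 0.05 baseline and an
      # assumed extinction coefficient of 14.6 L/(mmol*cm) gives about 0.05 mmol/L.
      print(hemoglobin_concentration(0.80, 0.05, epsilon_l_per_mmol_cm=14.6))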

  13. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories, the "well generated triangulated categories", and studies their properties. This

  14. A method of formal requirements analysis for NPP I and C systems based on object-oriented visual modeling with SCR

    International Nuclear Information System (INIS)

    Koo, S. R.; Seong, P. H.

    1999-01-01

    In this work, a formal requirements analysis method for Nuclear Power Plant (NPP) I and C systems is suggested. This method uses the Unified Modeling Language (UML) for modeling systems visually and the Software Cost Reduction (SCR) formalism for checking the system models. Since the object-oriented method analyzes a document in terms of the objects in the real system, UML models that use the object-oriented method are useful for understanding problems and for communicating with everyone involved in the project. In order to analyze the requirements more formally, SCR tabular notations are derived from the UML models. To support the flow-through from UML models to SCR specifications, additional syntactic extensions for UML notation and a converting procedure are defined. The combined method has been applied to the Dynamic Safety System (DSS). From this application, three kinds of errors were detected in the existing DSS requirements.

  15. Subjective vs. objective evaluation of gallbladder opacification during oral cholecystography in comparative clinical trials: implications for studies involving visual assessment

    International Nuclear Information System (INIS)

    Fon, G.T.; Hunter, T.B.; Berk, R.N.; Patton, D.D.; Capp, M.P.

    1982-01-01

    Radiographs and CT images taken during oral cholecystography in dogs were interpreted in an independent, blind fashion by three radiologists on two occasions, and the visual assessment of gallbladder density was compared to the actual CT values. While there was significant intra- and inter-observer variation, the mean scores for the observers' interpretations of both radiographs and prints correlated well with the actual CT values (p > 0.05). In five out of six comparisons between first and second readings, the observers gave a lower score on the second reading. The considerable variation reflects the problems inherent in the subjective evaluation of agents that produce small but measurable differences in radiographic density. Studies involving such subjective data have to be carefully designed in order to obtain meaningful results.

  16. Analysis of rare categories

    CERN Document Server

    He, Jingrui

    2012-01-01

    This book focuses on rare category analysis where the majority classes have smooth distributions and the minority classes exhibit the compactness property. It focuses on challenging cases where the support regions of the majority and minority classes overlap.

  17. Consumer Product Category Database

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use...

  18. Cultural differences in the visual processing of meaning: detecting incongruities between background and foreground objects using the N400.

    Science.gov (United States)

    Goto, Sharon G; Ando, Yumi; Huang, Carol; Yee, Alicia; Lewis, Richard S

    2010-06-01

    East Asians have been found to allocate relatively greater attention to background objects, whereas European Americans have been found to allocate relatively greater attention to foreground objects. This is well documented across a variety of cognitive measures. We used a modification of the Ganis and Kutas (2003) N400 event-related potential design to measure the degree to which Asian Americans and European Americans responded to semantic incongruity between target objects and background scenes. As predicted, Asian Americans showed a greater negativity to incongruent trials than to congruent trials. In contrast, European Americans showed no difference in amplitude across the two conditions. Furthermore, smaller magnitude N400 incongruity effects were associated with higher independent self-construal scores. These data suggest that Asian Americans are processing the relationship between foreground and background objects to a greater degree than European Americans, which is consistent with hypothesized greater holistic processing among East Asians. Implications for using neural measures, the role of semantic processing to understand cultural differences in cognition, and the relationship between self construal and neural measures of cognition are discussed.

  19. A Study of the Development of Students' Visualizations of Program State during an Elementary Object-Oriented Programming Course

    Science.gov (United States)

    Sajaniemi, Jorma; Kuittinen, Marja; Tikansalo, Taina

    2008-01-01

    Students' understanding of object-oriented (OO) program execution was studied by asking students to draw a picture of a program state at a specific moment. Students were given minimal instructions on what to include in their drawings in order to see what they considered to be central concepts and relationships in program execution. Three drawing…

  20. Product Category Management Issues

    OpenAIRE

    Żukowska, Joanna

    2011-01-01

    The purpose of the paper is to present the issues related to category management. It includes an overview of category management definitions and of the correct process for exercising it. Moreover, attention is paid to the advantages of brand management and the benefits the supplier and retailer may obtain in this way. The risk element related to this topic is also presented herein.

  1. First-Pass Processing of Value Cues in the Ventral Visual Pathway.

    Science.gov (United States)

    Sasikumar, Dennis; Emeric, Erik; Stuphorn, Veit; Connor, Charles E

    2018-02-19

    Real-world value often depends on subtle, continuously variable visual cues specific to particular object categories, like the tailoring of a suit, the condition of an automobile, or the construction of a house. Here, we used microelectrode recording in behaving monkeys to test two possible mechanisms for category-specific value-cue processing: (1) previous findings suggest that prefrontal cortex (PFC) identifies object categories, and based on category identity, PFC could use top-down attentional modulation to enhance visual processing of category-specific value cues, providing signals to PFC for calculating value, and (2) a faster mechanism would be first-pass visual processing of category-specific value cues, immediately providing the necessary visual information to PFC. This, however, would require learned mechanisms for processing the appropriate cues in a given object category. To test these hypotheses, we trained monkeys to discriminate value in four letter-like stimulus categories. Each category had a different, continuously variable shape cue that signified value (liquid reward amount) as well as other cues that were irrelevant. Monkeys chose between stimuli of different reward values. Consistent with the first-pass hypothesis, we found early signals for category-specific value cues in area TE (the final stage in the monkey ventral visual pathway) beginning 81 ms after stimulus onset, essentially at the start of TE responses. Task-related activity emerged in lateral PFC approximately 40 ms later and consisted mainly of category-invariant value tuning. Our results show that, for familiar, behaviorally relevant object categories, high-level ventral pathway cortex can implement rapid, first-pass processing of category-specific value cues. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. ACCURACY EVALUATION OF THE OBJECT LOCATION VISUALIZATION FOR GEO-INFORMATION AND DISPLAY SYSTEMS OF MANNED AIRCRAFTS NAVIGATION COMPLEXES

    Directory of Open Access Journals (Sweden)

    M. O. Kostishin

    2014-01-01

    The paper addresses the accuracy of object-location display in geographic information systems and in the display systems of manned-aircraft navigation complexes. Application features of liquid-crystal screens with different numbers of vertical and horizontal pixels are considered for displaying geographic information at different scales. Navigation parameter values are displayed on board the aircraft in two ways: a numeric value is shown directly on a multi-color indicator screen, and an object silhouette is formed on a substrate background that graphically represents the area map in the flight zone. Various scales of digital area-map display currently used in the aviation industry are considered. Calculation results are given for the scale interval of one pixel as a function of the liquid-crystal screen specifications and the zoom level of the map display area on the multifunction digital display. The paper also presents experimental results of the accuracy evaluation for the aircraft's area position display, based on data from the satellite navigation system and the inertial navigation system obtained during a flight program run of the real object. On the basis of these calculations, a family of graphs was created for the display error of the object reference-point position on onboard indicators with liquid-crystal screens of different sizes (6"×8", 7.2"×9.6", 9"×12") for two map display scales (0.25 km and 1-2 km). These graphs can be used both to assess the display error of an object's area position in existing navigation systems and to calculate the error when upgrading such systems.
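
    The "scale interval of one pixel" mentioned above follows directly from the physical screen size, the pixel count, and the terrain extent shown at the chosen map scale. The short Python sketch below illustrates that arithmetic with hypothetical input values; it is not the authors' implementation and the parameter names are invented for illustration.

```python
def pixel_scale_interval(screen_width_in, horizontal_pixels, terrain_width_km):
    """Return (on-screen pixel pitch in mm, ground distance per pixel in m).

    screen_width_in   -- physical screen width in inches (hypothetical value)
    horizontal_pixels -- number of pixels across the screen
    terrain_width_km  -- terrain width represented across the full screen at
                         the chosen map display scale (hypothetical value)
    """
    pitch_mm = screen_width_in * 25.4 / horizontal_pixels
    ground_m_per_px = terrain_width_km * 1000.0 / horizontal_pixels
    return pitch_mm, ground_m_per_px


# Example: a 6-inch-wide screen with 800 horizontal pixels showing 2 km of terrain.
print(pixel_scale_interval(6.0, 800, 2.0))  # (0.1905 mm per pixel, 2.5 m per pixel)
```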

  3. Visual long-term memory and change blindness: Different effects of pre- and post-change information on one-shot change detection using meaningless geometric objects.

    Science.gov (United States)

    Nishiyama, Megumi; Kawaguchi, Jun

    2014-11-01

    To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Converging modalities ground abstract categories: the case of politics.

    Science.gov (United States)

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  5. When seeing depends on knowing: adults with Autism Spectrum Conditions show diminished top-down processes in the visual perception of degraded faces but not degraded objects.

    Science.gov (United States)

    Loth, Eva; Gómez, Juan Carlos; Happé, Francesca

    2010-04-01

    Behavioural, neuroimaging and neurophysiological approaches emphasise the active and constructive nature of visual perception, determined not solely by the environmental input, but modulated top-down by prior knowledge. For example, degraded images, which at first appear as meaningless 'blobs', can easily be recognized as, say, a face, after having seen the same image un-degraded. This conscious perception of the fragmented stimuli relies on top-down priming influences from systems involved in attention and mental imagery on the processing of stimulus attributes, and feature-binding [Dolan, R. J., Fink, G. R., Rolls, E., Booth, M., Holmes, A., Frackowiak, R. S. J., et al. (1997). How the brain learns to see objects and faces in an impoverished context. Nature, 389, 596-599]. In Autism Spectrum Conditions (ASC), face processing abnormalities are well-established, but top-down anomalies in various domains have also been shown. Thus, we tested two alternative hypotheses: (i) that people with ASC show overall reduced top-down modulation in visual perception, or (ii) that top-down anomalies affect specifically the perception of faces. Participants were presented with sets of three consecutive images: degraded images (of faces or objects), corresponding or non-corresponding grey-scale photographs, and the same degraded images again. In a passive viewing sequence we compared gaze times (an index of focal attention) on faces/objects vs. background before and after viewers had seen the undegraded photographs. In an active viewing sequence, we compared how many faces/objects were identified pre- and post-exposure. Behavioural and gaze tracking data showed significantly reduced effects of prior knowledge on the conscious perception of degraded faces, but not objects in the ASC group. Implications for future work on the underlying mechanisms, at the cognitive and neurofunctional levels, are discussed. (c) 2009 Elsevier Ltd. All rights reserved.

  6. Curvature and the visual perception of shape: theory on information along object boundaries and the minima rule revisited.

    Science.gov (United States)

    Lim, Ik Soo; Leek, E Charles

    2012-07-01

    Previous empirical studies have shown that information along visual contours is concentrated in regions of high curvature magnitude and that, for closed contours, segments of negative curvature (i.e., concave segments) carry greater perceptual relevance than corresponding regions of positive curvature (i.e., convex segments). Recently, Feldman and Singh (2005, Psychological Review, 112, 243-252) proposed a mathematical derivation to yield information content as a function of curvature along a contour. Here, we highlight several fundamental errors in their derivation and in its associated implementation, which are problematic in both mathematical and psychological senses. Instead, we propose an alternative mathematical formulation for the information measure of contour curvature that addresses these issues. Additionally, unlike previous work, we extend this approach to three-dimensional (3D) shape by providing a formal measure of information content for surface curvature, and we outline a modified version of the minima rule relating to part segmentation using curvature in 3D shape. Copyright 2012 APA, all rights reserved.
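
    For a closed polygonal contour, signed curvature can be approximated by the turning angle at each vertex, and negative values mark the concave segments that the minima rule treats as candidate part boundaries. The sketch below illustrates only this general idea, not the specific information measure proposed in the paper; the example contour is hypothetical.

```python
import numpy as np

def turning_angles(points):
    """Signed turning angle at each vertex of a closed 2D polygon.

    points -- (N, 2) array of vertices ordered counter-clockwise.
    Positive angles indicate convex vertices, negative angles concave ones.
    """
    prev_pts = np.roll(points, 1, axis=0)
    next_pts = np.roll(points, -1, axis=0)
    v_in = points - prev_pts     # incoming edge at each vertex
    v_out = next_pts - points    # outgoing edge at each vertex
    cross = v_in[:, 0] * v_out[:, 1] - v_in[:, 1] * v_out[:, 0]
    dot = (v_in * v_out).sum(axis=1)
    return np.arctan2(cross, dot)  # signed angle between consecutive edges


# Hypothetical contour: a square with one vertex pushed inward (a concavity).
contour = np.array([[0, 0], [2, 0], [2, 2], [1, 1], [0, 2]], dtype=float)
angles = turning_angles(contour)
print("concave vertices:", np.where(angles < 0)[0])  # -> vertex 3, the dent
```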

  7. Stimulus familiarity modulates functional connectivity of the perirhinal cortex and anterior hippocampus during visual discrimination of faces and objects

    Science.gov (United States)

    McLelland, Victoria C.; Chan, David; Ferber, Susanne; Barense, Morgan D.

    2014-01-01

    Recent research suggests that the medial temporal lobe (MTL) is involved in perception as well as in declarative memory. Amnesic patients with focal MTL lesions and semantic dementia patients showed perceptual deficits when discriminating faces and objects. Interestingly, these two patient groups showed different profiles of impairment for familiar and unfamiliar stimuli. For MTL amnesics, the use of familiar relative to unfamiliar stimuli improved discrimination performance. By contrast, patients with semantic dementia—a neurodegenerative condition associated with anterolateral temporal lobe damage—showed no such facilitation from familiar stimuli. Given that the two patient groups had highly overlapping patterns of damage to the perirhinal cortex, hippocampus, and temporal pole, the neuroanatomical substrates underlying their performance discrepancy were unclear. Here, we addressed this question with a multivariate reanalysis of the data presented by Barense et al. (2011), using functional connectivity to examine how stimulus familiarity affected the broader networks with which the perirhinal cortex, hippocampus, and temporal poles interact. In this study, healthy participants were scanned while they performed an odd-one-out perceptual task involving familiar and novel faces or objects. Seed-based analyses revealed that functional connectivity of the right perirhinal cortex and right anterior hippocampus was modulated by the degree of stimulus familiarity. For familiar relative to unfamiliar faces and objects, both right perirhinal cortex and right anterior hippocampus showed enhanced functional correlations with anterior/lateral temporal cortex, temporal pole, and medial/lateral parietal cortex. These findings suggest that in order to benefit from stimulus familiarity, it is necessary to engage not only the perirhinal cortex and hippocampus, but also a network of regions known to represent semantic information. PMID:24624075
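
    Seed-based functional connectivity of the kind reported here is typically computed by correlating the mean time series of a seed region with the time series of every other voxel, separately per condition. The following is a minimal, generic sketch of that step using hypothetical NumPy arrays; it is not the authors' analysis pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between a seed time series and each voxel time series.

    seed_ts  -- (T,) mean time series of the seed region (e.g., a perirhinal ROI)
    voxel_ts -- (T, V) time series of V voxels
    Returns a (V,) vector of correlation coefficients.
    """
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox_z = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    return (seed_z[:, None] * vox_z).mean(axis=0)


# Hypothetical data: 200 time points, 5000 voxels.
rng = np.random.default_rng(0)
conn_map = seed_connectivity(rng.standard_normal(200), rng.standard_normal((200, 5000)))
```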

  8. Structural similarity causes different category-effects depending on task characteristics

    DEFF Research Database (Denmark)

    Gerlach, Christian

    2001-01-01

    It has been suggested that category-specific impairments for natural objects may reflect that natural objects are more globally visually similar than artefacts and therefore more difficult to recognize following brain damage [Aphasiology 13 (1992) 169]. This account has been challenged… difference was found on easy object decision tasks. In experiment 2 an advantage for natural objects was found during object decisions performed under degraded viewing conditions (lateralized stimulus presentation). It is argued that these findings can be accounted for by assuming that natural objects… it is in difficult object decision tasks). However, when viewing conditions are degraded and performance tends to depend on global shape information (carried by low spatial frequency components), natural objects may fare better than artefacts because the global shape of natural objects reveals more of their identity…

  9. First comparative approach to touchscreen-based visual object-location paired-associates learning in humans (Homo sapiens) and a nonhuman primate (Microcebus murinus).

    Science.gov (United States)

    Schmidtke, Daniel; Ammersdörfer, Sandra; Joly, Marine; Zimmermann, Elke

    2018-05-10

    A recent study suggests that a specific, touchscreen-based task on visual object-location paired-associates learning (PAL), the so-called Different PAL (dPAL) task, allows effective translation from animal models to humans. Here, we adapted the task to a nonhuman primate (NHP), the gray mouse lemur, and provide the first evidence for the successful comparative application of the task to humans and NHPs. Young human adults reach the learning criterion after considerably fewer sessions (by an order of magnitude) than young adult NHPs, which is likely due to faster and voluntary rejection of ineffective learning strategies in humans and almost immediate rule generalization. At criterion, however, all human subjects solved the task by either applying a visuospatial rule or, more rarely, by memorizing all possible stimulus combinations and responding correctly based on global visual information. An error-profile analysis in humans and NHPs suggests that successful learning in NHPs is comparably based either on the formation of visuospatial associative links or on more reflexive, visually guided stimulus-response learning. The classification in the NHPs is further supported by an analysis of the individual response latencies, which are considerably higher in NHPs classified as spatial learners. Our results, therefore, support the high translational potential of the standardized, touchscreen-based dPAL task by providing the first empirical and comparable evidence for two different cognitive processes underlying dPAL performance in primates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. A note on thick subcategories of stable derived categories

    OpenAIRE

    Krause, Henning; Stevenson, Greg

    2013-01-01

    For an exact category having enough projective objects, we establish a bijection between thick subcategories containing the projective objects and thick subcategories of the stable derived category. Using this bijection, we classify thick subcategories of finitely generated modules over strict local complete intersections and produce generators for the category of coherent sheaves on a separated Noetherian scheme with an ample family of line bundles.

  11. Models as Relational Categories

    Science.gov (United States)

    Kokkonen, Tommi

    2017-11-01

    Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and provide a way for engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes regarding learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of the common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I will argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I will examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, the ways in which expert-like knowledge develops, and how best to support it, are in need of more research. The discussion and conceptualization of models as relational categories allows discerning students' mental representations of models in terms of evolving relational structures in greater detail than previously done.

  12. Efficient light scattering through thin semi-transparent objects

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    This paper concerns real-time rendering of thin semi-transparent objects. An object in this category could be a piece of cloth, e.g. a curtain. Semi-transparent objects are visualized most correctly using volume rendering techniques. In general, however, such techniques are intractable for real-time… in this new area gives far better results than what is obtainable with a traditional real-time rendering scheme using a constant factor for alpha blending…
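
    The "constant factor for alpha blending" mentioned as the traditional baseline is the standard over-compositing operation, in which the fragment colour is mixed with the colour behind it using a fixed opacity. A minimal sketch of that baseline (plain Python, hypothetical colour values) is given below; it does not reproduce the rendering technique proposed in the paper.

```python
def alpha_blend(src_rgb, dst_rgb, alpha):
    """Standard 'over' blending with a constant opacity factor.

    src_rgb -- colour of the semi-transparent surface (e.g., a curtain fragment)
    dst_rgb -- colour already in the framebuffer behind it
    alpha   -- constant opacity in [0, 1]
    """
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src_rgb, dst_rgb))


# Hypothetical example: a red curtain at 30% opacity over a grey background.
print(alpha_blend((1.0, 0.0, 0.0), (0.5, 0.5, 0.5), 0.3))  # (0.65, 0.35, 0.35)
```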

  13. Categories of transactions

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter discusses the types of wholesale sales made by utilities. The Federal Energy Regulatory Commission (FERC), which regulates inter-utility sales, divides these sales into two broad categories: requirements and coordination. A variety of wholesale sales do not fall neatly into either category. For example, power purchased to replace the Three Mile Island outage is in a sense a reliability purchase, since it is bought on a long-term firm basis to meet basic load requirements. However, it does not fit the traditional model of a sale considered as part of each utility's long-range planning. In addition, this chapter discusses transmission services, with a particular emphasis on wheeling.

  14. Optimal thickness of a monocrystalline object for atomic plane visualization in its image in a high-resolution electron microscope

    International Nuclear Information System (INIS)

    Grishina, T.A.; Sviridova, V.Yu.

    1983-01-01

    Theoretical and experimental investigation of the influence of FCC-lattice crystal (gold, nickel) thickness on the conditions for visualization of atomic plane projections (APP) in the crystal image in a transmission high-resolution electron microscope (THREM) is reported. Results of electron diffraction theory are used for the theoretical investigation. A computational analysis is conducted of the influence of monocrystal thickness and orientation on the conditions for visualizing APP and atomic columns in monocrystal images formed in the THREM in multibeam regimes with inclined and axial illumination. It is shown that, to visualize the atomic column projections in a crystal image formed in the multibeam regime with axial illumination, the optimal thicknesses range from 0.1 ξ_min to 0.25 ξ_min and, at some object orientations, also from 0.8 ξ_min to 0.9 ξ_min, where ξ_min is the minimum extinction length for the given orientation. It is also shown that, to realize the ultimate resolution in multibeam regimes with both inclined and axial illumination, the optimal thickness of the object is 0.63 ξ_min. Satisfactory agreement between theoretical and experimental data is obtained.
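
    The thickness windows quoted above are expressed as fractions of the minimum extinction length ξ_min, so checking whether a given crystal thickness is suitable for atomic-column visualization under axial illumination is simple arithmetic. The sketch below merely restates those reported ranges; the numeric example values are hypothetical.

```python
def in_optimal_window(thickness, xi_min):
    """Check a crystal thickness against the reported optimal ranges.

    thickness, xi_min -- same length units; the windows restate the abstract's
    figures for axial illumination: 0.1-0.25 and 0.8-0.9 of xi_min.
    """
    t = thickness / xi_min
    return 0.1 <= t <= 0.25 or 0.8 <= t <= 0.9


# Hypothetical values: a 12 nm thick crystal with xi_min = 60 nm (t = 0.2).
print(in_optimal_window(12.0, 60.0))  # True
```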

  16. What Makes an Object Memorable?

    KAUST Repository

    Dubey, Rachit; Peterson, Joshua; Khosla, Aditya; Yang, Ming-Hsuan; Ghanem, Bernard

    2016-01-01

    Recent studies on image memorability have shed light on what distinguishes the memorability of different images and the intrinsic and extrinsic properties that make those images memorable. However, a clear understanding of the memorability of specific objects inside an image remains elusive. In this paper, we provide the first attempt to answer the question: what exactly is remembered about an image? We augment both the images and object segmentations from the PASCAL-S dataset with ground truth memorability scores and shed light on the various factors and properties that make an object memorable (or forgettable) to humans. We analyze various visual factors that may influence object memorability (e.g. color, visual saliency, and object categories). We also study the correlation between object and image memorability and find that image memorability is greatly affected by the memorability of its most memorable object. Lastly, we explore the effectiveness of deep learning and other computational approaches in predicting object memorability in images. Our efforts offer a deeper understanding of memorability in general thereby opening up avenues for a wide variety of applications. © 2015 IEEE.
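
    One of the analyses described here correlates image memorability with the memorability of the image's most memorable object. A generic version of that correlation can be sketched as follows with hypothetical score arrays; it is not the authors' code or data.

```python
import numpy as np
from scipy.stats import spearmanr

def image_vs_top_object_memorability(image_scores, object_scores_per_image):
    """Correlate image memorability with the score of each image's top object.

    image_scores            -- (N,) memorability score of each image
    object_scores_per_image -- list of N arrays, one score per segmented object
    """
    top_object = np.array([scores.max() for scores in object_scores_per_image])
    return spearmanr(image_scores, top_object)  # (rho, p-value)


# Hypothetical scores for three images and their segmented objects.
imgs = np.array([0.81, 0.64, 0.72])
objs = [np.array([0.9, 0.4]), np.array([0.5, 0.6, 0.3]), np.array([0.7])]
print(image_vs_top_object_memorability(imgs, objs))
```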

  17. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  18. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
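
    Descriptive visual phrases are defined here as frequently co-occurring visual word pairs, which presupposes a pipeline that first quantizes local descriptors into visual words and then counts word pairs occurring in the same image. The sketch below shows that generic pipeline with scikit-learn and hypothetical descriptor arrays; it is not the paper's DVW/DVP selection procedure.

```python
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=100, seed=0):
    """Quantize pooled local descriptors (M, D) into a visual vocabulary."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(descriptors)

def visual_word_pair_counts(image_descriptor_sets, vocabulary):
    """Count visual word pairs that co-occur within the same image."""
    pair_counts = Counter()
    for descs in image_descriptor_sets:
        words = set(vocabulary.predict(descs))               # visual words present in this image
        pair_counts.update(combinations(sorted(words), 2))   # unordered co-occurring pairs
    return pair_counts


# Hypothetical data: 500 pooled 128-D descriptors for the vocabulary,
# then three "images" with 50 descriptors each.
rng = np.random.default_rng(0)
vocab = build_vocabulary(rng.standard_normal((500, 128)), n_words=20)
images = [rng.standard_normal((50, 128)) for _ in range(3)]
print(visual_word_pair_counts(images, vocab).most_common(5))
```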

  19. Consumer Product Category Database

    Science.gov (United States)

    The Chemical and Product Categories database (CPCat) catalogs the use of over 40,000 chemicals and their presence in different consumer products. The chemical use information is compiled from multiple sources while product information is gathered from publicly available Material Safety Data Sheets (MSDS). EPA researchers are evaluating the possibility of expanding the database with additional product and use information.

  20. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
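
    The joint estimation described here, of mean cue-validity effects together with subject-level variance/covariance components for the same spatial, object, and attraction terms, corresponds to a mixed model with correlated random intercepts and slopes per subject. Below is a minimal Python sketch using statsmodels on simulated data; column names, effect coding, and parameter values are hypothetical, and the original analysis may have used different software and contrast coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate hypothetical trial-level data: 20 subjects, 120 trials each, with
# effect-coded predictors for the spatial, object, and attraction contrasts.
rng = np.random.default_rng(1)
rows = []
for subj in range(20):
    spatial, object_eff, attraction = rng.choice([-0.5, 0.5], size=(3, 120))
    rt = (500 + 30 * spatial + 15 * object_eff - 10 * attraction
          + rng.normal(0, 20, 120) + rng.normal(0, 40))  # trial noise + subject offset
    rows.append(pd.DataFrame({"subject": subj, "rt": rt, "spatial": spatial,
                              "object_eff": object_eff, "attraction": attraction}))
trials = pd.concat(rows, ignore_index=True)

# Fixed effects for the three contrasts plus correlated random intercepts and
# slopes per subject, so the subject-level variance/covariance components are
# estimated jointly with the mean effects.
model = smf.mixedlm("rt ~ spatial + object_eff + attraction", data=trials,
                    groups=trials["subject"],
                    re_formula="~spatial + object_eff + attraction")
print(model.fit().summary())
```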