WorldWideScience

Sample records for learning visual representation

  1. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    Science.gov (United States)

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  2. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  3. Learning Visual Representations for Perception-Action Systems

    DEFF Research Database (Denmark)

    Piater, Justus; Jodogne, Sebastien; Detry, Renaud

    2011-01-01

    We discuss vision as a sensory modality for systems that effect actions in response to perceptions. While the internal representations informed by vision may be arbitrarily complex, we argue that in many cases it is advantageous to link them rather directly to action via learned mappings. These arguments are illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split… … and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success…
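    The RLVC idea sketched above can be caricatured in a few lines. This is a hypothetical stand-in, not the authors' code: the learned classifier that incrementally splits the visual space is replaced by a fixed quantizer, and standard tabular Q-learning runs over the resulting perceptual classes.

```python
import numpy as np

def quantize(visual_input, n_classes=4):
    # Stand-in for RLVC's learned classifier: bin a scalar visual feature
    # into one of n_classes perceptual classes.
    return min(int(visual_input * n_classes), n_classes - 1)

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    # Standard Q-learning backup on the quantized state space.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

Q = np.zeros((4, 2))           # 4 perceptual classes x 2 actions
s = quantize(0.1)              # perceptual class of the current visual input
s_next = quantize(0.9)         # class observed after taking action 1
Q = q_update(Q, s, 1, r=1.0, s_next=s_next)
print(Q[s, 1])                 # 0.1 after one update from zero initialization
```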

  4. The role of visual representation in physics learning: dynamic versus static visualization

    Science.gov (United States)

    Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini

    2017-11-01

    This study aims to examine the role of visual representation in physics learning and to compare the learning outcomes of using dynamic and static visualization media. The study was conducted as a quasi-experiment with a Pretest-Posttest Control Group Design. The sample comprised students of six classes at a State Senior High School in Lampung Province. The experimental class was taught using dynamic visualization media and the control class using static visualization media. Both classes were given a pre-test and post-test with the same instruments. Data were tested with N-gain analysis, a normality test, a homogeneity test and a mean difference test. The results showed a significant increase in mean (N-gain) learning outcomes (p …) … physical phenomena and requires long-term observation.
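    The N-gain analysis mentioned above conventionally refers to Hake's normalized gain. A minimal sketch, assuming the standard formula (the paper's exact scoring is not reproduced here):

```python
def normalized_gain(pre, post, max_score=100.0):
    # Hake's normalized gain: the fraction of the possible
    # pre-to-post improvement that was actually achieved.
    return (post - pre) / (max_score - pre)

print(normalized_gain(40, 70))  # 0.5: half of the possible improvement realized
```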

  5. Exploring Middle School Students' Representational Competence in Science: Development and Verification of a Framework for Learning with Visual Representations

    Science.gov (United States)

    Tippett, Christine Diane

    Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence---an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations---is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning from visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy because it emphasizes distinct aspects of learning with representations that can be addressed through explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas. The third project was a mixed…

  6. Teaching with Concrete and Abstract Visual Representations: Effects on Students' Problem Solving, Problem Representations, and Learning Perceptions

    Science.gov (United States)

    Moreno, Roxana; Ozogul, Gamze; Reisslein, Martin

    2011-01-01

    In 3 experiments, we examined the effects of using concrete and/or abstract visual problem representations during instruction on students' problem-solving practice, near transfer, problem representations, and learning perceptions. In Experiments 1 and 2, novice students learned about electrical circuit analysis with an instructional program that…

  7. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

  8. Constructing visual representations

    DEFF Research Database (Denmark)

    Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh

    2014-01-01

    The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations…

  9. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, and therefore imposes strong sparsity and...
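    A plausible form of such a penalty (the paper's exact definition is not reproduced here): the full l1 slope below a threshold, and a much smaller "leaky" slope above it, so small weights are pushed toward zero more strongly than large ones.

```python
import numpy as np

def leaky_capped_l1(w, theta=1.0, lam=1.0, alpha=0.1):
    # Hedged sketch of a "leaky capped" penalty: slope lam for |w| <= theta,
    # leaky slope alpha * lam above theta (continuous at the threshold).
    a = np.abs(w)
    return lam * np.where(a <= theta, a, theta + alpha * (a - theta)).sum()

print(leaky_capped_l1(np.array([0.5, 2.0])))  # 0.5 + (1.0 + 0.1 * 1.0) = 1.6
```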

  10. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

    With the explosive growth of visual data, it is particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. One of the core aspects of this research line is how to learn robust representations to better describe the data. In this thesis we study the problem of visual image and video understanding and specifi...

  11. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    Science.gov (United States)

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
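    The "unsupervised features plus linear readout" recipe described above can be sketched with off-the-shelf components. This uses scikit-learn's single-layer RBM as a hypothetical stand-in for the paper's deep belief network, on a generic digits dataset rather than Persian characters.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel intensities to [0, 1] for the binary RBM

model = Pipeline([
    # Unsupervised feature layer: fit to the images without using labels.
    ("rbm", BernoulliRBM(n_components=64, n_iter=10, random_state=0)),
    # Linear readout of the learned high-level representation.
    ("readout", LogisticRegression(max_iter=1000)),
])
model.fit(X[:1000], y[:1000])
acc = model.score(X[1000:], y[1000:])
print(acc)  # linear readout accuracy on held-out digits
```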

  12. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    Science.gov (United States)

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning…
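    A toy illustration of CT learning as described above, with assumed parameters: a purely Hebbian update, with no memory trace, binds successive overlapping "retinal" images of a drifting bar onto one output neuron.

```python
import numpy as np

RETINA = 12

def shifted_bar(pos, width=4):
    # Binary image of a bar of the given width at retinal position pos.
    x = np.zeros(RETINA)
    x[pos:pos + width] = 1.0
    return x

w = np.zeros(RETINA)
w[2:6] = 0.1  # small initial preference for the bar at position 2

for pos in range(2, 7):       # the stimulus drifts one pixel per step
    x = shifted_bar(pos)
    y = max(w @ x, 0.0)       # rectified response
    w += 0.5 * y * x          # Hebbian: spatial overlap keeps y > 0 at each shift

# The neuron now responds to the bar at every trained position,
# without any trace term in the learning rule.
print([float(w @ shifted_bar(p)) > 0 for p in range(2, 7)])
```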

  13. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning…

  14. Transformations in the Visual Representation of a Figural Pattern

    Science.gov (United States)

    Montenegro, Paula; Costa, Cecília; Lopes, Bernardino

    2018-01-01

    Multiple representations of a given mathematical object/concept are one of the biggest difficulties encountered by students. The aim of this study is to investigate the impact of the use of visual representations in teaching and learning algebra. In this paper, we analyze the transformations from and to visual representations that were performed…

  15. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Directory of Open Access Journals (Sweden)

    Jannis Born

    Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior…

  16. Picture this: The value of multiple visual representations for student learning of quantum concepts in general chemistry

    Science.gov (United States)

    Allen, Emily Christine

    Mental models for scientific learning are often defined as, "cognitive tools situated between experiments and theories" (Duschl & Grandy, 2012). In learning, these cognitive tools are used to not only take in new information, but to help problem solve in new contexts. Nancy Nersessian (2008) describes a mental model as being "[loosely] characterized as a representation of a system with interactive parts with representations of those interactions. Models can be qualitative, quantitative, and/or simulative (mental, physical, computational)" (p. 63). If conceptual parts used by the students in science education are inaccurate, then the resulting model will not be useful. Students in college general chemistry courses are presented with multiple abstract topics and often struggle to fit these parts into complete models. This is especially true for topics that are founded on quantum concepts, such as atomic structure and molecular bonding taught in college general chemistry. The objectives of this study were focused on how students use visual tools introduced during instruction to reason with atomic and molecular structure, what misconceptions may be associated with these visual tools, and how visual modeling skills may be taught to support students' use of visual tools for reasoning. The research questions for this study follow from Gilbert's (2008) theory that experts use multiple representations when reasoning and modeling a system, and Kozma and Russell's (2005) theory of representational competence levels. This study finds that as students developed greater command of their understanding of abstract quantum concepts, they spontaneously provided additional representations to describe their more sophisticated models of atomic and molecular structure during interviews. 
This suggests that when visual modeling with multiple representations is taught, along with the limitations of the representations, it can assist students in the development of models for reasoning about…

  17. Visual Learning in Application of Integration

    Science.gov (United States)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

    Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of texts, pictures, graphs and animations to hold the attention of learners in order to learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to obtain feedback on the visual representation and students' attitudes towards using visual representation as a learning tool. The questionnaire consists of 3 sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that students found the use of visual representation beneficial in learning the topic.

  18. Visual Literacy and Biochemistry Learning: The role of external representations

    Directory of Open Access Journals (Sweden)

    V.J.S.V. Santos

    2011-04-01

    Full Text Available Visual Literacy can be defined as people’s ability to understand, use, think, learn and express themselves through external representations (ER) in a given subject. This research aims to investigate the development of abilities of ER reading and interpretation by students from a Biochemistry graduate course of the Federal University of São João Del-Rei. To this end, Visual Literacy level was assessed using a questionnaire validated in a previous educational research study. This diagnostic questionnaire was elaborated according to six visual abilities identified as essential for the study of metabolic pathways. The initial statistical analysis of the data collected in this study was carried out using the ANOVA method. The results showed that the questionnaire is adequate for the research and indicated that the level of Visual Literacy related to metabolic processes increased significantly with the progress of the students through the course. There was also an indication of a possible interference in the students’ performance determined by the cutoff score in the university selection process.
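    A one-way ANOVA for this kind of group comparison can be sketched as follows; the group scores below are invented for illustration and are not the study's data.

```python
from scipy.stats import f_oneway

# Hypothetical Visual Literacy scores for students at three course stages.
first_year = [12, 14, 11, 13, 12]
mid_course = [15, 17, 16, 14, 16]
final_year = [19, 21, 20, 18, 20]

# One-way ANOVA: do the group means differ more than within-group noise?
f_stat, p_value = f_oneway(first_year, mid_course, final_year)
print(f_stat > 1, p_value < 0.05)  # a significant difference across stages
```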

  19. Transformation-invariant visual representations in self-organizing spiking neural networks.

    Science.gov (United States)

    Evans, Benjamin D; Stringer, Simon M

    2012-01-01

    The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.
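    A standard additive STDP window of the kind described above can be sketched as follows; the amplitudes and time constant are assumptions, not the model's actual parameters.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # dt = t_post - t_pre in ms. Pre-before-post (dt > 0) potentiates,
    # post-before-pre (dt < 0) depresses, both decaying exponentially
    # with the spike-time lag.
    if dt > 0:
        return a_plus * np.exp(-dt / tau)    # potentiation (LTP)
    return -a_minus * np.exp(dt / tau)       # depression (LTD)

print(stdp_dw(10.0) > 0, stdp_dw(-10.0) < 0)  # True True
```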

  20. Transform-invariant visual representations in self-organizing spiking neural networks

    Directory of Open Access Journals (Sweden)

    Benjamin eEvans

    2012-07-01

    Full Text Available The ventral visual pathway achieves object and face recognition by building transform-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transform-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transform-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP), where the change in synaptic strength is dependent on the relative times of the spikes emitted by the pre- and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using an STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.

  1. How learning might strengthen existing visual object representations in human object-selective cortex.

    Science.gov (United States)

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

    Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Making Connections among Multiple Visual Representations: How Do Sense-Making Skills and Perceptual Fluency Relate to Learning of Chemistry Knowledge?

    Science.gov (United States)

    Rau, Martina A.

    2018-01-01

    To learn content knowledge in science, technology, engineering, and math domains, students need to make connections among visual representations. This article considers two kinds of connection-making skills: (1) "sense-making skills" that allow students to verbally explain mappings among representations and (2) "perceptual…

  3. The Effect of Using a Visual Representation Tool in a Teaching-Learning Sequence for Teaching Newton's Third Law

    Science.gov (United States)

    Savinainen, Antti; Mäkynen, Asko; Nieminen, Pasi; Viiri, Jouni

    2017-01-01

    This paper presents a research-based teaching-learning sequence (TLS) that focuses on the notion of interaction in teaching Newton's third law (N3 law) which is, as earlier studies have shown, a challenging topic for students to learn. The TLS made systematic use of a visual representation tool--an interaction diagram (ID)--highlighting…

  4. The Nature of Experience Determines Object Representations in the Visual System

    Science.gov (United States)

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  5. Object representations in visual memory: evidence from visual illusions.

    Science.gov (United States)

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  6. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
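    The reweighting account mentioned above can be caricatured as a delta rule that re-learns only the readout weights from fixed sensory channels; everything below is an illustrative assumption, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_trials = 8, 2000
signal = rng.standard_normal(n_channels)  # fixed channel tuning to category A

w = np.zeros(n_channels)
errors = []
for _ in range(n_trials):
    label = rng.choice([-1.0, 1.0])       # category on this trial
    x = label * signal + 0.5 * rng.standard_normal(n_channels)  # noisy evidence
    decision = np.tanh(w @ x)
    err = label - decision
    w += 0.01 * err * x                   # delta rule on readout weights only
    errors.append(abs(err))

# Decision error shrinks with practice even though the sensory
# channels themselves never change.
print(np.mean(errors[:200]) > np.mean(errors[-200:]))
```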

  7. Learning STEM Through Integrative Visual Representations

    Science.gov (United States)

    Virk, Satyugjit Singh

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents where the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions, than students presented with isolated animations of these subcomponents. This is because the internal attentional cognitive load of chunking these concepts is greatly reduced and hence the overall cognitive load is less for the integrated visuals group than the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time on task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Results from the free response essay subcomponent subscores did demonstrate significant differences in favor of the integrated group as well as some evidence from the diagram labeling section. Results from free response, short answer and What-If (problem solving), and diagram labeling detailed interrelationship subscores demonstrated the integrated group did indeed learn the extra material they were presented with. 
These data demonstrating that the integrated group learned the extra material they were presented with provide some initial

  8. How Do Students Learn to See Concepts in Visualizations? Social Learning Mechanisms with Physical and Virtual Representations

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    STEM instruction often uses visual representations. To benefit from these, students need to understand how representations show domain-relevant concepts. Yet, this is difficult for students. Prior research shows that physical representations (objects that students manipulate by hand) and virtual representations (objects on a computer screen that…

  9. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. The learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, achieving on average an F-score of 92% and a balanced accuracy of 93%.
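    The pipeline this abstract describes — learn a dictionary of patch features from an image collection, then encode each image as a bag-of-features histogram over that dictionary — can be sketched roughly as follows. This is not the paper's T-RICA method: a plain k-means codebook stands in for the learned features, and the function names, patch size, and parameters are illustrative.

```python
import numpy as np

def extract_patches(image, size=8, stride=8):
    """Collect non-overlapping square patches as flat vectors."""
    patches = []
    h, w = image.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            patches.append(image[i:i + size, j:j + size].ravel())
    return np.array(patches)

def learn_dictionary(patches, k=16, iters=20, seed=0):
    """Tiny k-means codebook as a stand-in for the learned UFL features."""
    rng = np.random.default_rng(seed)
    atoms = patches[rng.choice(len(patches), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each patch to its nearest atom, then re-estimate the atoms
        d = ((patches[:, None, :] - atoms[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            members = patches[labels == c]
            if len(members):
                atoms[c] = members.mean(0)
    return atoms

def bof_histogram(patches, atoms):
    """Encode an image as a normalized histogram of nearest-atom counts."""
    d = ((patches[:, None, :] - atoms[None]) ** 2).sum(-1)
    labels = d.argmin(1)
    hist = np.bincount(labels, minlength=len(atoms)).astype(float)
    return hist / hist.sum()
```

    A classifier would then be trained on these histograms; the paper's contribution is replacing the hand-crafted patch descriptor (SIFT, DCT) with features learned from the data itself.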

  10. What recent research on diagrams suggests about learning with rather than learning from visual representations in science

    Science.gov (United States)

    Tippett, Christine D.

    2016-03-01

    The move from learning science from representations to learning science with representations has many potential and undocumented complexities. This thematic analysis partially explores the trends of representational uses in science instruction, examining 80 research studies on diagram use in science. These studies, published during 2000-2014, were located through searches of journal databases and books. Open coding of the studies identified 13 themes, 6 of which were identified in at least 10% of the studies: eliciting mental models, classroom-based research, multimedia principles, teaching and learning strategies, representational competence, and student agency. A shift in emphasis on learning with rather than learning from representations was evident across the three 5-year intervals considered, mirroring a pedagogical shift from science instruction as transmission of information to constructivist approaches in which learners actively negotiate understanding and construct knowledge. The themes and topics in recent research highlight areas of active interest and reveal gaps that may prove fruitful for further research, including classroom-based studies, the role of prior knowledge, and the use of eye-tracking. The results of the research included in this thematic review of the 2000-2014 literature suggest that both interpreting and constructing representations can lead to better understanding of science concepts.

  11. Dictionary learning in visual computing

    CERN Document Server

    Zhang, Qiang

    2015-01-01

    The last few years have witnessed fast development on dictionary learning approaches for a set of visual computing tasks, largely due to their utilization in developing new techniques based on sparse representation. Compared with conventional techniques employing manually defined dictionaries, such as Fourier Transform and Wavelet Transform, dictionary learning aims at obtaining a dictionary adaptively from the data so as to support optimal sparse representation of the data. In contrast to conventional clustering algorithms like K-means, where a data point is associated with only one cluster c
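    The contrast the abstract draws — k-means ties each data point to a single cluster, whereas a learned dictionary represents it as a sparse combination of several atoms — can be illustrated with a minimal greedy sparse coder. This is a generic orthogonal matching pursuit sketch, not code from the book; the dictionary and sparsity level are illustrative.

```python
import numpy as np

def omp(D, x, n_nonzero=3):
    """Greedy orthogonal matching pursuit: approximate x with a few
    columns (atoms) of dictionary D. Unlike k-means, which assigns x
    to one centroid, several atoms combine to reconstruct x."""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit of x on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z
```

    With an orthonormal dictionary the recovery is exact; dictionary learning goes one step further and optimizes D itself so that such sparse codes fit the data well.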

  12. Associative learning changes cross-modal representations in the gustatory cortex.

    Science.gov (United States)

    Vincis, Roberto; Fontanini, Alfredo

    2016-08-30

    A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood if cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations.

  13. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively.
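    The multiplicative update rules the abstract mentions are the standard way to keep a dictionary and its coefficients non-negative: because each update multiplies by a ratio of non-negative quantities, positivity is preserved automatically. Below is a plain Lee-Seung-style NMF sketch of that mechanism only; the online, multi-modal, and M-estimation parts of OMRNDL are omitted, and all parameter values are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, k=4, iters=200, seed=0, eps=1e-9):
    """Factor V ~= W @ H with non-negativity preserved by multiplicative
    updates: W acts as the dictionary (templates), H as the coefficients."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + eps   # non-negative dictionary
    H = rng.random((k, m)) + eps   # non-negative coefficients
    for _ in range(iters):
        # each factor is scaled by a ratio of non-negative terms,
        # so W and H can never become negative
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```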

  14. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  15. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. 
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations.

  16. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg eLayher

    2014-12-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.

  17. Incidental learning of probability information is differentially affected by the type of visual working memory representation.

    Science.gov (United States)

    van Lamsweerde, Amanda E; Beck, Melissa R

    2015-12-01

    In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory. (c) 2015 APA, all rights reserved).

  18. Improving of Junior High School Visual Thinking Representation Ability in Mathematical Problem Solving by CTL

    Directory of Open Access Journals (Sweden)

    Edy Surya

    2013-01-01

    The students' difficulties that were found lie in understanding the problem, drawing diagrams, reading charts correctly, formal conceptual mathematical understanding, and mathematical problem solving. An appropriate problem representation is the basic way to understand the problem itself and make a plan to solve it. This research used an experimental classroom design with a pretest-posttest control group in order to increase the visual thinking representation ability in mathematical problem solving through a contextual learning approach. The research instruments were a test, observation, and interviews. The contextual approach increased the mathematical representation ability of students in the high, medium, and low initial-ability categories compared to conventional approaches. Keywords: Visual Thinking Representation, Mathematical Problem Solving, Contextual Teaching Learning Approach DOI: http://dx.doi.org/10.22342/jme.4.1.568.113-126

  19. Supporting Fieldwork Learning by Visual Documentation and Reflection

    DEFF Research Database (Denmark)

    Saltofte, Margit

    2017-01-01

    Photos can be used as a supplement to written fieldnotes and as a source for mediating reflection during fieldwork and analysis. As part of a field diary, photos can support the recall of experiences and a reflective distance to the events. Photography, as visual representation, can also lead...... to reflection on learning and knowledge production in the process of learning how to conduct fieldwork. Pictures can open the way for abstractions and hidden knowledge, which might otherwise be difficult to formulate in words. However, writing and written field notes cannot be fully replaced by photos...... the role played by photos in their learning process. For students, photography is an everyday documentation form that can support their memory of field experience and serve as a vehicle for the analysis of data. The article discusses how photos and visual representations support fieldwork learning...

  20. Geometric Hypergraph Learning for Visual Tracking

    OpenAIRE

    Du, Dawei; Qi, Honggang; Wen, Longyin; Tian, Qi; Huang, Qingming; Lyu, Siwei

    2016-01-01

    Graph based representation is widely used in visual tracking field by finding correct correspondences between target parts in consecutive frames. However, most graph based trackers consider pairwise geometric relations between local parts. They do not make full use of the target's intrinsic structure, thereby making the representation easily disturbed by errors in pairwise affinities when large deformation and occlusion occur. In this paper, we propose a geometric hypergraph learning based tr...

  1. A deep learning / neuroevolution hybrid for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshj

    2017-01-01

    This paper presents a deep learning / neuroevolution hybrid approach called DLNE, which allows FPS bots to learn to aim & shoot based only on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature...... representations, while the evolving network takes those features as inputs to infer actions. The results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks trained through evolution....

  2. Hierarchical Representation Learning for Kinship Verification.

    Science.gov (United States)

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model, and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.

  3. Learning Multimodal Deep Representations for Crowd Anomaly Event Detection

    Directory of Open Access Journals (Sweden)

    Shaonian Huang

    2018-01-01

    Anomaly event detection in crowd scenes is extremely important; however, the majority of existing studies merely use hand-crafted features to detect anomalies. In this study, a novel unsupervised deep learning framework is proposed to detect anomaly events in crowded scenes. Specifically, low-level visual features, energy features, and motion map features are simultaneously extracted based on spatiotemporal energy measurements. Three convolutional restricted Boltzmann machines are trained to model the mid-level feature representation of normal patterns. Then a multimodal fusion scheme is utilized to learn the deep representation of crowd patterns. Based on the learned deep representation, a one-class support vector machine model is used to detect anomaly events. The proposed method is evaluated using two available public datasets and compared with state-of-the-art methods. The experimental results show its competitive performance for anomaly event detection in video surveillance.
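    The final step the abstract describes — fit a detector on deep features of normal patterns only, then flag test samples that fall outside that distribution — can be sketched minimally as follows. A centroid-distance scorer stands in here for the paper's one-class SVM, and the class name and quantile threshold are illustrative.

```python
import numpy as np

class NormalPatternScorer:
    """Minimal one-class detector: fit on feature vectors of normal
    patterns only, then flag test vectors whose distance from the
    training centroid exceeds a quantile of the training distances."""

    def fit(self, X, quantile=0.95):
        self.mean = X.mean(axis=0)
        d = np.linalg.norm(X - self.mean, axis=1)
        self.threshold = np.quantile(d, quantile)
        return self

    def predict(self, X):
        d = np.linalg.norm(X - self.mean, axis=1)
        return d > self.threshold  # True = anomaly
```

    A one-class SVM plays the same role but learns a more flexible boundary around the normal data instead of a single distance threshold.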

  4. Mathematical Representation Ability by Using Project Based Learning on the Topic of Statistics

    Science.gov (United States)

    Widakdo, W. A.

    2017-09-01

    Seeing the importance of the role of mathematics in everyday life, mastery of the subject areas of mathematics is a must. Representation ability is one of the fundamental abilities used in mathematics to make connections between abstract ideas and logical thinking in understanding mathematics. The researcher saw a lack of mathematical representation ability and tried to find an alternative solution by using project-based learning. This research used a literature study of books and journal articles to see the importance of mathematical representation ability in mathematics learning and how project-based learning is able to increase this mathematical representation ability on the topic of statistics. The indicators for mathematical representation ability in this research are classified as visual representation (picture, diagram, graph, or table); symbolic representation (mathematical statements, mathematical notation, numerical/algebraic symbols); and verbal representation (written text). This article explains why project-based learning is able to influence students' mathematical representation by using some theories in cognitive psychology, and also shows an example of project-based learning that can be used in teaching statistics, one of the mathematics topics that is very useful for analyzing data.

  5. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  6. Errors of Students Learning With React Strategy in Solving the Problems of Mathematical Representation Ability

    Directory of Open Access Journals (Sweden)

    Delsika Pramata Sari

    2017-06-01

    The purpose of this study was to investigate the errors experienced by students learning with the REACT strategy and with traditional learning in solving problems of mathematical representation ability. This study used a quasi-experimental design with a static-group comparison. The subjects of this study were 47 eighth grade students of a junior high school in Bandung, consisting of two samples. The instrument used was a test to measure students' mathematical representation ability. The reliability coefficient of the mathematical representation ability test was 0.56. The most prominent errors in the mathematical representation ability of students learning with the REACT strategy and with traditional learning were on the indicator of solving problems involving arithmetic symbols (symbolic representation). In addition, errors were also experienced by many students with traditional learning on the indicator of making an image of a real world situation to clarify the problem and facilitate its completion (visual representation).

  7. Advances in visual representation of molecular potentials.

    Science.gov (United States)

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling study and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results by the higher level quantum chemical method. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those by using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  8. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response.

  9. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response.

  10. Efficacy of Simulation-Based Learning of Electronics Using Visualization and Manipulation

    Science.gov (United States)

    Chen, Yu-Lung; Hong, Yu-Ru; Sung, Yao-Ting; Chang, Kuo-En

    2011-01-01

    Software for simulation-based learning of electronics was implemented to help learners understand complex and abstract concepts through observing external representations and exploring concept models. The software comprises modules for visualization and simulative manipulation. Differences in learning performance of using the learning software…

  11. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
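    The visual-tree step the abstract describes — grouping visually similar classes so that each group can get its own task-specific classifier — can be illustrated with a toy grouping over per-class mean feature vectors. This is not the paper's actual tree-learning procedure; the cosine-similarity criterion, threshold, and function name are all illustrative assumptions.

```python
import numpy as np

def group_similar_classes(class_means, threshold=0.9):
    """Toy version of the visual-tree grouping: put classes whose mean
    deep-feature vectors have high cosine similarity into one group,
    so each group can later receive its own node classifier."""
    n = len(class_means)
    M = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    sim = M @ M.T  # pairwise cosine similarities
    groups, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        # greedily absorb all unassigned classes similar enough to class i
        group = [j for j in range(n)
                 if j not in assigned and sim[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups
```

    In the full algorithm, the grouping additionally accounts for learning complexity, the tree has multiple levels, and the deep features and tree classifier are updated jointly.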

  12. Facilitating Mathematical Practices through Visual Representations

    Science.gov (United States)

    Murata, Aki; Stewart, Chana

    2017-01-01

    Effective use of mathematical representation is key to supporting student learning. In "Principles to Actions: Ensuring Mathematical Success for All" (NCTM 2014), "use and connect mathematical representations" is one of the effective Mathematics Teaching Practices. By using different representations, students examine concepts…

  13. Is This Real Life? Is This Just Fantasy?: Realism and Representations in Learning with Technology

    Science.gov (United States)

    Sauter, Megan Patrice

    Students often engage in hands-on activities during science learning; however, financial and practical constraints often limit the availability of these activities. Recent advances in technology have led to increases in the use of simulations and remote labs, which attempt to recreate hands-on science learning via computer. Remote labs and simulations are interesting from a cognitive perspective because they allow for different relations between representations and their referents. Remote labs are unique in that they provide a yoked representation, meaning that the representation of the lab on the computer screen is actually linked to that which it represents: a real scientific device. Simulations merely represent the lab and are not connected to any real scientific devices. However, the type of visual representations used in the lab may modify the effects of the lab technology. The purpose of this dissertation is to examine the relation between representation and technology and its effects on students' psychological experiences when using online science labs. Undergraduates participated in two studies that investigated the relation between technology and representation. In the first study, participants performed either a remote lab or a simulation incorporating one of two visual representations, either a static image or a video of the equipment. Although participants in both lab conditions learned, participants in the remote lab condition had more authentic experiences. However, effects were moderated by the realism of the visual representation. Participants who saw a video were more invested and felt the experience was more authentic. In a second study, participants performed a remote lab and either saw the same video as in the first study, an animation, or the video and an animation. Most participants had an authentic experience because both representations evoked strong feelings of presence. However, participants who saw the video were more likely to believe the…

  14. Visual perception and verbal descriptions as sources for generating mental representations: Evidence from representational neglect.

    Science.gov (United States)

    Denis, Michel; Beschin, Nicoletta; Logie, Robert H; Della Sala, Sergio

    2002-03-01

    In the majority of investigations of representational neglect, patients are asked to report information derived from long-term visual knowledge. In contrast, studies of perceptual neglect involve reporting the contents of relatively novel scenes in the immediate environment. The present study aimed to establish how representational neglect might affect (a) immediate recall of recently perceived, novel visual layouts, and (b) immediate recall of novel layouts presented only as auditory verbal descriptions. These conditions were contrasted with reports from visual perception and a test of immediate recall of verbal material. Data were obtained from 11 neglect patients (9 with representational neglect), 6 right hemisphere lesion control patients with no evidence of neglect, and 15 healthy controls. In the perception, memory following perception, and memory following layout description conditions, the neglect patients showed poorer report of items depicted or described on the left than on the right of each layout. The lateralised error pattern was not evident in the non-neglect patients or healthy controls, and there was no difference among the three groups on immediate verbal memory. One patient showed pure representational neglect, with ceiling performance in the perception condition, but with lateralised errors for memory following perception or following verbal description. Overall, the results indicate that representational neglect does not depend on the presence of perceptual neglect, that visual perception and visual mental representations are less closely linked than has been thought hitherto, and that visuospatial mental representations have similar functional characteristics whether they are derived from visual perception or from auditory linguistic descriptive inputs.

  15. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    Science.gov (United States)

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  16. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    2017-05-30

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  17. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    Full Text Available The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
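
One of the measures mentioned above, representational similarity to IT, is commonly computed by comparing representational dissimilarity matrices (RDMs). The sketch below runs on simulated data; the array shapes and the use of Pearson rather than rank correlation are simplifying assumptions, not the paper's exact method.

```python
import numpy as np

def rdm(responses):
    # responses: (n_stimuli, n_features); RDM entry = 1 - Pearson correlation
    # between the response patterns evoked by each pair of stimuli
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(model_responses, neural_responses):
    """Correlate the upper triangles of the two RDMs (self-dissimilarities
    on the diagonal are excluded)."""
    a, b = rdm(model_responses), rdm(neural_responses)
    iu = np.triu_indices_from(a, k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

rng = np.random.default_rng(1)
model = rng.normal(size=(10, 50))                  # 10 stimuli x 50 model units
neural = model + 0.1 * rng.normal(size=(10, 50))   # noisy stand-in for "IT"
score = rdm_similarity(model, neural)
```

A model whose pairwise stimulus geometry matches the neural data scores near 1, regardless of how individual units map onto neurons, which is what makes the measure useful for model-brain comparison.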

  18. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    Science.gov (United States)

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  19. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    Science.gov (United States)

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  20. Fundamental Visual Representations of Social Cognition in ASD

    Science.gov (United States)

    2016-12-01

    AWARD NUMBER: W81XWH-14-1-0565. TITLE: Fundamental Visual Representations of Social Cognition in ASD. PRINCIPAL INVESTIGATOR: John Foxe, Ph.D.

  1. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    Science.gov (United States)

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
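
The fMRI decoding referred to above can be caricatured with a nearest-centroid classifier on simulated voxel patterns. Real studies typically use cross-validated linear classifiers, so treat this as an illustrative stand-in; all data here are synthetic.

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Assign each test pattern to the class whose mean training pattern
    it correlates with most strongly."""
    classes = np.unique(train_y)
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    preds = []
    for x in test_X:
        # correlation of this test pattern with every class centroid
        r = [np.corrcoef(x, c)[0, 1] for c in centroids]
        preds.append(classes[int(np.argmax(r))])
    return np.array(preds)

# simulate voxel patterns for two "orientations" with distinct mean patterns
rng = np.random.default_rng(2)
n_vox = 40
mu = {0: rng.normal(size=n_vox), 1: rng.normal(size=n_vox)}
X = np.array([mu[i % 2] + 0.3 * rng.normal(size=n_vox) for i in range(40)])
y = np.array([i % 2 for i in range(40)])

preds = nearest_centroid_decode(X[:30], y[:30], X[30:])
accuracy = (preds == y[30:]).mean()
```

Above-chance accuracy on held-out trials is the basic evidence that a stimulus (or, in the study above, a prediction) is represented in the measured activity patterns.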

  2. Distorted representation in visual tourism research

    DEFF Research Database (Denmark)

    Jensen, Martin Trandberg

    2016-01-01

    Tourism research has recently been informed by non-representational theories to highlight the socio-material, embodied and heterogeneous composition of tourist experiences. These advances have contributed to further reflexivity and called for novel ways to animate representations… how photographic materialities, performativities and sensations contribute to new tourism knowledges. While highlighting the potential of distorted representation, the paper posits a cautionary note in regards to the influential role of academic journals in determining the qualities of visual data. The paper exemplifies distorted representation through three impressionistic tales derived from ethnographic research on the European rail travel phenomenon: interrail.

  3. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.
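
The first step above, representing local patches as sparse codes over an overcomplete dictionary, can be sketched with greedy matching pursuit. The abstract does not specify a particular sparse solver, so the details below are illustrative assumptions.

```python
import numpy as np

def sparse_code(patch, D, n_nonzero=3):
    """Greedy matching pursuit: approximate a patch as a sparse combination
    of unit-norm dictionary atoms (columns of D)."""
    residual = patch.astype(float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = D.T @ residual                 # correlation with every atom
        k = int(np.argmax(np.abs(corr)))      # best-matching atom
        code[k] += corr[k]
        residual -= corr[k] * D[:, k]         # remove its contribution
    return code

rng = np.random.default_rng(3)
d, n_atoms = 16, 32                           # overcomplete: more atoms than dims
D = rng.normal(size=(d, n_atoms))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
patch = rng.normal(size=d)

code = sparse_code(patch, D)
recon_err = np.linalg.norm(patch - D @ code)
```

Because each patch is described by only a few atoms, an occluded patch corrupts only its own code rather than the whole appearance model, which is the robustness property the tracker exploits.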

  4. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
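
At its core, the pRF model referred to above is a 2D Gaussian sensitivity profile whose overlap with the stimulus aperture predicts the response. A minimal forward-model sketch follows; grid size, units, and parameter values are arbitrary assumptions for illustration.

```python
import numpy as np

def prf_response(stim, x0, y0, sigma, X, Y):
    """Predicted response of a pRF modelled as a 2D Gaussian centred at
    (x0, y0): the overlap between the Gaussian and the binary aperture."""
    rf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    return float((rf * stim).sum())

# a coarse visual-field grid and a vertical bar aperture covering x in [1, 2]
xs = np.linspace(-5, 5, 101)
X, Y = np.meshgrid(xs, xs)
bar = ((X >= 1) & (X <= 2)).astype(float)

on_bar = prf_response(bar, 1.5, 0.0, 1.0, X, Y)    # pRF under the bar
off_bar = prf_response(bar, -4.0, 0.0, 1.0, X, Y)  # pRF far from the bar
```

Fitting (x0, y0, sigma) per voxel to measured time series is what yields the position preferences whose motion-induced shifts the study quantifies.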

  5. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    Directory of Open Access Journals (Sweden)

    Fabian Draht

    2017-06-01

    Full Text Available Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  6. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  7. The development of hand-centred visual representations in the primate brain: a computer modelling study using natural visual scenes.

    Directory of Open Access Journals (Sweden)

    Juan Manuel Galeazzi

    2015-12-01

    Full Text Available Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented against different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.

  8. DLNE: A hybridization of deep learning and neuroevolution for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshj

    2017-01-01

    This paper investigates the potential of combining deep learning and neuroevolution to create a bot for a simple first person shooter (FPS) game capable of aiming and shooting based on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. Two types of feature representations are evaluated in terms of (1) how precisely they allow the deep network to recognize the position of the enemy, (2) their effect on evolution, and (3) how well they allow the deep network and evolved network to interface with each other. Overall, the results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks…
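
The hybrid interface described above, a fixed visual front-end feeding an evolved controller, can be sketched with a random feature projector standing in for the CNN and a (1+1) evolution strategy standing in for the neuroevolution component. Everything here (shapes, mutation scheme, targets) is an illustrative assumption rather than the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(5)

def features(pixels, W_deep):
    # stand-in for the deep component: raw pixels -> compact feature vector
    return np.tanh(W_deep @ pixels)

def evolve(feats, targets, steps=200, sigma=0.1):
    """(1+1) evolution strategy: mutate the controller weights and keep
    each mutation that reduces error on the batch."""
    W = 0.1 * rng.normal(size=feats.shape[1])
    def err(w):
        return float(np.mean((np.tanh(feats @ w) - targets) ** 2))
    best = err(W)
    for _ in range(steps):
        cand = W + sigma * rng.normal(size=W.shape)
        e = err(cand)
        if e < best:
            W, best = cand, e
    return W, best

pixels_batch = rng.normal(size=(32, 64))         # 32 frames of 64 "pixels"
W_deep = rng.normal(size=(8, 64)) / 8.0          # frozen feature extractor
feats = np.array([features(p, W_deep) for p in pixels_batch])
true_W = rng.normal(size=8)                      # defines a target behaviour
targets = np.tanh(feats @ true_W)
baseline = float(np.mean(targets ** 2))          # error of an all-zero controller

W, final_err = evolve(feats, targets)
```

The key design point the paper studies is exactly this interface: evolution only ever searches the small controller-weight space, because the front-end has already compressed the pixels into a few features.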

  9. Supporting Multimedia Learning with Visual Signalling and Animated Pedagogical Agent: Moderating Effects of Prior Knowledge

    Science.gov (United States)

    Johnson, A. M.; Ozogul, G.; Reisslein, M.

    2015-01-01

    An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…

  10. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  11. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    Science.gov (United States)

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning, that is, learning environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  12. Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations

    Science.gov (United States)

    Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.

    2005-01-01

    Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each…

  13. How initial representations shape coupled learning processes

    DEFF Research Database (Denmark)

    Puranam, Phanish; Swamy, M.

    2016-01-01

    Coupled learning processes, in which specialists from different domains learn how to make interdependent choices among alternatives, are common in organizations. We explore the role played by initial representations held by the learners in coupled learning processes using a formal agent-based model. We find that initial representations have important consequences for the success of the coupled learning process, particularly when communication is constrained and individual rates of learning are high. Under these conditions, initial representations that generate incorrect beliefs can outperform one that does not discriminate among alternatives, or even a mix of correct and incorrect representations among the learners. We draw implications for the design of coupled learning processes in organizations. © 2016 INFORMS.

  14. A comparative evaluation of supervised and unsupervised representation learning approaches for anaplastic medulloblastoma differentiation

    Science.gov (United States)

    Cruz-Roa, Angel; Arevalo, John; Basavanhally, Ajay; Madabhushi, Anant; González, Fabio

    2015-01-01

    Learning data representations directly from the data itself is an approach that has shown great success in different pattern recognition problems, outperforming state-of-the-art feature extraction schemes for different tasks in computer vision, speech recognition and natural language processing. Representation learning applies unsupervised and supervised machine learning methods to large amounts of data to find building blocks that better represent the information in it. Digitized histopathology images represent a very good testbed for representation learning since they involve large amounts of highly complex visual data. This paper presents a comparative evaluation of different supervised and unsupervised representation learning architectures to address open questions on which type of learning architecture (deep or shallow) and which type of learning (unsupervised or supervised) is optimal. In this paper we limit ourselves to addressing these questions in the context of distinguishing between anaplastic and non-anaplastic medulloblastomas from routine haematoxylin and eosin stained images. The unsupervised approaches evaluated were sparse autoencoders and topographic reconstruct independent component analysis, and the supervised approach was convolutional neural networks. Experimental results show that shallow architectures with more neurons are better than deeper architectures without taking into account local space invariances, and that topographic constraints provide useful features invariant to scale and rotation for efficient tumor differentiation.
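
Of the unsupervised approaches compared above, the sparse autoencoder objective is the simplest to write down: reconstruction error plus a KL-divergence penalty that holds the average hidden activation near a target sparsity rate. A minimal sketch follows; the weights are random rather than trained, and the shapes and hyperparameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction loss plus a KL sparsity penalty that pushes the mean
    activation of each hidden unit towards the target rate rho."""
    H = sigmoid(X @ W1 + b1)          # hidden activations, each in (0, 1)
    X_hat = H @ W2 + b2               # linear reconstruction
    recon = np.mean((X - X_hat) ** 2)
    rho_hat = H.mean(axis=0)          # empirical activation rate per unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl

rng = np.random.default_rng(4)
n, d, h = 64, 20, 30                  # 30 hidden units: an overcomplete code
W1 = 0.1 * rng.normal(size=(d, h))
W2 = 0.1 * rng.normal(size=(h, d))
b1, b2 = np.zeros(h), np.zeros(d)

loss = sparse_ae_loss(rng.normal(size=(n, d)), W1, b1, W2, b2)
```

Minimizing this objective by gradient descent is what makes each hidden unit respond to only a small fraction of inputs, yielding the kind of localized building-block features the comparison evaluates.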

  15. Accurate metacognition for visual sensory memory representations

    NARCIS (Netherlands)

    Vandenbroucke, A.R.E.; Sligte, I.G.; Barrett, A.B.; Seth, A.K.; Fahrenfort, J.J.; Lamme, V.A.F.

    2014-01-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the

  16. Realistic versus Schematic Interactive Visualizations for Learning Surveying Practices: A Comparative Study

    Science.gov (United States)

    Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen

    2014-01-01

    Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…

  17. The Effect of Visual Variability on the Learning of Academic Concepts.

    Science.gov (United States)

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  18. Collective form generation through visual participatory representation

    DEFF Research Database (Denmark)

    Day, Dennis; Sharma, Nishant; Punekar, Ravi

    2012-01-01

    In order to inspire and inform designers with users' data from participatory research, it may be important to represent the data in a visual format that is easily understandable to designers. For a case study in vehicle design, the paper outlines the visual representation of data and the use...

  19. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

    Akihiro eEguchi

    2015-08-01

    Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognise the whole object.
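Temporal-continuity learning of the kind described above is often modelled with a Hebbian "trace" rule, in which the weight update uses a decaying average of recent postsynaptic activity, so that successive views of one object are bound onto the same neuron. A toy numpy sketch (illustrative only, not the paper's actual network; sizes, rates, and data are made up):

```python
import numpy as np

rng = np.random.default_rng(4)

def trace_learning(views, w, eta=0.05, decay=0.8):
    """Hebbian trace rule: the update uses a decaying average of the
    cell's recent activity, so temporally contiguous inputs (e.g.
    successive views of one object) drive the same weight vector."""
    trace = 0.0
    for x in views:
        y = float(w @ x)                  # postsynaptic response
        trace = decay * trace + (1 - decay) * y
        w += eta * trace * x              # dw = eta * trace * x
        w /= np.linalg.norm(w)            # keep weights bounded
    return w

# 50 noisy views of one "object", presented in temporal succession.
base = rng.random(20)
views = [base + 0.05 * rng.standard_normal(20) for _ in range(50)]

w0 = rng.random(20)
w0 /= np.linalg.norm(w0)
w = trace_learning(views, w0.copy())

# Training should align the cell with the object's average appearance.
obj_dir = base / np.linalg.norm(base)
print(w @ obj_dir > w0 @ obj_dir)
```

With several objects presented in separate temporal blocks, the same rule yields units that respond invariantly across the views of a single object, which is the behaviour the abstract attributes to model V4/TEO neurons.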

  20. Expertise Reversal for Iconic Representations in Science Visualizations

    Science.gov (United States)

    Homer, Bruce D.; Plass, Jan L.

    2010-01-01

    The influence of prior knowledge and cognitive development on the effectiveness of iconic representations in science visualizations was examined. Middle and high school students (N = 186) were given narrated visualizations of two chemistry topics: Kinetic Molecular Theory (Day 1) and Ideal Gas Laws (Day 2). For half of the visualizations, iconic…

  1. Three-dimensional visual feature representation in the primary visual cortex.

    Science.gov (United States)

    Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2011-12-01

    In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside the skull, forming gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations. The present study throws a…

  2. Change blindness and visual memory: visual representations get rich and act poor.

    Science.gov (United States)

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  3. V4 activity predicts the strength of visual short-term memory representations.

    Science.gov (United States)

    Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F

    2009-06-10

    Recent studies have shown the existence of a form of visual memory that lies intermediate of iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high capacity yet weak VSTM store is represented in visual parts of the brain. Allegedly, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.

  4. Does Sleep Facilitate the Consolidation of Allocentric or Egocentric Representations of Implicitly Learned Visual-Motor Sequence Learning?

    Science.gov (United States)

    Viczko, Jeremy; Sergeeva, Valya; Ray, Laura B.; Owen, Adrian M.; Fogel, Stuart M.

    2018-01-01

    Sleep facilitates the consolidation (i.e., enhancement) of simple, explicit (i.e., conscious) motor sequence learning (MSL). MSL can be dissociated into egocentric (i.e., motor) or allocentric (i.e., spatial) frames of reference. The consolidation of the allocentric memory representation is sleep-dependent, whereas the egocentric consolidation…

  5. The epistemic representation: visual production and communication of scientific knowledge.

    Directory of Open Access Journals (Sweden)

    Francisco López Cantos

    2015-03-01

    Despite their great influence on the history of science, visual representations have attracted marginal interest until very recently and have often been regarded as a simple aid for mere illustration or scientific demonstration. However, it has been shown that visualization is an integral element of reasoning and a highly effective and common heuristic strategy in the scientific community, and that studying the conditions of visual production and communication is essential to understanding the development of scientific knowledge. In this paper we deal with the nature of the various forms of visual representation of knowledge that have appeared throughout the history of science, taking as our starting point the illustrated monumental works and three-dimensional models that began to develop within the scientific community around the fifteenth century. The main thesis of this paper is that all scientific visual representations share common elements that allow us to approach them through their epistemic nature and their heuristic and communicative dimensions.

  6. Poincaré Embeddings for Learning Hierarchical Representations

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Abstracts: Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically do not account for this property. In this talk, I will discuss a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space -- or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
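The embedding space referred to here is the open unit ball with the hyperbolic (Poincaré) metric, whose closed-form distance is what lets vector norm encode depth in a hierarchy. A short sketch of that distance (the formula is standard; the example points are illustrative):

```python
import numpy as np

def poincare_distance(u, v):
    """Hyperbolic distance between two points inside the unit Poincare ball:
    d(u, v) = arcosh(1 + 2*||u-v||^2 / ((1-||u||^2) * (1-||v||^2)))."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

origin = np.zeros(2)
near = np.array([0.1, 0.0])   # a point near the centre ("root-like")
far = np.array([0.9, 0.0])    # a point near the boundary ("leaf-like")

# Distances blow up toward the boundary, so the rim of the ball has room
# for exponentially many well-separated leaves of a hierarchy.
print(poincare_distance(origin, near) < poincare_distance(origin, far))
```

For a point at radius r on a ray from the origin, the distance to the origin is 2 artanh(r), which grows without bound as r approaches 1; the Riemannian optimization mentioned in the abstract must therefore rescale gradients to respect this metric.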

  7. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
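Predictive encoding models of the kind described above are commonly fit by regularized linear regression from network-layer features to measured responses. A minimal sketch with synthetic stand-in data (ridge regression is a typical choice for such models, not necessarily the paper's exact estimator; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: feature activations for 300 stimuli (50 units)
# and responses of 10 "voxels" generated as a noisy linear readout.
F = rng.standard_normal((300, 50))
true_W = rng.standard_normal((50, 10))
Y = F @ true_W + 0.1 * rng.standard_normal((300, 10))

def fit_ridge(F, Y, alpha=1.0):
    """Closed-form ridge regression: W = (F'F + alpha*I)^-1 F'Y."""
    n_feat = F.shape[1]
    return np.linalg.solve(F.T @ F + alpha * np.eye(n_feat), F.T @ Y)

W = fit_ridge(F[:200], Y[:200])       # fit on "training movies"
pred = F[200:] @ W                    # predict held-out responses

# Per-voxel prediction accuracy on held-out stimuli.
r = [np.corrcoef(pred[:, i], Y[200:, i])[0, 1] for i in range(10)]
print(min(r) > 0.9)  # linear ground truth -> high held-out correlation
```

In the study itself the features would come from layers of a deep residual network and the responses from fMRI during natural movies; the held-out correlation plays the role of the model's predictive accuracy.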

  8. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
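A standard way to quantify metacognition from paired correctness and confidence data, as in this change-detection design, is the area under the type-2 ROC: how well a subject's confidence ratings discriminate their own correct from incorrect decisions. A sketch (this is a common measure for such designs, not necessarily the authors' exact metric; the data are invented):

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: probability that a randomly chosen
    correct trial received higher confidence than a randomly chosen
    error trial (ties count half). 0.5 = no metacognition, 1 = perfect."""
    correct = np.asarray(correct, bool)
    confidence = np.asarray(confidence, float)
    hits = confidence[correct]       # confidence on correct trials
    misses = confidence[~correct]    # confidence on error trials
    greater = (hits[:, None] > misses[None, :]).mean()
    equal = (hits[:, None] == misses[None, :]).mean()
    return greater + 0.5 * equal

# Illustrative trials: confidence tracks accuracy well -> AUROC near 1.
correct = [True, True, True, False, False, True, False, True]
confidence = [4, 3, 4, 1, 2, 3, 1, 2]
print(round(type2_auroc(correct, confidence), 3))  # → 0.967
```

Computing this separately for trials probing sensory memory and trials probing working memory, as the study does, allows the two stages to be compared on a common metacognitive scale.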

  9. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover…

  10. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  11. Visual word representation in the brain

    NARCIS (Netherlands)

    Ramakrishnan, K.; Groen, I.; Scholte, S.; Smeulders, A.; Ghebreab, S.

    2013-01-01

    The human visual system is thought to use features of intermediate complexity for scene representation. How the brain computationally represents intermediate features is unclear, however. To study this, we tested the Bag of Words (BoW) model in computer vision against human brain activity. This…
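The Bag of Words model being tested can be sketched end to end in a few lines: local patches are assigned to a codebook of "visual words" and the image is summarized as a histogram of word counts. In the sketch below the codebook is random for illustration; in practice it would be learned from many training patches, e.g. by k-means:

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_patches(img, size=4):
    """Slice an image into non-overlapping size x size patches."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

def bow_histogram(img, codebook):
    """Assign each patch to its nearest 'visual word' and count words."""
    patches = extract_patches(img)
    d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d.argmin(axis=1)                 # nearest codebook entry
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                 # normalized word histogram

# Illustrative codebook of 8 words over 4x4 patches (16 dims).
codebook = rng.standard_normal((8, 16))
img = rng.standard_normal((16, 16))
h = bow_histogram(img, codebook)
print(h.shape, abs(h.sum() - 1.0) < 1e-9)
```

The resulting histogram is the image representation that such studies correlate against patterns of brain activity; real BoW pipelines use richer patch descriptors (e.g. SIFT) and much larger codebooks.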

  12. Visual Representations of the Water Cycle in Science Textbooks

    Science.gov (United States)

    Vinisha, K.; Ramadas, J.

    2013-01-01

    Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…

  13. Visual recognition and inference using dynamic overcomplete sparse learning.

    Science.gov (United States)

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
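Sparse coding over an overcomplete dictionary, the preprocessing stage described above, is typically posed as an L1-penalized least-squares problem and solved by iterative shrinkage-thresholding. A minimal ISTA sketch (a standard solver for this objective, not the authors' dictionary-learning algorithm; dictionary, penalty weight, and signal are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Overcomplete dictionary: twice as many atoms as signal dimensions.
n_dim, n_atoms = 16, 32
D = rng.standard_normal((n_dim, n_atoms))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
lam = 0.1                               # illustrative sparsity weight

def sparse_code(x, n_iter=300):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(n_atoms)
    for _ in range(n_iter):
        z = a - (D.T @ (D @ a - x)) / L           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Encode a signal synthesized from 3 atoms; the code should be sparse.
true_a = np.zeros(n_atoms)
true_a[[2, 10, 25]] = [1.5, -2.0, 1.0]
x = D @ true_a
a = sparse_code(x)

sparse = np.count_nonzero(a) < n_atoms   # many coefficients exactly zero
close = np.linalg.norm(D @ a - x) < 1.0  # accurate reconstruction
print(sparse, close)
```

The soft-threshold step is what drives inactive coefficients exactly to zero, producing the accurate, sparse codings the abstract describes feeding into the multilayer inference network.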

  14. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    Science.gov (United States)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections group in retaining knowledge of difficult instances of sectional anatomy after the retention interval. The benefit…

  16. Neuronal representations of stimulus associations develop in the temporal lobe during learning.

    Science.gov (United States)

    Messinger, A; Squire, L R; Zola, S M; Albright, T D

    2001-10-09

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with this hypothesis, studies have demonstrated that individual neurons in the primate inferior temporal cortex tend to exhibit similar responses to pairs of visual stimuli that have become behaviorally associated. In the present study, we investigated the role of these areas in the formation of conditional visual associations by monitoring the responses of individual neurons during the learning of new stimulus pairs. We found that many neurons in both area TE and perirhinal cortex came to elicit more similar neuronal responses to paired stimuli as learning proceeded. Moreover, these neuronal response changes were learning-dependent and proceeded with an average time course that paralleled learning. This experience-dependent plasticity of sensory representations in the cerebral cortex may underlie the learning of associations between objects.

  17. Autonomous learning of robust visual object detection and identification on a humanoid

    NARCIS (Netherlands)

    Leitner, J.; Chandrashekhariah, P.; Harding, S.; Frank, M.; Spina, G.; Förster, A.; Triesch, J.; Schmidhuber, J.

    2012-01-01

    In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for…

  18. Visual representation of spatiotemporal structure

    Science.gov (United States)

    Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.

    1998-07-01

    The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies on ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location are not processed and represented independently of each other. This suggests a unified representation on an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

  19. Rich Representations with Exposed Semantics for Deep Visual Reasoning

    Science.gov (United States)

    2016-06-01

    Final technical report. Provides critical evidence of a relationship between visual recognition, associative processing, and episodic memory, and provides important clues into the neural mechanism…

  20. Analysis of visual representation techniques for product configuration systems in industrial companies

    DEFF Research Database (Denmark)

    Shafiee, Sara; Kristjansdottir, Katrin; Hvam, Lars

    2016-01-01

    with knowledge representations and communications with domain experts. The results presented in the paper are therefore aimed at providing insight into the impact of using visual knowledge representation techniques in PCS projects. The findings indicate that the use of visual knowledge representation techniques...... in PCS projects will result in improved quality of maintenance and development support for the knowledge base and improved quality of communication with domain experts.

  1. Scientific Representation and Science Learning

    Science.gov (United States)

    Matta, Corrado

    2014-01-01

    In this article I examine three examples of philosophical theories of scientific representation with the aim of assessing which of these is a good candidate for a philosophical theory of scientific representation in science learning. The three candidate theories are Giere's intentional approach, Suárez's inferential approach and Lynch and…

  2. Representations in learning new faces: evidence from prosopagnosia.

    Science.gov (United States)

    Polster, M R; Rapcsak, S Z

    1996-05-01

    We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given "shallow" encoding instruction to focus on facial features. By contrast, he performs relatively well with "deep" encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations.

  3. Ambiguous science and the visual representation of the real

    Science.gov (United States)

    Newbold, Curtis Robert

    The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in visual representations of science inevitably creates ambiguous meanings. As a means to improve visual literacy in the sciences, I call awareness to the ubiquity of visual ambiguity and its importance and relevance in scientific discourse. To do this, I conduct a literature review that spans interdisciplinary research in communication, science, art, and rhetoric. Furthermore, I create a paradoxically ambiguous taxonomy, which functions to exploit the nuances of visual ambiguities and their role in scientific communication. I then extrapolate the taxonomy of visual ambiguity and from it develop an ambiguous, rhetorical heuristic, the Tetradic Model of Visual Ambiguity. The Tetradic Model is applied to a case example of a scientific image as a demonstration of how scientific communicators may increase their awareness of the epistemological effects of ambiguity in the visual representations of science. I conclude by demonstrating how scientific communicators may make productive use of visual ambiguity, even in communications of objective science, and I argue how doing so strengthens scientific communicators' visual literacy skills and their ability to communicate more ethically and effectively.

  4. Visual Representations on High School Biology, Chemistry, Earth Science, and Physics Assessments

    Science.gov (United States)

    LaDue, Nicole D.; Libarkin, Julie C.; Thomas, Stephen R.

    2015-01-01

    The pervasive use of visual representations in textbooks, curricula, and assessments underscores their importance in K-12 science education. For example, visual representations figure prominently in the recent publication of the Next Generation Science Standards (NGSS Lead States in Next generation science standards: for states, by states.…

  5. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance for tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants finished two search tasks that differed in target-distractor shape representations under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which was consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from distractors in that a gap with a linear contour was added to the target, and the corresponding part of the distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display undergoes uniform linear motion.

  6. Learning a New Selection Rule in Visual and Frontal Cortex.

    Science.gov (United States)

    van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R

    2016-08-01

How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target that was connected to the relevant icon by a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals, and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time, and they reveal the contribution of visual and frontal cortex to the learning process. © The Author 2016. Published by Oxford University Press.

  7. Weakly supervised visual dictionary learning by harnessing image attributes.

    Science.gov (United States)

    Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang

    2014-12-01

Bag-of-features (BoF) representation has been extensively applied to deal with various computer vision applications. To extract a discriminative and descriptive BoF, one important step is to learn a good dictionary that minimizes the quantization loss between local features and codewords. While most existing visual dictionary learning approaches rely on unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping of the observed states, where the supervisory information stems from the hidden states of the HMRF. In this way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms state-of-the-art unsupervised dictionary learning approaches.
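The quantization step that dictionary learning optimizes can be illustrated with a minimal, self-contained sketch. This shows only the unsupervised k-means baseline that the paper's HMRF approach improves upon, not the authors' method; the synthetic data and function names are invented for illustration.

```python
import numpy as np

def learn_dictionary(features, k, iters=20, seed=0):
    """Toy k-means codebook learner: the unsupervised quantization step
    of a bag-of-features pipeline (not the paper's HMRF formulation)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each local feature to its nearest codeword ...
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # ... then re-estimate each codeword as the mean of its members,
        # which monotonically reduces the quantization loss.
        for j in range(k):
            members = features[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook, assign

def bof_histogram(assign, k):
    # Image-level descriptor: normalized histogram of codeword counts.
    counts = np.bincount(assign, minlength=k).astype(float)
    return counts / counts.sum()

# Two well-separated synthetic clusters stand in for local descriptors.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
codebook, assign = learn_dictionary(feats, k=2)
hist = bof_histogram(assign, k=2)
```

The supervised variants the abstract discusses differ only in how the grouping step is guided; the quantization-loss objective above is what they all start from.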

  8. Adaptive representations for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.

    2010-01-01

    This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own
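The kind of sequential decision task such agents face can be sketched with minimal tabular Q-learning. The fixed state-action table below is exactly the sort of manually designed solution representation the book proposes to adapt automatically; the toy chain environment and all parameter values are assumptions for illustration only.

```python
import numpy as np

# Toy sequential decision task: a 5-state chain where moving right
# eventually reaches a rewarded terminal state.
n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))    # fixed, manually designed representation
alpha, gamma, eps = 0.5, 0.9, 0.3
rng = np.random.default_rng(0)

for _ in range(300):                   # episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection over the tabular representation.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning: temporal-difference update toward the bootstrapped target.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

greedy = Q.argmax(axis=1)              # learned control policy (non-terminal states)
```

The agent converges on always moving right; adaptive-representation methods replace the hand-sized Q-table with a representation the agent refines itself.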

  9. Visual representation of gender in flood coverage of Pakistani print media

    Directory of Open Access Journals (Sweden)

    Zarqa S. Ali

    2014-08-01

This paper studies gender representation in the visual coverage of the 2010 floods in Pakistan. The data were collected from flood visuals published in the most widely circulated mainstream English newspapers in Pakistan, Dawn and The News. This study analyses how gender has been framed in the flood visuals. It is argued that the visual representation of gender reinforces the gender stereotypes and cultural norms of Pakistani society. The gender-oriented flood coverage in both newspapers frequently seemed to take a reductionist approach, confining the representation of women to gender and gender-specific roles. Though the gender-sensitive coverage has typically shown women as helpless victims of the flood, it has aroused sentiments of sympathy among readers and donors, inspiring them to give immediate moral and material help to the affected people. This media agenda might exploit the politics of sympathy, but it also has the effect of endorsing gender stereotypes.

  10. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  11. Data Representations, Transformations, and Statistics for Visual Reasoning

    CERN Document Server

    Maciejewski, Ross

    2011-01-01

Analytical reasoning techniques are methods by which users explore their data to obtain insight and knowledge that can directly support situational awareness and decision making. Recently, the analytical reasoning process has been augmented through the use of interactive visual representations and tools which utilize cognitive, design, and perceptual principles. These tools are commonly referred to as visual analytics tools, and the underlying methods and principles have roots in a variety of disciplines. This chapter provides young researchers with an introduction and overview of common visual

  12. How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex?

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey

    2011-12-01

All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. The swapping procedure is

  13. Educating "The Simpsons": Teaching Queer Representations in Contemporary Visual Media

    Science.gov (United States)

    Padva, Gilad

    2008-01-01

    This article analyzes queer representation in contemporary visual media and examines how the episode "Homer's Phobia" from Matt Groening's animation series "The Simpsons" can be used to deconstruct hetero- and homo-sexual codes of behavior, socialization, articulation, representation and visibility. The analysis is contextualized in the…

  14. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    Science.gov (United States)

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

Recognizing and categorizing visual stimuli are cognitive functions vital for survival, and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in the MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
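The linear read-out of category information from population activity can be sketched as follows. The synthetic "firing rates" and the ridge-regularized least-squares readout are assumptions made for this illustration, not the authors' recordings or their specific classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_trials = 30, 200

# Synthetic stand-in for population activity: animate and inanimate
# stimuli evoke different mean response patterns across recorded cells.
pattern_animate = rng.normal(0.0, 1.0, n_cells)
pattern_inanimate = rng.normal(0.0, 1.0, n_cells)
labels = rng.integers(0, 2, n_trials)                    # 1 = animate
means = np.where(labels[:, None] == 1, pattern_animate, pattern_inanimate)
X = means + rng.normal(0.0, 0.5, (n_trials, n_cells))    # trial-by-trial noise
y = 2.0 * labels - 1.0                                   # +/-1 regression targets

# Linear classifier: ridge-regularized least squares with a bias term,
# solved in closed form.
Xb = np.hstack([X, np.ones((n_trials, 1))])
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(n_cells + 1), Xb.T @ y)

accuracy = float(((Xb @ w > 0) == (labels == 1)).mean())
```

High decoding accuracy from one area but not another, as in the MVL vs. entopallium comparison, is the kind of contrast such a readout is used to establish.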

  15. Motor sequence learning occurs despite disrupted visual and proprioceptive feedback

    Directory of Open Access Journals (Sweden)

    Boyd Lara A

    2008-07-01

Background: Recent work has demonstrated the importance of proprioception for the development of internal representations of the forces encountered during a task. Evidence also exists for a significant role for proprioception in the execution of sequential movements. However, little work has explored the role of proprioceptive sensation during the learning of continuous movement sequences. Here, we report that the repeated segment of a continuous tracking task can be learned despite peripherally altered arm proprioception and severely restricted visual feedback regarding motor output. Methods: Healthy adults practiced a continuous tracking task over 2 days. Half of the participants experienced vibration that altered proprioception of shoulder flexion/extension of the active tracking arm (experimental condition) and half experienced vibration of the passive resting arm (control condition). Visual feedback was restricted for all participants. Retention testing was conducted on a separate day to assess motor learning. Results: Regardless of vibration condition, participants learned the repeated segment, as demonstrated by significant improvements in accuracy for tracking repeated as compared to random continuous movement sequences. Conclusion: These results suggest that, with practice, participants were able to use residual afferent information to overcome the initial interference with tracking ability related to altered proprioception and restricted visual feedback, and thus to learn a continuous motor sequence. Motor learning occurred despite an initial interference with tracking noted during acquisition practice.

  16. Negative emotion boosts quality of visual working memory representation.

    Science.gov (United States)

    Xie, Weizhen; Zhang, Weiwei

    2016-08-01

Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulated WM capacity (quantity) or resolution (quality), 2 independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking a best-matching color on a continuous color wheel. On each trial, before the visual WM task, 1 of 3 emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the 3 emotion conditions. These effects generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first 2 experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of the quantitative and qualitative aspects of visual WM representation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
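Recall precision on a continuous response wheel of the kind used here is commonly quantified with circular statistics. A minimal sketch, with invented example responses (the helper names and numbers are not from the study):

```python
import numpy as np

def circular_error(reported_deg, target_deg):
    """Signed recall error on a 360-degree wheel, wrapped to [-180, 180)."""
    return (np.asarray(reported_deg) - np.asarray(target_deg) + 180.0) % 360.0 - 180.0

def circular_sd(errors_deg):
    """Circular standard deviation in degrees; lower values indicate
    higher-resolution memory representations."""
    rad = np.deg2rad(errors_deg)
    resultant = np.hypot(np.mean(np.sin(rad)), np.mean(np.cos(rad)))
    return np.rad2deg(np.sqrt(-2.0 * np.log(resultant)))

# Invented example: three probed items, target hues vs. reported hues.
targets = np.array([10.0, 350.0, 180.0])
reports = np.array([20.0, 340.0, 175.0])
errors = circular_error(reports, targets)   # wraps correctly across 0/360
```

Capacity (quantity) and resolution (quality) are then separated by fitting the distribution of such errors, e.g. with a mixture of a circular distribution around the target and a uniform guessing component.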

  17. A visual representation system for the scheduling and management of projects

    NARCIS (Netherlands)

    Pollalis, S.N.

    1992-01-01

This work proposes the Visual Scheduling and Management System (VSMS), a new system for the visual representation of projects that displays the quantities of work, resources, and cost. The system has a built-in hierarchical structure to provide

  18. Multiple representations in physics education

    CERN Document Server

    Duit, Reinders; Fischer, Hans E

    2017-01-01

    This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...

  19. Numerical Magnitude Representations Influence Arithmetic Learning

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2008-01-01

    This study examined whether the quality of first graders' (mean age = 7.2 years) numerical magnitude representations is correlated with, predictive of, and causally related to their arithmetic learning. The children's pretest numerical magnitude representations were found to be correlated with their pretest arithmetic knowledge and to be…

  20. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    Science.gov (United States)

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

In addition to the defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest that, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to the core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  1. Cross-cultural understanding through visual representation

    Directory of Open Access Journals (Sweden)

    Kristina Beckman

    2011-04-01

This article analyzes international students' drawings of essay assignments from their home countries. These English as a Second Language (ESL) students often have difficulty in meeting the local demands of our Writing Program, which centers on argumentative writing with thesis and support. Any part of an essay deemed irrelevant is censured as "off topic"; some students see this structure as too direct or even impolite. While not all students found visual representation easy, the drawings reveal some basic assumptions about writing embodied in their native cultures' assignments. We discuss the drawings first for visual rhetorical content, then in the students' own terms. Last, we consider how our own pedagogy has been shaped.

  2. The Peircean contribution to an indexical representation of visual images

    Directory of Open Access Journals (Sweden)

    Virginia Bentes Pinto

    2008-04-01

Although visual images have gained great importance throughout history as sources of information, one cannot deny that with the newest information and communication technologies (ICT) they have drawn the attention of experts from the most varied fields of knowledge, such as arts, biology, astronomy, archeology, history, health, fashion, decoration, public relations, editing, engineering, and architecture, among others. This article presents theoretical reflections on representation from Peirce's perspective, set in the context of the new approaches used for the treatment of visual images, using as examples the paradigms of manual, semiautomatic, automatic, and mixed indexical representation. The results of the experiments show that the difficulties found in the construction of an indexical representation of this document type originate from the complexity inherent in the process of production and reception of the imagetic sign.

  3. Perceptual learning modifies the functional specializations of visual cortical areas.

    Science.gov (United States)

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  4. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words (the model's learning objective) is largely based on letter-level information.

  6. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model's learning objective – is largely based on letter-level information.

  7. A review of visual memory capacity: Beyond individual items and towards structured representations

    Science.gov (United States)

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2012-01-01

    Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system - going beyond quantifying how many items can be remembered, and moving towards structured representations - but how we model memory systems and memory processes. PMID:21617025

  8. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  9. Learned image representations for visual recognition

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This thesis addresses the problem of extracting image structures for representing images effectively in order to solve visual recognition tasks. Problems from diverse research areas (medical imaging, material science and food processing) have motivated large parts of the methodological development...

  10. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    Science.gov (United States)

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  11. COALA-System for Visual Representation of Cryptography Algorithms

    Science.gov (United States)

    Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran

    2014-01-01

    Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…

  12. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    Science.gov (United States)

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion: the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with the help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of the preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than during the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer-lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.

  13. Visual working memory gives up attentional control early in learning: ruling out interhemispheric cancellation.

    Science.gov (United States)

    Reinhart, Robert M G; Carlisle, Nancy B; Woodman, Geoffrey F

    2014-08-01

    Current research suggests that we can watch visual working memory surrender the control of attention early in the process of learning to search for a specific object. This inference is based on the observation that the contralateral delay activity (CDA) rapidly decreases in amplitude across trials when subjects search for the same target object. Here, we tested the alternative explanation that the role of visual working memory does not actually decline across learning, but instead lateralized representations accumulate in both hemispheres across trials and wash out the lateralized CDA. We show that the decline in CDA amplitude occurred even when the target objects were consistently lateralized to a single visual hemifield. Our findings demonstrate that reductions in the amplitude of the CDA during learning are not simply due to the dilution of the CDA from interhemispheric cancellation. Copyright © 2014 Society for Psychophysiological Research.

  14. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when they ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  15. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    Science.gov (United States)

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

    In order to develop transformation-invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway, in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
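
    A minimal sketch of one assumed form of temporal trace learning: unit activity is smoothed by a leaky trace across successive views of an object, and a Hebbian update binds those temporally adjacent views to the same output units. All sizes, rates, and stimuli are illustrative, and the complementary continuous transformation rule is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, eta, alpha = 16, 4, 0.8, 0.05

W = rng.normal(0, 0.1, (n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def views(obj, n_views=6):
    """Successive shifted versions of an object's base pattern."""
    base = np.roll(np.eye(n_in)[0], obj * 8)
    return [np.roll(base, v) for v in range(n_views)]

for epoch in range(50):
    for obj in (0, 1):
        trace = np.zeros(n_out)               # reset trace between objects
        for x in views(obj):
            y = W @ x
            trace = eta * trace + (1 - eta) * y        # leaky activity trace
            W += alpha * np.outer(trace, x)            # trace-Hebbian update
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # weight normalization
```

    Because the trace persists across the shifted views of one object, the same units are strengthened for all of its transformations, which is the mechanism behind the invariance the abstract describes.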

  16. Analysis and Visualization of Relations in eLearning

    Science.gov (United States)

    Dráždilová, Pavla; Obadi, Gamila; Slaninová, Kateřina; Martinovič, Jan; Snášel, Václav

    The popularity of eLearning systems is growing rapidly; this growth is enabled by the consecutive development in Internet and multimedia technologies. Web-based education became widespread in the past few years. Various types of learning management systems facilitate the development of Web-based courses. Users of these courses form social networks through the different activities performed by them. This chapter focuses on searching for the latent social networks in eLearning systems data. These data consist of students' activity records wherein latent ties among actors are embedded. The social network studied in this chapter is represented by groups of students who have similar contacts and interact in similar social circles. Different methods of data clustering analysis can be applied to these groups, and the findings show the existence of latent ties among the group members. The second part of this chapter focuses on social network visualization. Graphical representation of a social network can describe its structure very efficiently. It can enable social network analysts to determine the network's degree of connectivity. Analysts can easily identify individuals with a small or large number of relationships as well as the number of independent groups in a given network. When applied to the field of eLearning, data visualization simplifies the process of monitoring the study activities of individuals or groups, as well as the planning of educational curricula, the evaluation of study processes, etc.
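
    A hypothetical sketch of the kind of pipeline the chapter describes: students' activity records become activity-count vectors, cosine similarity links students with similar behaviour, and connected components of the thresholded graph are the latent groups. The data, threshold, and helper names are illustrative assumptions, not the chapter's method.

```python
import numpy as np

activity = np.array([   # rows: students, cols: counts per course activity
    [5, 0, 3, 1],
    [4, 1, 3, 0],
    [0, 6, 0, 5],
    [1, 5, 0, 4],
])

norm = activity / np.linalg.norm(activity, axis=1, keepdims=True)
sim = norm @ norm.T                                  # cosine similarity
adj = (sim > 0.8) & ~np.eye(len(sim), dtype=bool)    # thresholded ties

def components(adj):
    """Connected components of the tie graph via depth-first search."""
    seen, groups = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        stack, group = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            group.append(node)
            stack.extend(np.flatnonzero(adj[node]))
        groups.append(sorted(group))
    return groups

groups = components(adj)   # latent groups of similarly behaving students
```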

  17. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    International Nuclear Information System (INIS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

    Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization associates label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods. (paper)
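
    The weighted-representation idea can be illustrated with a deliberately simplified stand-in for the paper's formulation: code a sample over a labelled dictionary while penalizing atoms whose class differs from the sample's, so the coefficient matrix tends toward a class-wise block-diagonal structure. With a squared penalty (rather than the paper's low-rank term) the code has a closed form, a weighted ridge solution; all names and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, atoms_per_class, n_classes, lam = 10, 3, 2, 0.5

D = rng.normal(size=(dim, atoms_per_class * n_classes))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
atom_labels = np.repeat(np.arange(n_classes), atoms_per_class)

def code(y, y_label):
    """Weighted ridge coding: off-class atoms get a heavier penalty."""
    w = np.where(atom_labels == y_label, 1.0, 10.0)
    return np.linalg.solve(D.T @ D + lam * np.diag(w), D.T @ y)

y = D[:, 0] + 0.05 * rng.normal(size=dim)      # noisy sample from class 0
x = code(y, 0)
on = np.abs(x[atom_labels == 0]).sum()         # energy on same-class atoms
off = np.abs(x[atom_labels == 1]).sum()        # energy on other-class atoms
```

    The label-dependent weight vector `w` is the toy analogue of the paper's regularizer: it drives representation energy onto same-class atoms.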

  18. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  19. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed for the face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
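
    The LBP feature that the learned filter operates on is simple enough to sketch. Below is a minimal 8-neighbour Local Binary Pattern in plain numpy; the supervised filter itself is out of scope here, and border handling (skipping the outermost pixels) is a simplifying assumption.

```python
import numpy as np

def lbp(img):
    """8-neighbour Local Binary Pattern; border pixels are skipped."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set this bit wherever the neighbour is at least as bright
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

# On a left-to-right, top-to-bottom intensity ramp every interior pixel
# gets the same code, since the brighter neighbours always lie on one side.
codes = lbp(np.arange(25).reshape(5, 5))
```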

  20. Hierarchical representation of shapes in visual cortex - from localized features to figural shape segregation

    Directory of Open Access Journals (Sweden)

    Stephan eTschechne

    2014-08-01

    Full Text Available Visual structures in the environment are effortlessly segmented into image regions, which are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. At this stage, highly articulated changes in shape boundary as well as very subtle curvature changes contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes a hierarchical distributed representation of shape features to encode boundary features over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback from representations generated at higher stages. In so doing, global configurational as well as local information is available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. This combines separate findings about the generation of cortical shape representations using hierarchical representations with figure-ground segregation mechanisms. Our model is probed with a selection of artificial and real world images to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  1. Experience-driven formation of parts-based representations in a model of layered visual memory

    Directory of Open Access Journals (Sweden)

    Jenia Jitsev

    2009-09-01

    Full Text Available Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both operating on top of fast activity dynamics with winner-take-all character, modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during recall and the synaptic patterns comprising the memory traces.
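
    Two of the ingredients named above, winner-take-all competition and homeostatic regulation of unit activity, can be sketched in an assumed minimal form: the unit with the highest biased response wins, moves its weights toward the input (Hebbian), and raises its own threshold so that other units get a chance to win other parts. Sizes, rates, and stimuli are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_units, alpha, beta = 12, 3, 0.1, 0.02

W = rng.random((n_units, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)
threshold = np.zeros(n_units)                 # homeostatic per-unit bias

patterns = np.eye(n_in)[[0, 4, 8]]            # three local "parts"

for step in range(300):
    x = patterns[rng.integers(3)]
    winner = np.argmax(W @ x - threshold)     # winner-take-all competition
    W[winner] += alpha * (x - W[winner])      # Hebbian move toward the input
    W[winner] /= np.linalg.norm(W[winner])
    threshold[winner] += beta                 # winning raises own threshold
    threshold -= beta / n_units               # slow decay for all units

# Which unit each part activates most after learning.
prefs = [int(np.argmax(W @ p)) for p in patterns]
```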

  2. A VISUAL AND VERBAL ANALYSIS OF CHILDREN REPRESENTATION IN TELEVISION ADVERTISEMENT

    Directory of Open Access Journals (Sweden)

    Budi Hermawan

    2014-12-01

    Full Text Available The study investigates the representation of children in a television advertisement for the 3 Indie+ cellular phone operator. The study is descriptive qualitative and employs Kress & van Leeuwen's Reading Images (2006) to analyze the visual data, and Halliday's Transitivity System (1994, 2004), as simplified by Gerot and Wignell (1995), for analyzing the verbal data. The aim of the study is to examine the representation of children visually and verbally in the 3 Indie+ cellular phone operator advertisement. Based on the data analysis, the study finds that children are represented visually as naive persons who are "pretending to know" adult life when in fact they are still children, through the use of setting, layout composition, and perspective (shot, gaze). Children are verbally represented, through the use of mental and material processes, as somebody who tells about their hopes, obsessions, and aspirations for the future, and their naive imaginations of what an adult life is. In relation to the product advertised, the representation signifies that, unlike other providers, using 3 Indie+ is very easy; it is not as hard as living as an adult.

  3. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    Science.gov (United States)

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors 0270-6474/15/359786-13$15.00/0.
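
    The multivariate-pattern step can be illustrated on synthetic stand-in data (not fMRI): leave-one-out nearest-centroid classification of "trained" versus "novel" patterns, the kind of decoding accuracy the study then correlates with behavioral performance. The simple classifier and the data generation are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_class, n_vox = 20, 50

trained = rng.normal(0.5, 1.0, (n_per_class, n_vox))    # synthetic patterns
novel = rng.normal(-0.5, 1.0, (n_per_class, n_vox))
X = np.vstack([trained, novel])
y = np.array([1] * n_per_class + [0] * n_per_class)

correct = 0
for i in range(len(X)):                     # leave-one-out cross-validation
    mask = np.arange(len(X)) != i
    c1 = X[mask & (y == 1)].mean(axis=0)    # centroid of "trained" patterns
    c0 = X[mask & (y == 0)].mean(axis=0)    # centroid of "novel" patterns
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]
acc = correct / len(X)
```

    In the study itself, per-region accuracies like `acc` are computed for both task-evoked and resting-state signals and compared across the pre- and post-learning sessions.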

  4. Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2017-01-01

    Full Text Available Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, the use of labeled training samples is insufficient for them to achieve accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. Firstly, a 2D multilayer deep belief network is trained with a large amount of unlabeled samples. The nonlinear mapping at the top of this network is extracted as the feature dictionary. Then, this feature dictionary is utilized to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are the background images. Finally, a particle filter is used to estimate vehicle position. We demonstrate experimentally that our proposed vehicle tracking algorithm can effectively restrain drift while also maintaining adaptation to vehicle appearance. Compared with similar algorithms, our method achieves a better tracking success rate and fewer average central-pixel errors.
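
    The final stage, particle-filter position estimation, can be sketched in isolation; the deep-belief-network appearance model is out of scope here. This is a generic bootstrap particle filter for 1-D position with assumed motion and observation noise, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, steps = 500, 30
true_pos, velocity, obs_noise = 0.0, 1.0, 0.5

particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(steps):
    true_pos += velocity
    z = true_pos + rng.normal(0, obs_noise)                  # noisy detection
    particles += velocity + rng.normal(0, 0.2, n_particles)  # predict step
    w = np.exp(-0.5 * ((z - particles) / obs_noise) ** 2)    # likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)          # resample
    particles = particles[idx]

estimate = particles.mean()   # posterior-mean position estimate
```

    In the full tracker the scalar likelihood would come from the deep classifier's confidence for the image patch at each particle's location, rather than from a Gaussian around a detection.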

  5. The influence of visual representations of “the Other” in the system of modern sociocultural communications

    Directory of Open Access Journals (Sweden)

    Kolodii Nataliya

    2016-01-01

    Full Text Available The paper deals with how modern humanities scholarship understands the problem of the visual representation of “the Other”. The authors' tasks were to comprehend the nature and dynamics of visualization and to give a distinct working definition of visual competence. Besides, the purpose of the paper was to state the components of visual competence, its criteria and estimation methods, and in this context to interpret the image of “the Other” decoded in scientific, philosophic, and cultural literature and in daily cultural practices. The final task was to reduce the visual message to a verbal one. The doctrine that the image may simply be read is a common prejudice that prevents the formation of a new approach to visuality. The first step towards the solution of the problem is to describe the techniques which help in potential understanding of the visual structure. Understanding the diversity of images and their possible text analogues should help in establishing the specific requirements which can be, and must be, applicable to visual representations of “the Other”. Representations in visual culture (photography, cinematography, media, painting, advertisement) influence the social image and affect daily social practices and communications. Visual representations are also of interest to social theorists as cultural texts, as they give an idea of the context of cultural production, social interaction, and individual experience.

  6. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    Science.gov (United States)

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis (“brain decoding”) methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlies our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
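
    The distance-hypothesis test itself reduces to a correlation between boundary distances and reaction times. The sketch below uses synthetic stand-in data for the decoded MEG patterns: trials far from a linear category boundary are simulated as producing faster responses, and the negative correlation is then recovered. The boundary, the RT model, and all parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_features = 100, 40

X = rng.normal(size=(n_trials, n_features))   # stand-in activation patterns
w = rng.normal(size=n_features)
w /= np.linalg.norm(w)                        # unit normal of the boundary

distance = np.abs(X @ w)                      # distance from boundary w.x = 0
# Simulated reaction times in ms: farther from the boundary -> faster.
rt = 600.0 - 50.0 * distance + rng.normal(0, 20.0, n_trials)

r = np.corrcoef(distance, rt)[0, 1]           # expected to be negative
```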

  7. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    Science.gov (United States)

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists whose robustness is controlled by high-level, contextual factors.

  8. Are baboons learning "orthographic" representations? Probably not.

    Directory of Open Access Journals (Sweden)

    Maja Linke

    Full Text Available The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved to be amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior, including key lexical similarity effects and the ups and downs in accuracy as learning unfolds, with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans initially involves learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that like the latter, fluent reading involves the massively parallel processing of the low-level features encoding semantic contrasts.
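The Rescorla-Wagner rule at the core of this account is compact enough to sketch. The toy below uses two abstract cues in place of the paper's gradient orientation features, and the trial structure is invented for illustration only:

```python
import numpy as np

def rescorla_wagner_update(weights, cues, outcome, alpha=0.1):
    """One Rescorla-Wagner update: the weights of the active cues move
    toward the outcome in proportion to the prediction error."""
    prediction = weights[cues].sum()
    error = outcome - prediction
    weights[cues] += alpha * error
    return weights

# Toy discrimination: cue 0 signals "word" (outcome 1),
# cue 1 signals "nonword" (outcome 0).
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    if rng.random() < 0.5:
        w = rescorla_wagner_update(w, np.array([0]), 1.0)  # word trial
    else:
        w = rescorla_wagner_update(w, np.array([1]), 0.0)  # nonword trial

print(round(w[0], 2), round(w[1], 2))  # → 1.0 0.0
```

The word cue's weight converges toward the outcome it predicts while the nonword cue's weight never moves (its prediction error is always zero), which is the error-driven discrimination behavior the model exploits.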

  9. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
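The parts-based factorization at the heart of this method is NMF; the manifold-regularization term the authors add is omitted here. A minimal multiplicative-update NMF on synthetic nonnegative data (the sizes and the toy "patch matrix" are illustrative, not the paper's setup):

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ≈ W @ H with
    all factors nonnegative, yielding a parts-based representation."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy data: columns stand in for color features of image patches;
# built as an exact rank-4 nonnegative matrix so NMF can fit it well.
rng = np.random.default_rng(1)
V = rng.random((20, 4)) @ rng.random((4, 30))
W, H = nmf(V, rank=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

Because the updates are purely multiplicative, W and H stay nonnegative throughout, which is what makes the learned factors interpretable as additive "parts".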

  10. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
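The clustering behavior attributed to SOMs can be seen with a single one-dimensional map. The sketch below trains a minimal SOM on two well-separated synthetic clusters (standing in for "identity" vs. "expression" feature statistics); the grid size and learning schedule are illustrative, not VisNet's:

```python
import numpy as np

def train_som(data, grid=8, iters=2000, seed=0):
    """Minimal 1-D self-organising map: the best-matching unit and its
    neighbours move toward each input, with a shrinking neighbourhood
    radius and learning rate."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid, data.shape[1]))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        lr = 0.5 * (1 - t / iters)
        radius = max(grid / 2 * (1 - t / iters), 0.5)
        d = np.abs(np.arange(grid) - bmu)
        h = np.exp(-(d ** 2) / (2 * radius ** 2))
        W += lr * h[:, None] * (x - W)
    return W

# Two well-separated clusters of 3-D feature vectors.
rng = np.random.default_rng(2)
a = rng.normal(0.0, 0.05, (100, 3))
b = rng.normal(1.0, 0.05, (100, 3))
W = train_som(np.vstack([a, b]))

# After training, each unit should specialise for one cluster.
near_a = np.linalg.norm(W - 0.0, axis=1) < np.linalg.norm(W - 1.0, axis=1)
print(near_a.any() and (~near_a).any())
```

The map ends up with distinct groups of units devoted to each cluster, a small-scale analogue of the exclusive identity/expression cell clusters reported for VisNet.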

  11. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  12. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    Science.gov (United States)

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of

  13. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  14. Learning Visual Basic .NET

    CERN Document Server

    Liberty, Jesse

    2009-01-01

    Learning Visual Basic .NET is a complete introduction to VB.NET and object-oriented programming. By using hundreds of examples, this book demonstrates how to develop various kinds of applications--including those that work with databases--and web services. Learning Visual Basic .NET will help you build a solid foundation in .NET.

  15. Representation of Coordination Mechanisms in IMS Learning Design to Support Group-based Learning

    NARCIS (Netherlands)

    Miao, Yongwu; Burgos, Daniel; Griffiths, David; Koper, Rob

    2007-01-01

    Miao, Y., Burgos, D., Griffiths, D., & Koper, R. (2008). Representation of Coordination Mechanisms in IMS Learning Design to Support Group-based Learning. In L. Lockyer, S. Bennet, S. Agostinho & B. Harper (Eds.), Handbook of Research on Learning Design and Learning Objects: Issues, Applications and

  16. Visual management of large scale data mining projects.

    Science.gov (United States)

    Shah, I; Hunter, L

    2000-01-01

    This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.
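The information-theoretic splitting criterion behind the decision-tree induction described above reduces to entropy reduction over a candidate split. A toy sketch (the binary features and labels are invented, not enzyme sequence data):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(feature, labels):
    """Entropy reduction from splitting on a binary feature, the
    criterion used by information-theoretic decision-tree induction."""
    gain = entropy(labels)
    for value in (0, 1):
        mask = feature == value
        gain -= mask.mean() * entropy(labels[mask])
    return gain

# Feature 0 perfectly predicts the class; feature 1 is pure noise.
labels  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
f_good  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
f_noise = np.array([0, 1, 0, 1, 0, 1, 0, 1])
print(information_gain(f_good, labels), information_gain(f_noise, labels))  # → 1.0 0.0
```

The induction algorithm repeatedly picks the feature with the highest gain, so informative sequence features rise to the top of the tree while uninformative ones are ignored.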

  17. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    Science.gov (United States)

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
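Multi-voxel decoding of the kind used here amounts to training a classifier to tell conditions apart from distributed voxel patterns. The sketch below uses a nearest-centroid classifier on simulated patterns as a stand-in for the linear decoders typical of MVPA; the voxel counts and noise levels are made up:

```python
import numpy as np

def decode_accuracy(X_train, y_train, X_test, y_test):
    """Nearest-centroid decoder: classify each test pattern by the
    closest class-mean training pattern."""
    centroids = {c: X_train[y_train == c].mean(axis=0)
                 for c in np.unique(y_train)}
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in X_test]
    return np.mean(np.array(preds) == y_test)

# Simulated voxel patterns: two "objects", each with a distinct
# 50-voxel template plus trial-by-trial noise.
rng = np.random.default_rng(3)
pattern_a, pattern_b = rng.normal(0, 1, 50), rng.normal(0, 1, 50)
make = lambda p, n: p + rng.normal(0, 0.5, (n, 50))
X = np.vstack([make(pattern_a, 40), make(pattern_b, 40)])
y = np.array([0] * 40 + [1] * 40)

acc = decode_accuracy(X[::2], y[::2], X[1::2], y[1::2])
print(acc)
```

Above-chance accuracy on held-out trials is the operational definition of "selective representation" in this kind of analysis: the attended condition can be read out from the pattern, the distractor cannot.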

  18. Exploring the Structure of Spatial Representations

    Science.gov (United States)

    Madl, Tamas; Franklin, Stan; Chen, Ke; Trappl, Robert; Montaldi, Daniela

    2016-01-01

    It has been suggested that the map-like representations that support human spatial memory are fragmented into sub-maps with local reference frames, rather than being unitary and global. However, the principles underlying the structure of these ‘cognitive maps’ are not well understood. We propose that the structure of the representations of navigation space arises from clustering within individual psychological spaces, i.e. from a process that groups together objects that are close in these spaces. Building on the ideas of representational geometry and similarity-based representations in cognitive science, we formulate methods for learning dissimilarity functions (metrics) characterizing participants’ psychological spaces. We show that these learned metrics, together with a probabilistic model of clustering based on the Bayesian cognition paradigm, allow prediction of participants’ cognitive map structures in advance. Apart from insights into spatial representation learning in human cognition, these methods could facilitate novel computational tools capable of using human-like spatial concepts. We also compare several features influencing spatial memory structure, including spatial distance, visual similarity and functional similarity, and report strong correlations between these dimensions and the grouping probability in participants’ spatial representations, providing further support for clustering in spatial memory. PMID:27347681

  19. Implicit visual learning and the expression of learning.

    Science.gov (United States)

    Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael

    2013-03-01

    Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous. Some researchers have observed perceptual learning whereas other authors have not. The review of the literature provides different reasons to explain this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty to express this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT-benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  1. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime needs the effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, considering robust graph regularization, we replace traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.
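The manifold-assumption machinery is easiest to see in the standard graph-Laplacian smoothness penalty; the paper's L1-LLR variant changes how the graph is built, but the penalty has the same shape. A numpy sketch with an ordinary k-NN graph (the data, k, and cluster layout are illustrative):

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric k-nearest-neighbour adjacency matrix."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros_like(D)
    for i in range(len(X)):
        for j in np.argsort(D[i])[1:k + 1]:  # skip self at index 0
            W[i, j] = W[j, i] = 1.0
    return W

def laplacian_smoothness(W, f):
    """f^T L f = 1/2 * sum_ij W_ij (f_i - f_j)^2: small when f varies
    slowly over the graph, i.e. respects the manifold assumption."""
    L = np.diag(W.sum(axis=1)) - W
    return f @ L @ f

# Two tight, well-separated clusters of 10 points each.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
W = knn_graph(X)

smooth = np.array([0.0] * 10 + [1.0] * 10)  # constant within each cluster
rough = np.tile([0.0, 1.0], 10)             # alternating labels cut edges
print(laplacian_smoothness(W, smooth), laplacian_smoothness(W, rough))
```

A labeling that is constant within clusters incurs zero penalty because every k-NN edge stays inside a cluster, while a labeling that ignores the geometry is penalized heavily; adding this term to a loss is exactly what "smooth with respect to the marginal distribution" means above.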

  2. The body voyage as visual representation and art performance

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2011-01-01

    This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal with it. A further aim with the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition

  3. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision also were divided in three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, two times per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).

  4. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sajjad

    2014-02-01

    Full Text Available Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  5. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
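The sparse-recovery step underlying this scheme can be sketched with plain OMP (the paper uses the optimized OOMP variant, which improves atom selection but follows the same greedy template). Dictionary and signal sizes below are arbitrary:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: greedily pick the dictionary atom
    most correlated with the residual, then re-fit the coefficients by
    least squares over all atoms chosen so far."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Random dictionary with unit-norm atoms and a 3-sparse signal.
rng = np.random.default_rng(5)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(128)
x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
y = D @ x_true

x_hat = omp(D, y, sparsity=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

Because the residual is re-orthogonalized against the selected atoms at every step, no atom is picked twice, and for sufficiently incoherent dictionaries the true sparse code is recovered exactly; detecting the true support more reliably is precisely the advantage claimed for OOMP over OMP.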

  6. The application of brain-based learning principles aided by GeoGebra to improve mathematical representation ability

    Science.gov (United States)

    Priatna, Nanang

    2017-08-01

    The use of Information and Communication Technology (ICT) in mathematics instruction will help students in building conceptual understanding. One of the software products used in mathematics instruction is GeoGebra. The program enables simple visualization of complex geometric concepts and helps improve students' understanding of geometric concepts. Instruction applying brain-based learning principles is oriented toward naturally harnessing the brain's potential, enabling students to build their own knowledge. One of the goals of mathematics instruction in school is to develop mathematical communication ability. Mathematical representation is regarded as a part of mathematical communication. It is a description, expression, symbolization, or modeling of mathematical ideas/concepts as an attempt at clarifying meanings or seeking solutions to the problems encountered by students. The research aims to develop a learning model and teaching materials by applying the principles of brain-based learning aided by GeoGebra to improve junior high school students' mathematical representation ability. It adopted a quasi-experimental method with the non-randomized control group pretest-posttest design and the 2x3 factorial model. Based on analysis of the data, it is found that the increase in the mathematical representation ability of students who were treated with mathematics instruction applying the brain-based learning principles aided by GeoGebra was greater than that of students given conventional instruction, both as a whole and based on the categories of students' initial mathematical ability.

  7. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  8. Learning Science Through Visualization

    Science.gov (United States)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments: (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks; (ii) decoding and deconstructing visualizations of an object falling under gravity; (iii) the role of directed instruction in eliciting alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project - quantitative analysis of student performance on written, graded assessments (tests and quizzes) and qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers to non-science majors mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  9. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that this enhanced tendency toward realism was associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke in which the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was actually enhanced after the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    Science.gov (United States)

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  11. Learning Document Semantic Representation with Hybrid Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Yan Yan

    2015-01-01

    it is also an effective way to remove noise from the different document representation types; the DBN can extract deeper abstract features of the document, allowing the model to learn a sufficient semantic representation. At the same time, we explore different input strategies for semantic distributed representation. Experimental results show that our model performs better using word embeddings rather than single words.

  12. Reflexive Learning through Visual Methods

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2014-01-01

    What. This chapter concerns how visual methods and visual materials can support visually oriented, collaborative, and creative learning processes in education. The focus is on facilitation (guiding, teaching) with visual methods in learning processes that are designerly or involve design. Visual methods are exemplified through two university classroom cases about collaborative idea generation processes. The visual methods and materials in the cases are photo elicitation using photo cards, and modeling with LEGO Serious Play sets. Why. The goal is to encourage the reader, whether student or professional, to facilitate with visual methods in a critical, reflective, and experimental way. The chapter offers recommendations for facilitating with visual methods to support playful, emergent designerly processes. The chapter also has a critical, situated perspective. Where. This chapter offers case...

  13. Crowding in Visual Working Memory Reveals Its Spatial Resolution and the Nature of Its Representations.

    Science.gov (United States)

    Tamber-Rosenau, Benjamin J; Fintzi, Anat R; Marois, René

    2015-09-01

    Spatial resolution fundamentally limits any image representation. Although this limit has been extensively investigated for perceptual representations by assessing how neighboring flankers degrade the perception of a peripheral target with visual crowding, the corresponding limit for representations held in visual working memory (VWM) is unknown. In the present study, we evoked crowding in VWM and directly compared resolution in VWM and perception. Remarkably, the spatial resolution of VWM proved to be no worse than that of perception. However, mixture modeling of errors caused by crowding revealed the qualitatively distinct nature of these representations. Perceptual crowding errors arose from both increased imprecision in target representations and substitution of flankers for targets. By contrast, VWM crowding errors arose exclusively from substitutions, which suggests that VWM transforms analog perceptual representations into discrete items. Thus, although perception and VWM share a common resolution limit, exceeding this limit reveals distinct mechanisms for perceiving images and holding them in mind. © The Author(s) 2015.
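
    The mixture-modeling logic can be illustrated numerically. The sketch below is not the authors' model (which fits mixture distributions to continuous-report errors); it simply simulates responses that are either imprecise target reports or flanker substitutions, then recovers the substitution rate by assigning each response to its nearest source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
target = rng.uniform(-90, 90, n)    # reported feature, e.g. orientation
flanker = rng.uniform(-90, 90, n)   # neighboring item's feature

# simulate: 70% noisy target reports, 30% flanker substitutions
is_swap = rng.random(n) < 0.3
resp = np.where(is_swap, flanker, target) + rng.normal(0, 5, n)

# crude mixture estimate: assign each response to the nearest source
# (circular wraparound ignored for simplicity)
closer_to_flanker = np.abs(resp - flanker) < np.abs(resp - target)
est_swap_rate = closer_to_flanker.mean()   # close to the true 0.3
```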

  14. Visual attention to features by associative learning.

    Science.gov (United States)

    Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay

    2014-11-01

    Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned association on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can change by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting with the color associated with the alternative target shape. This bias seemed to be driven by learned association between shapes and colors, and not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning - likely mediated by mechanisms underlying visual object representation - can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors.

  16. 3D surface parameterization using manifold learning for medial shape representation

    Science.gov (United States)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
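
    The manifold-learning step can be sketched with an off-the-shelf algorithm. The paper does not tie the method to one specific learner; Isomap is used below as a common choice, and the synthetic bent sheet stands in for real skeleton or surface points.

```python
import numpy as np
from sklearn.manifold import Isomap

# synthetic stand-in for medial-sheet points: a bent 3D sheet
# whose intrinsic structure is two-dimensional
rng = np.random.default_rng(1)
u, v = rng.uniform(0.0, 1.0, (2, 500))
sheet = np.column_stack([u, v, 0.3 * np.sin(2 * np.pi * u)])

# nonlinear dimensionality reduction recovers an intrinsic 2D
# coordinate system; a planar mesh can then be centered over it
# and mapped back to 3D via the learned embedding
iso = Isomap(n_neighbors=10, n_components=2)
coords2d = iso.fit_transform(sheet)   # shape (500, 2)
```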

  17. Acoustic Tactile Representation of Visual Information

    Science.gov (United States)

    Silva, Pubudu Madhawa

    Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane." We will also present a configuration that can convey a "Venn diagram." We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications. Our experiments were conducted with visually blocked

  18. Mirror representations innate versus determined by experience: a viewpoint from learning theory.

    Science.gov (United States)

    Giese, Martin A

    2014-04-01

    From the viewpoint of pattern recognition and computational learning, mirror neurons form an interesting multimodal representation that links action perception and planning. While it seems unlikely that all details of such representations are specified by the genetic code, robust learning of such complex representations likely requires an appropriate interplay between plasticity, generalization, and anatomical constraints of the underlying neural architecture.

  19. Multimodal representations in collaborative history learning

    NARCIS (Netherlands)

    Prangsma, M.E.

    2007-01-01

    This dissertation focuses on the question: How does making and connecting different types of multimodal representations affect the collaborative learning process and the acquisition of a chronological frame of reference in 12 to 14-year olds in pre vocational education? A chronological frame of

  20. Learning spaces as representational scaffolds for learning conceptual knowledge of system behaviour

    NARCIS (Netherlands)

    Bredeweg, B.; Liem, J.; Beek, W.; Salles, P.; Linnebank, F.; Wolpers, M.; Kirschner, P.A.; Scheffel, M.; Lindstaedt, S.; Dimitrova, V.

    2010-01-01

    Scaffolding is a well-known approach to bridge the gap between novice and expert capabilities in a discovery-oriented learning environment. This paper discusses a set of knowledge representations referred to as Learning Spaces (LSs) that can be used to support learners in acquiring conceptual

  1. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

    Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
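
    A single extreme learning machine layer, the building block that is stacked in the deep variant, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: input-to-hidden weights are random and fixed, and only the output weights are solved in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=256, reg=1e-3):
    """Fit an ELM: random fixed hidden weights, ridge-regularized
    closed-form output weights (no backpropagation)."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage: two well-separated Gaussian classes, one-hot targets
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
Y = np.vstack([np.tile([1.0, 0.0], (50, 1)), np.tile([0.0, 1.0], (50, 1))])
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

Because training reduces to one linear solve, fitting stays fast even when such layers are stacked, which is the efficiency the abstract refers to.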

  2. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal

    2016-01-01

    Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  3. Teaching and Learning about Force with a Representational Focus: Pedagogy and Teacher Change

    Science.gov (United States)

    Hubber, Peter; Tytler, Russell; Haslam, Filocha

    2010-01-01

    A large body of research in the conceptual change tradition has shown the difficulty of learning fundamental science concepts, yet conceptual change schemes have failed to convincingly demonstrate improvements in supporting significant student learning. Recent work in cognitive science has challenged this purely conceptual view of learning, emphasising the role of language, and the importance of personal and contextual aspects of understanding science. The research described in this paper is designed around the notion that learning involves the recognition and development of students’ representational resources. In particular, we argue that conceptual difficulties with the concept of force are fundamentally representational in nature. This paper describes a classroom sequence in force that focuses on representations and their negotiation, and reports on the effectiveness of this perspective in guiding teaching, and in providing insight into student learning. Classroom sequences involving three teachers were videotaped using a combined focus on the teacher and groups of students. Video analysis software was used to capture the variety of representations used, and sequences of representational negotiation. Stimulated recall interviews were conducted with teachers and students. The paper reports on the nature of the pedagogies developed as part of this representational focus, its effectiveness in supporting student learning, and on the pedagogical and epistemological challenges negotiated by teachers in implementing this approach.

  4. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    Science.gov (United States)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images in the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to address this problem. The first method applies recent sparse representation theory, while the second learns the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. Once such connections are built, it is possible at test stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results were satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the proposed method decreases the EER by 39.4% in relative terms.
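
    The sparse-representation half of the approach can be sketched with standard tools. Random data stands in for vectorized iris-texture patches, and the cross-sensor coupling of two dictionaries, which is the paper's actual contribution, is omitted; this shows only the generic learn-dictionary-then-sparse-code step.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# stand-in for flattened 8x8 grayscale iris-texture patches
patches = rng.standard_normal((200, 64))
patches -= patches.mean(axis=1, keepdims=True)   # remove per-patch DC

# learn a dictionary of basic texture atoms, then sparse-code each
# patch with at most 5 atoms via orthogonal matching pursuit
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(patches).transform(patches)   # (200, 100) sparse coeffs
recon = codes @ dico.components_               # patch reconstructions
```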

  5. Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition

    OpenAIRE

    Villeneuve, Jérôme; Cadoz, Claude; Castagné, Nicolas

    2015-01-01

    The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models an...

  6. Learned reward association improves visual working memory.

    Science.gov (United States)

    Gong, Mengyuan; Li, Sheng

    2014-04-01

    Statistical regularities in the natural environment play a central role in adaptive behavior. Among other regularities, reward association is potentially the most prominent factor that influences our daily life. Recent studies have suggested that pre-established reward association yields strong influence on the spatial allocation of attention. Here we show that reward association can also improve visual working memory (VWM) performance when the reward-associated feature is task-irrelevant. We established the reward association during a visual search training session, and investigated the representation of reward-associated features in VWM by the application of a change detection task before and after the training. The results showed that the improvement in VWM was significantly greater for items in the color associated with high reward than for those in low reward-associated or nonrewarded colors. In particular, the results from control experiments demonstrate that the observed reward effect in VWM could not be sufficiently accounted for by attentional capture toward the high reward-associated item. This was further confirmed when the effect of attentional capture was minimized by presenting the items in the sample and test displays of the change detection task with the same color. The results showed significantly larger improvement in VWM performance when the items in a display were in the high reward-associated color than those in the low reward-associated or nonrewarded colors. Our findings suggest that, apart from inducing space-based attentional capture, the learned reward association could also facilitate the perceptual representation of high reward-associated items through feature-based attentional modulation.

  7. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.; Volkova, Svitlana

    2017-05-17

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e., the change in a word's contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus – VKontakte, collected during the Russia-Ukraine crisis in 2014–2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks, including real-time event forecasting in social media.
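
    The core measurement, how far a word's vector moves between time slices once the separately trained embedding spaces are aligned, can be sketched as below. The align-then-compare recipe (orthogonal Procrustes plus cosine distance) is a standard choice for this task, not necessarily the authors' exact pipeline, and the vectors are toy stand-ins.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n_words, dim = 50, 20

# embeddings from two time slices; slice 2 lives in an arbitrarily
# rotated space, as separately trained models typically do
E1 = rng.standard_normal((n_words, dim))
R, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
E2 = E1 @ R
E2[0] += 2.0 * rng.standard_normal(dim)   # word 0 genuinely drifts

# align slice-2 space onto slice-1 space, then score per-word shift
Q, _ = orthogonal_procrustes(E2, E1)
E2_aligned = E2 @ Q

def cos_dist(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

shift = np.array([cos_dist(E1[i], E2_aligned[i]) for i in range(n_words)])
# the drifting word stands out with the largest shift score
```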

  8. Improving the learning of clinical reasoning through computer-based cognitive representation.

    Science.gov (United States)

    Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A

    2014-01-01

    Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Twenty-nine students in Year 3 or higher from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' report of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.

  9. Children's Learning from Touch Screens: A Dual Representation Perspective.

    Science.gov (United States)

    Sheehan, Kelly J; Uttal, David H

    2016-01-01

    Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium - that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children's learning from touch screens, we offer the perspective that children's ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children's developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children's ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children's cognitive abilities when assessing whether young children can learn from it as a symbolic medium.

  10. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology

    Science.gov (United States)

    Marino, Alexandria C.; Mazer, James A.

    2016-01-01

    During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820

  11. Learning style, judgements of learning, and learning of verbal and visual information.

    Science.gov (United States)

    Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger

    2017-08-01

    The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.

  12. Learning modulation of odor representations: new findings from Arc-indexed networks

    Directory of Open Access Journals (Sweden)

    Qi eYuan

    2014-12-01

Full Text Available We first review our understanding of odor representations in the rodent olfactory bulb and anterior piriform cortex. We then consider learning-induced representation changes. Finally, we describe the perspective on network representations gained from examining Arc-indexed odor networks of awake rats. Arc-indexed networks are sparse and distributed, consistent with current views. However, Arc also provides representations of repeated odors, and these repeated-odor representations are quite variable. Sparse representations are assumed to be compact and reliable memory codes; Arc suggests this is not necessarily the case. The variability seen is consistent with electrophysiology in awake animals and may reflect top-down cortical modulation of context. Arc indexing shows that distinct odors share larger than predicted neuron pools. These may be low-threshold neuronal subsets. Learning's effect on Arc-indexed representations is to increase the stable, or overlapping, component of rewarded odor representations. This component can decrease for similar odors when their discrimination is rewarded. The learning effects seen are supported by electrophysiology, but mechanisms remain to be elucidated.

  13. Collaborative Random Faces-Guided Encoders for Pose-Invariant Face Representation Learning.

    Science.gov (United States)

    Shao, Ming; Zhang, Yizhe; Fu, Yun

    2018-04-01

Learning discriminative face representations for pose-invariant face recognition has been identified as a critical issue in visual learning systems. The challenge lies in the drastic changes of facial appearance between the test face and the registered face. To that end, we propose a high-level feature learning framework called "collaborative random faces (RFs)-guided encoders" for this problem. The contributions of this paper are threefold. First, we propose a novel supervised autoencoder that is able to capture the high-level identity feature despite pose variations. Second, we enrich the identity features by replacing the target values of conventional autoencoders with random signals (RFs in this paper), which are unique for each subject under different poses. Third, we further improve the performance of the framework by incorporating deep convolutional neural network facial descriptors and by linking discriminative identity features from different RFs to form augmented identity features. Finally, we conduct face identification experiments on the Multi-PIE database, and face verification experiments on the Labeled Faces in the Wild and YouTube Faces databases, reporting face recognition rates and verification accuracy with receiver operating characteristic curves. In addition, discussions of model parameters and connections with existing methods are provided. These experiments demonstrate that our learning system handles pose variations fairly well.

  14. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    Science.gov (United States)

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  15. An Intrinsic Value System for Developing Multiple Invariant Representations with Incremental Slowness Learning

    Directory of Open Access Journals (Sweden)

    Matthew David Luciw

    2013-05-01

Full Text Available Curiosity-Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically motivated invariance learning, which shows how curiosity enables the orderly formation of multiple stable sensory representations through which the agent can simplify its complex sensory input. Here, we first discuss the computational properties of the CD-MISFA model itself, followed by a discussion of neurophysiological analogs fulfilling similar functional roles. CD-MISFA combines (1) unsupervised representation learning through the slowness principle, (2) generation of an intrinsic reward signal through the learning progress of the developing features, and (3) balancing of exploration and exploitation in order to maximize learning progress and quickly learn multiple feature sets for perceptual simplification. Experimental results on synthetic observations and on the iCub robot show that the intrinsic value system is an essential component of representation learning. Further, the model explores such that the representations are typically learned in order from least to most costly, as predicted by the theory of Artificial Curiosity.
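The slowness principle at the core of this model can be illustrated with a minimal linear Slow Feature Analysis sketch (a toy illustration of the principle only, not the CD-MISFA model; the signals and mixing here are invented for the example): whiten the input, then take the direction whose temporal derivative has the least variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input: a slowly varying latent linearly mixed with a fast noise signal.
t = np.linspace(0, 8 * np.pi, 2000)
slow = np.sin(t)                      # slow latent feature
fast = rng.normal(size=t.size)        # fast nuisance signal
X = np.column_stack([slow + 0.1 * fast, fast + 0.1 * slow])

# Linear SFA: (1) whiten the input, (2) find the direction whose
# temporal derivative has minimal variance (the "slowest" feature).
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
evals, evecs = np.linalg.eigh(cov)
W_whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = Xc @ W_whiten

dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
d_evals, d_evecs = np.linalg.eigh(dcov)
slow_feature = Z @ d_evecs[:, 0]      # eigenvector with smallest derivative variance

# The extracted feature should track the slow latent (up to sign).
corr = abs(np.corrcoef(slow_feature, slow)[0, 1])
print(round(corr, 2))
```

Running this recovers the slow latent almost perfectly despite the fast component dominating the raw derivative of the input.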

  16. Improving the learning of clinical reasoning through computer-based cognitive representation

    Directory of Open Access Journals (Sweden)

    Bian Wu

    2014-12-01

Full Text Available Objective: Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods: Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results: A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' reports of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions: The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.

  17. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Directory of Open Access Journals (Sweden)

    Antje Kohnle

    2017-11-01

Full Text Available Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them (“representational competence”) is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students’ spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  18. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Science.gov (United States)

    Kohnle, Antje; Passante, Gina

    2017-12-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students' spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  19. Understanding How to Build Long-Lived Learning Collaborators

    Science.gov (United States)

    2016-03-16

language. We also made progress on using qualitative representations for strategic thinking, where continuous processes and causal knowledge about... discrimination in learning, and dynamic encoding strategies to improve visual encoding for learning via analogical generalization. We showed that spatial concepts... a 20,000 sketch corpus to examine the tradeoffs involved in visual representation and analogical generalization.

  20. Student Visual Communication of Evolution

    Science.gov (United States)

    Oliveira, Alandeom W.; Cook, Kristin

    2017-01-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring…

  1. Children's Learning from Touch Screens: A Dual Representation perspective

    Directory of Open Access Journals (Sweden)

    Kelly Jean Sheehan

    2016-08-01

Full Text Available Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium – that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children’s learning from touch screens, we offer the perspective that children’s ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children’s developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children’s ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children’s cognitive abilities when assessing whether young children can learn from it as a symbolic medium.

  2. Apparatus for producing a visual representation of a radiographic scan

    International Nuclear Information System (INIS)

    Hounsfield, G.N.

    1976-01-01

    An apparatus is disclosed for providing a visual representation of the absorption or transmission coefficients of the elements of a two dimensional matrix of elements notionally defined in a cross-sectional plane through a body. The representation is in the form of an analogue display comprising superimposed lines of information scanned on the surface of a suitable screen, the brightness of each line being indicative of the absorption suffered by penetrating radiation on traversing a respective path through said plane of the body. The orientation of each scanned line depends on the orientation of the respective path with respect to the body. 7 Claims, 4 Drawing Figures

  3. The effects of technology on making conjectures: linking multiple representations in learning iterations

    OpenAIRE

    San Diego, Jonathan; Aczel, James; Hodgson, Barbara

    2004-01-01

    Numerous studies have suggested that different technologies have different effects on students' learning of mathematics, particularly in facilitating students' graphing skills and preferences for representations. For example, there are claims that students who prefer algebraic representations can experience discomfort in learning mathematics concepts using computers (Weigand and Weller, 2001; Villarreal, 2000) whilst students using calculators preferred graphical representation (Keller and Hi...

  4. Unsupervised learning of a steerable basis for invariant image representations

    Science.gov (United States)

    Bethge, Matthias; Gerwinn, Sebastian; Macke, Jakob H.

    2007-02-01

There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique, so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of 'steerability' and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. 'complex cells') from sequences of natural images.
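The notion of steerability the paper builds on can be demonstrated with the classic example of a rotation-steerable basis (a standard textbook construction, not the paper's CCA-based learning algorithm): the x- and y-derivatives of a Gaussian span the derivative filter at every orientation.

```python
import numpy as np

# Classic steerable basis: the x- and y-derivatives of a 2D Gaussian.
# The derivative filter at any angle theta is an exact linear
# combination of the two basis filters:
#   G_theta = cos(theta) * Gx + sin(theta) * Gy
size, sigma = 15, 2.0
ax = np.arange(size) - size // 2
xx, yy = np.meshgrid(ax, ax)
g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
Gx = -xx / sigma**2 * g     # partial derivative of g w.r.t. x
Gy = -yy / sigma**2 * g     # partial derivative of g w.r.t. y

def oriented_filter(theta):
    """Steer the basis to the directional derivative along angle theta."""
    return np.cos(theta) * Gx + np.sin(theta) * Gy

# Construct the rotated derivative filter directly, for comparison.
theta = np.pi / 5
u = xx * np.cos(theta) + yy * np.sin(theta)   # coordinate along theta
G_direct = -u / sigma**2 * g

# The steered and directly constructed filters agree to machine precision.
err = np.abs(oriented_filter(theta) - G_direct).max()
print(err)
```

Only two stored filters are needed to synthesize the response at any orientation, which is the property the paper's learned bases generalize to rotations of whole image patches.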

  5. The body voyage as visual representation and art performance.

    Science.gov (United States)

    Olsén, Jan Eric

    2011-01-01

This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements, and examines how contemporary artists working with performances and installations deal with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and to relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition by the French artist Christian Boltanski, which gives a somewhat different meaning to the idea of the body voyage.

  6. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    Science.gov (United States)

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  7. Learning word vector representations based on acoustic counts

    OpenAIRE

    Ribeiro, Sam; Watts, Oliver; Yamagishi, Junichi

    2017-01-01

    This paper presents a simple count-based approach to learning word vector representations by leveraging statistics of cooccurrences between text and speech. This type of representation requires two discrete sequences of units defined across modalities. Two possible methods for the discretization of an acoustic signal are presented, which are then applied to fundamental frequency and energy contours of a transcribed corpus of speech, yielding a sequence of textual objects (e.g. words, syllable...

  8. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

Physics is a subject related to students' daily experience. Therefore, before students study it formally in class, they already have visualizations and prior knowledge about natural phenomena and can extend these themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models align with and are built on the right concepts. A previous study held in MAN 1 Muna found that teachers did not pay attention to students' mental models in the learning process. As a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem solving based learning model with a multiple representations approach. This study is a pre-experimental design with one group pretest-posttest, conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem solving test on the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for scores x > 7, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3. The results show that a problem solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.

  9. Effects of prior knowledge on learning from different compositions of representations in a mobile learning environment

    NARCIS (Netherlands)

    T.-C. Liu (Tzu-Chien); Y.-C. Lin (Yi-Chun); G.W.C. Paas (Fred)

    2014-01-01

Two experiments examined the effects of prior knowledge on learning from different compositions of multiple representations in a mobile learning environment on plant leaf morphology for primary school students. Experiment 1 compared the learning effects of a mobile learning environment...

  10. Intelligent Fault Diagnosis of Rotary Machinery Based on Unsupervised Multiscale Representation Learning

    Science.gov (United States)

    Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun

    2017-11-01

The performance of traditional vibration based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to produce diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, achieving better performance with higher accuracy and stability than traditional approaches.
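The coarse-grain-then-concatenate pipeline can be sketched in a few lines (a toy illustration, assuming a standard non-overlapping moving-average coarse-graining; the feature extractor here is a simple statistical stand-in, not the sparse filtering step used in the paper, and the signal is synthetic):

```python
import numpy as np

def coarse_grain(signal, scale):
    """Non-overlapping moving average: a common way to build
    multiscale versions of a vibration signal."""
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

def simple_features(signal):
    """Stand-in feature extractor (the paper learns features at this
    step with sparse filtering, an unsupervised algorithm)."""
    return np.array([signal.std(), np.abs(signal).mean(), signal.max()])

rng = np.random.default_rng(1)
vibration = np.sin(np.linspace(0, 100, 4096)) + 0.3 * rng.normal(size=4096)

# Extract features at each scale, then concatenate into one
# multiscale representation to feed a supervised classifier.
scales = [1, 2, 4, 8]
multiscale_rep = np.concatenate(
    [simple_features(coarse_grain(vibration, s)) for s in scales]
)
print(multiscale_rep.shape)
```

Each scale contributes its own feature block, so the classifier sees fast and slow temporal structure side by side.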

  11. How to make a good animation: A grounded cognition model of how visual representation design affects the construction of abstract physics knowledge

    Directory of Open Access Journals (Sweden)

    Zhongzhou Chen

    2014-04-01

Full Text Available Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instruction are either created based on existing conventions or designed according to the instructor’s intuition, which leads to significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating “perceptual symbols”: the basic cognitive units used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for the construction of the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received three different versions of a multimedia tutorial teaching the integral expression of electric potential. The three versions differed only in the details of the visual representation design, and only one contained perceptual features that activate perceptual symbols essential for constructing the idea of “accumulation.” On a subsequent post-test, participants receiving this version of the tutorial significantly outperformed those who received the other two versions, which were designed to mimic conventional visual representations used in classrooms.

  12. Discriminative object tracking via sparse representation and online dictionary learning.

    Science.gov (United States)

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching schema. This algorithm consists of two parts: local sparse coding with an online updated discriminative dictionary for tracking (SOD part), and keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes information about both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching schema to improve the performance of the SOD part. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than traditional methods at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
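The core operation, coding an image patch as a sparse combination of dictionary atoms, can be sketched with a standard ISTA solver for the lasso objective (a generic sparse-coding illustration on synthetic data, not the authors' tracker or their dictionary-update scheme):

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA
    (iterative shrinkage-thresholding)."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(2)
# Over-complete dictionary: 64-dim patches, 128 unit-norm atoms.
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)

# A patch that truly is a combination of two atoms, plus a little noise.
x = 1.0 * D[:, 3] + 0.8 * D[:, 57] + 0.01 * rng.normal(size=64)

a = sparse_code(D, x)
support = np.flatnonzero(np.abs(a) > 0.1)
print(support)
```

The recovered support identifies the generating atoms; in the tracking setting, which atoms activate (foreground vs. background) is what carries the discriminative signal.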

  13. Spelling pronunciation and visual preview both facilitate learning to spell irregular words.

    Science.gov (United States)

    Hilte, Maartje; Reitsma, Pieter

    2006-12-01

Spelling pronunciations are hypothesized to be helpful in building up relatively stable phonologically underpinned orthographic representations, particularly for learning words with irregular phoneme-grapheme correspondences. In a four-week computer-based training, the efficacy of spelling pronunciations and of previewing the spelling patterns on learning to spell loan words in Dutch, originating from French and English, was examined in skilled and less skilled spellers of varying ages. Reading skills were taken into account. Overall, compared to normal pronunciation, spelling pronunciation facilitated the learning of the correct spelling of irregular words, but it appeared to be no more effective than previewing. Differences between training conditions appeared to fade with older spellers. Less skilled young spellers seemed to profit more from visual examination of the word than from practice with spelling pronunciations. The findings appear to indicate that spelling pronunciation and allowing a preview can both be effective ways to learn correct spellings of orthographically unpredictable words, irrespective of age or spelling ability.

  14. A review of visual memory capacity: Beyond individual items and towards structured representations

    OpenAIRE

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2011-01-01

    Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and struct...

  15. Using Virtual Microscopy to Scaffold Learning of Pathology: A Naturalistic Experiment on the Role of Visual and Conceptual Cues

    Science.gov (United States)

    Nivala, Markus; Saljo, Roger; Rystedt, Hans; Kronqvist, Pauliina; Lehtinen, Erno

    2012-01-01

    New representational technologies, such as virtual microscopy, create new affordances for medical education. In the article, a study on the following two issues is reported: (a) How does collaborative use of virtual microscopy shape students' engagement with and learning from virtual slides of tissue specimen? (b) How do visual and conceptual cues…

  16. Information processing in illness representation: Implications from an associative-learning framework.

    Science.gov (United States)

    Lowe, Rob; Norman, Paul

    2017-03-01

The common-sense model (Leventhal, Meyer, & Nerenz, 1980) outlines how illness representations are important for understanding adjustment to health threats. However, the psychological processes giving rise to these representations are little understood. To address this, an associative-learning framework was used to model the low-level process mechanics of illness representation and coping-related decision making. Associative learning was modeled within a connectionist network simulation. Two types of information were paired: Illness identities (indigestion, heart attack, cancer) were paired with illness-belief profiles (cause, timeline, consequences, control/cure), and specific illness beliefs were paired with coping procedures (family doctor, emergency services, self-treatment). To emulate past experience, the network was trained with these pairings. As an analogue of a current illness event, the trained network was exposed to partial information (illness identity or select representation beliefs) and its response recorded. The network (a) produced the appropriate representation profile (beliefs) for a given illness identity, (b) prioritized expected coping procedures, and (c) highlighted circumstances in which activated representation profiles could include self-generated or counterfactual beliefs. Encoding and activation of illness beliefs can occur spontaneously and automatically; conventional questionnaire measurement may be insensitive to these automatic representations. Furthermore, illness representations may comprise a coherent set of nonindependent beliefs (a schema) rather than a collective of independent beliefs. Incoming information may generate a "tipping point," dramatically changing the active schema as a new illness-knowledge set is invoked. Finally, automatic activation of well-learned information can lead to the erroneous interpretation of illness events, with implications for [inappropriate] coping efforts. (PsycINFO Database Record (c) 2017 APA, all rights reserved)
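A minimal analogue of the described pairing-and-recall behavior is a single-layer linear associator trained with the delta rule (a toy sketch only; the belief profiles below are invented for illustration and the authors' network is not specified at this level of detail):

```python
import numpy as np

# Pair illness identities (one-hot cues) with illness-belief profiles
# in a single-layer linear associator trained with the delta rule.
identities = {"indigestion": 0, "heart attack": 1, "cancer": 2}
# Hypothetical belief profiles: [acute cause, long timeline,
# severe consequences, controllable/curable]
beliefs = np.array([
    [1.0, 0.0, 0.1, 0.9],   # indigestion
    [1.0, 0.1, 1.0, 0.4],   # heart attack
    [0.2, 1.0, 1.0, 0.3],   # cancer
])

W = np.zeros((3, 4))
lr = 0.2
for _ in range(500):                        # emulate past experience
    for name, i in identities.items():
        x = np.zeros(3)
        x[i] = 1.0                          # identity cue
        error = beliefs[i] - x @ W          # delta rule update
        W += lr * np.outer(x, error)

# Present a partial cue (identity alone) and read out the learned profile.
cue = np.zeros(3)
cue[identities["heart attack"]] = 1.0
recalled = cue @ W
print(np.round(recalled, 2))
```

Presenting the identity alone reproduces the full trained belief profile, which is the sense in which the representation behaves as a coherent schema rather than a set of independently queried beliefs.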

  17. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally … facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

  18. Zero-Shot Learning by Generating Pseudo Feature Representations

    OpenAIRE

    Lu, Jiang; Li, Jin; Yan, Ziang; Zhang, Changshui

    2017-01-01

    Zero-shot learning (ZSL) is a challenging task aiming at recognizing novel classes without any training instances. In this paper we present a simple but high-performance ZSL approach by generating pseudo feature representations (GPFR). Given the dataset of seen classes and side information of unseen classes (e.g. attributes), we synthesize feature-level pseudo representations for novel concepts, which allows us access to the formulation of unseen class predictor. Firstly we design a Joint Att...

  19. Separate visual representations for perception and for visually guided behavior

    Science.gov (United States)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background, or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This work also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get: displays must be designed with either perception or visually guided behavior in mind.

  20. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline.Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and ApplicationsThe book covers sensors fo

  1. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    Science.gov (United States)

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which JCR and joint sparse representation (JSR) algorithms first fuse and learn the feature representations from multi-channel EEG signals, respectively. Multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
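
    The core of collaborative representation is coding a signal over a dictionary with an l2 (ridge) penalty, which has a closed-form solution. The sketch below uses synthetic data and an illustrative penalty value, not the paper's JCR model or its EEG features; it shows only the single-view CR coding step.

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 20))   # dictionary: 64-dim features, 20 atoms
y = rng.standard_normal(64)         # one synthetic EEG feature vector

lam = 0.1                           # ridge penalty; value is illustrative
# Collaborative representation: closed-form ridge-regression code that uses
# all atoms collaboratively, unlike the sparse (l1) alternative.
code = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

reconstruction = D @ code
print(code.shape, np.linalg.norm(y - reconstruction) < np.linalg.norm(y))  # -> (20,) True
```

    A joint (multi-view) version would stack per-channel dictionaries and couple their codes; the closed-form structure above is what makes CR coding cheap compared with iterative sparse coding.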

  2. Visual Literacy in Bloom: Using Bloom's Taxonomy to Support Visual Learning Skills

    Science.gov (United States)

    Arneson, Jessie B.; Offerdahl, Erika G.

    2018-01-01

    "Vision and Change" identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of…

  3. The Focus of Attention in Visual Working Memory: Protection of Focused Representations and Its Individual Variation.

    Science.gov (United States)

    Heuer, Anna; Schubö, Anna

    2016-01-01

    Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, performance in a visual working memory task, as well as the CDA/SPCN and the N2pc (ERP components associated with visual working memory and attentional processes), were analysed. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. Results show that attentional selection serves to robustly protect relevant representations in the focus of attention, while unselected representations which may become relevant again still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and showed stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.

  4. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, Bianca; Boonstra, F. Nienke; Cox, Ralf F. A.; van Rens, Ger; Cillessen, Antonius H. N.

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  5. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; van Rens, G.H.M.B.; Cillessen, A.H.N.

    2013-01-01

    Purpose. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Methods. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  6. Perceptual learning in children with visual impairment improves near visual acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.; Rens, G. van; Cillessen, A.H.

    2013-01-01

    PURPOSE: This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS: Participants were 45 children with visual impairment and 29 children with normal vision. Children

  7. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; Rens, G.H.M.B. van; Cillessen, A.H.N.

    2013-01-01

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  8. Visual acuity and visual skills in Malaysian children with learning disabilities

    Directory of Open Access Journals (Sweden)

    Muzaliha MN

    2012-09-01

    Mohd-Nor Muzaliha, Buang Nurhamiza, Adil Hussein, Abdul-Rani Norabibas, Jaafar Mohd-Hisham-Basrun, Abdullah Sarimah (School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia); Seo-Wei Leo, Ismail Shatriah (Paediatric Ophthalmology and Strabismus Unit, Department of Ophthalmology, Tan Tock Seng Hospital, Singapore). Background: There is limited data in the literature concerning the visual status and skills in children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities aged between 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia, from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the

  9. Children’s Learning from Touch Screens: A Dual Representation Perspective

    Science.gov (United States)

    Sheehan, Kelly J.; Uttal, David H.

    2016-01-01

    Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium – that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children’s learning from touch screens, we offer the perspective that children’s ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children’s developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children’s ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children’s cognitive abilities when assessing whether young children can learn from it as a symbolic medium. PMID:27570516

  10. Visual working memory capacity for color is independent of representation resolution.

    Science.gov (United States)

    Ye, Chaoxiong; Zhang, Lingcong; Liu, Taosheng; Li, Hong; Liu, Qiang

    2014-01-01

    The relationship between visual working memory (VWM) capacity and the resolution of representation has been extensively investigated. Several recent ERP studies using orientation (or arrow) stimuli suggest that there is an inverse relationship between VWM capacity and representation resolution. However, different results have been obtained in studies using color stimuli. This could be due to important differences in the experimental paradigms used in previous studies. We examined whether the same relationship between capacity and resolution holds for color information. Participants performed a color change detection task while their electroencephalography was recorded. We manipulated representation resolution by asking participants to detect either a salient change (low-resolution) or a subtle change (high-resolution) in color. We used an ERP component known as contralateral delay activity (CDA) to index the amount of information maintained in VWM. The result demonstrated the same pattern for both low- and high-resolution conditions, with no difference between conditions. This result suggests that VWM always represents a fixed number of approximately 3-4 colors regardless of the resolution of representation.

  11. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach.

    Directory of Open Access Journals (Sweden)

    Goker Erdogan

    2015-11-01

    People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models-that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.

  12. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
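
    VisNet's associative learning rule with a short-term memory trace can be sketched in a few lines. The update below follows the general form of a trace rule (the weight change is driven by a decaying trace of recent postsynaptic activity rather than the instantaneous activity alone); the data, the learning-rate and trace constants, and the simple norm-based weight bounding are illustrative choices, not VisNet's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, eta, alpha = 50, 0.8, 0.1   # trace constant and learning rate (illustrative)
w = rng.random(n_inputs) * 0.01       # small random initial synaptic weights

def trace_learning(views, w, trace=0.0):
    """Trace rule sketch: dw = alpha * trace * x, where the trace is a
    short-term memory of postsynaptic activity across successive views,
    so different transforms of one object become associated onto the
    same output neuron."""
    for x in views:                        # successive transforms of one object
        y = float(w @ x)                   # postsynaptic activation
        trace = (1 - eta) * y + eta * trace
        w = w + alpha * trace * x          # associate current input with the trace
    return w / np.linalg.norm(w)           # simple normalization to bound weights

views = rng.random((5, n_inputs))          # synthetic "views" of one object
w = trace_learning(views, w)
print(round(float(np.linalg.norm(w)), 3))  # -> 1.0
```

    Because the trace persists across the temporally adjacent views, inputs that occur close together in time (typically transforms of the same object) strengthen the same weights, which is how temporal continuity yields transform-invariant responses.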

  13. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T eRolls

    2012-06-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  14. Combining generative and discriminative representation learning for lung CT analysis with convolutional restricted Boltzmann machines

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2016-01-01

    The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both…
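
    For reference, the generative objective mentioned above is typically trained with contrastive divergence. The sketch below is a minimal CD-1 update for a plain (non-convolutional) Bernoulli RBM with biases omitted for brevity; it illustrates only the generative part, whereas the paper's contribution is to add a discriminative classification term to this objective.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vis, n_hid, lr = 6, 4, 0.1
W = 0.01 * rng.standard_normal((n_vis, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM
    (biases omitted). The weight change is the difference between the
    data-driven and reconstruction-driven correlations."""
    h0 = sigmoid(v0 @ W)                        # hidden probabilities given data
    h_sample = (rng.random(h0.shape) < h0) * 1.0
    v1 = sigmoid(h_sample @ W.T)                # reconstruction of the visibles
    h1 = sigmoid(v1 @ W)
    return W + lr * (np.outer(v0, h0) - np.outer(v1, h1))

v = rng.integers(0, 2, n_vis).astype(float)
W = cd1_step(v, W)
print(W.shape)  # -> (6, 4)
```

    A convolutional variant replaces the dense weight matrix with shared filters, and the classification variant adds label units so that the learned filters are shaped by both objectives.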

  15. Representation and Integration: Combining Robot Control, High-Level Planning, and Action Learning

    DEFF Research Database (Denmark)

    Petrick, Ronald; Kraft, Dirk; Mourao, Kira

    We describe an approach to integrated robot control, high-level planning, and action effect learning that attempts to overcome the representational difficulties that exist between these diverse areas. Our approach combines ideas from robot vision, knowledge-level planning, and connectionist machine learning, and focuses on the representational needs of these components. We also make use of a simple representational unit called an instantiated state transition fragment (ISTF) and a related structure called an object-action complex (OAC). The goal of this work is a general approach for inducing high-level action specifications, suitable for planning, from a robot’s interactions with the world. We present a detailed overview of our approach and show how it supports the learning of certain aspects of a high-level representation from low-level world state information.

  16. Grounded Object and Grasp Representations in a Cognitive Architecture

    DEFF Research Database (Denmark)

    Kraft, Dirk

    …developed. This work presents a system that is able to learn autonomously about objects and applicable grasps in an unknown environment through exploratory manipulation, and to then use this grounded knowledge in a planning setup to address complex tasks. A set of different subsystems is needed to achieve this. The topics are ordered so that we proceed from the more general integration works towards the works describing the individual components. The first chapter gives an overview of the system that is able to learn a grounded visual object representation and a grounded grasp representation. In the following part, we describe how these grounding procedures can be embedded in a three-level cognitive architecture. Our initial work to use a tactile sensor to enrich the object representations as well as allow for more complex actions is presented here as well. Since our system is concerned with learning about…

  17. How to Make a Good Animation: A Grounded Cognition Model of How Visual Representation Design Affects the Construction of Abstract Physics Knowledge

    Science.gov (United States)

    Chen, Zhongzhou; Gladding, Gary

    2014-01-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…

  18. Feature and Region Selection for Visual Learning.

    Science.gov (United States)

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
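
    The "latent weights optimized jointly with the classifier" idea can be sketched in a toy linear setting. This is not the paper's reduced-gradient algorithm or its kernelized formulation: it is a plain gradient descent on a squared hinge loss with invented synthetic data, where each feature gets a latent scale `s` learned alongside the classifier weights `w`, and an l1 term pushes irrelevant feature weights toward zero.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 10))                     # synthetic features
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100))

s = np.ones(10)    # latent per-feature weights (the "selection" variables)
w = np.zeros(10)   # linear classifier weights
lr = 0.01
for _ in range(500):
    Z = X * s                                          # features scaled by latent weights
    margin = y * (Z @ w)
    g = np.where(margin < 1, -2 * (1 - margin) * y, 0.0)   # squared-hinge gradient factor
    grad_w = (g[:, None] * Z).mean(axis=0)
    grad_s = (g[:, None] * X * w).mean(axis=0)
    w -= lr * grad_w
    s -= lr * (grad_s + 0.01 * np.sign(s))             # l1 shrinks unhelpful features

acc = float(((X * s) @ w * y > 0).mean())
print(acc)
```

    Reading off |s| after training gives the intermediate visualization the abstract describes: large latent weights mark the features the classifier actually relies on.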

  19. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches for distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches for distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming
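
    The retrieval side of such a framework reduces to a weighted Hamming distance over learned binary codes. The sketch below uses fixed, invented bit weights (in the paper they would be produced by the boosting procedure) purely to show how per-bit weights let semantically informative bits dominate the distance.

```python
import numpy as np

def weighted_hamming(a, b, weights):
    """Weighted Hamming distance: sum the weights of the bits that differ."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.sum(weights * (a != b)))

query    = [1, 0, 1, 1, 0]
match    = [1, 0, 1, 0, 0]   # differs only in a low-weight bit
nonmatch = [0, 1, 1, 1, 0]   # differs in two high-weight bits
weights  = np.array([2.0, 2.0, 1.5, 0.2, 0.2])  # illustrative "boosted" weights

print(weighted_hamming(query, match, weights))     # -> 0.2
print(weighted_hamming(query, nonmatch, weights))  # -> 4.0
```

    With uniform weights the two candidates would be ranked by raw bit differences alone; the learned weights are what let the metric encode semantic as well as visual similarity.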

  20. Problem representation and mathematical problem solving of students of varying math ability.

    Science.gov (United States)

    Krawec, Jennifer L

    2014-01-01

    The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.

  1. ANALYSIS OF MATHEMATIC REPRESENTATION ABILITY OF JUNIOR HIGH SCHOOL STUDENTS IN THE IMPLEMENTATION OF GUIDED INQUIRY LEARNING

    Directory of Open Access Journals (Sweden)

    Yumiati Yumiati

    2017-09-01

    The objective of this research is to analyse the difference in the improvement of students' mathematical representation ability between those taught with guided inquiry learning and those taught with conventional learning. The research used an experimental method with a nonequivalent control group design at one school. The subjects were eighth-grade students of Dharma Karya UT Middle School: class 8-2 was selected as the control class (19 students) and class 8-3 as the experiment class (20 students). Before and after the learning process, both classes were given a test of mathematical representation with a reliability of 0.70 (high category). The gain in mathematical representation for the guided inquiry group was 0.41 (medium category), while for the conventional group it was 0.26 (low category). In conclusion, the hypothesis that the mathematical representation ability of students taught with guided inquiry is better than that of students taught conventionally is accepted.
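
    The 0.41 and 0.26 figures with their "medium"/"low" labels are consistent with the standard normalized-gain calculation and its usual category cut-offs (g ≥ 0.7 high, 0.3 ≤ g < 0.7 medium, g < 0.3 low). The pre/post scores below are invented for illustration; only the formula and the categories are standard.

```python
def normalized_gain(pre, post, max_score):
    """Normalized gain <g> = (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def category(g):
    """Usual gain categories: high >= 0.7, medium >= 0.3, low otherwise."""
    return "high" if g >= 0.7 else "medium" if g >= 0.3 else "low"

# Hypothetical scores chosen to reproduce the reported 0.41 gain.
g = normalized_gain(pre=40, post=64.6, max_score=100)
print(round(g, 2), category(g))  # -> 0.41 medium
```

    Dividing by the headroom (max - pre) is what makes gains comparable between groups that start from different pretest levels.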

  2. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., that the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation that considers the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of the atoms is modeled as a graph regularization. Then, combining group sparsity and graph regularization, DL-GSGR is presented and solved by alternating between group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be made small enough that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of the proposed denoising and fusion approaches.
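
    The "group sparsity" structure described above is commonly enforced with a block (group) soft-thresholding step, the proximal operator of the group-lasso penalty. The sketch below illustrates that building block only; it is not the paper's DL-GSGR solver, and the group layout is invented for illustration:

```python
import numpy as np

def group_soft_threshold(x: np.ndarray, groups: list, lam: float) -> np.ndarray:
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2.

    Shrinks each group of coefficients toward zero as a block, so entire
    groups are either kept (rescaled) or zeroed out together.
    """
    out = np.zeros_like(x, dtype=float)
    for idx in groups:
        g = x[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            out[idx] = (1.0 - lam / norm) * g
    return out

coeffs = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]          # two coefficient clusters
sparse = group_soft_threshold(coeffs, groups, lam=1.0)
# first group: norm 5 -> scaled by (1 - 1/5); second group: norm < 1 -> zeroed
```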

  3. Representational Learning for Fault Diagnosis of Wind Turbine Equipment: A Multi-Layered Extreme Learning Machines Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Xin Yang

    2016-05-01

    Full Text Available Reliable and quick-response fault diagnosis is crucial for the wind turbine generator system (WTGS) to avoid unplanned interruption and to reduce the maintenance cost. However, the condition data generated from a WTGS operating in a tough environment are always dynamic and high-dimensional. To address these challenges, we propose a new fault diagnosis scheme composed of multiple extreme learning machines (ELMs) in a hierarchical structure, where a forwarding list of ELM layers is concatenated and each layer is processed independently for its corresponding role. The framework enables both representational feature learning and fault classification. The multi-layered ELM based representational learning covers functions including data preprocessing, feature extraction, and dimension reduction. An ELM based autoencoder is trained to generate a hidden-layer output weight matrix, which is then used to transform the input dataset into a new feature representation. Compared with traditional feature extraction methods, which may empirically wipe off some "insignificant" feature information that in fact conveys undiscovered important knowledge, the introduced representational learning method avoids this loss of information content. The computed output weight matrix projects the high-dimensional input vector into a compressed and orthogonally weighted distribution. A final single ELM layer is applied for fault classification. Unlike the greedy layer-wise learning method adopted in back-propagation based deep learning (DL), the proposed framework does not need iterative fine-tuning of parameters. To evaluate its experimental performance, comparison tests are carried out on a wind turbine generator simulator. The results show that the proposed diagnostic framework achieves the best performance among the compared approaches in terms of accuracy and efficiency in multiple-fault detection for wind turbines.
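
    The ELM autoencoder step described above has a simple closed-form core: random, untrained hidden weights and a single least-squares solve for the output weights, which are then reused to re-represent the input. A minimal sketch under those assumptions; the sizes and activation function are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_autoencoder(X: np.ndarray, n_hidden: int) -> np.ndarray:
    """ELM autoencoder sketch: random hidden mapping, closed-form output weights.

    The output weight matrix beta is solved by least squares (no iterative
    fine-tuning) and then reused to project X into a new feature space.
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta = np.linalg.pinv(H) @ X           # solve H @ beta ~= X
    return X @ beta.T                      # new representation of the input

X = rng.standard_normal((50, 8))
features = elm_autoencoder(X, n_hidden=4)  # compressed 8 -> 4 dimensions
```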

  4. Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning

    Science.gov (United States)

    Carroll, Fiona; Kop, Rita

    2016-01-01

    The visual is a dominant mode of information retrieval and understanding; however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak relative to its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…

  5. Learning of Multimodal Representations With Random Walks on the Click Graph.

    Science.gov (United States)

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationships between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named the multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set, Clickture, and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than other state-of-the-art methods.
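
    The click graph and truncated random walks described above can be sketched on a toy bipartite graph. The walk sampler below only illustrates how short walks surface implicit query-image relevance (two queries linked through a shared clicked image); it is not the MRW-NN model itself, and the graph contents are invented:

```python
import random

# Toy bipartite click graph: queries <-> images, edges are observed clicks.
click_graph = {
    "q:red car":    ["img:car1", "img:car2"],
    "q:sports car": ["img:car2"],
    "img:car1":     ["q:red car"],
    "img:car2":     ["q:red car", "q:sports car"],
}

def truncated_random_walk(graph: dict, start: str, length: int, seed: int = 0) -> list:
    """Sample one short (truncated) walk; vertices that co-occur on such walks
    capture implicit relevance, e.g. two queries connected via shared clicks."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

walk = truncated_random_walk(click_graph, "q:sports car", length=4)
```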

  6. Imprinting modulates processing of visual information in the visual wulst of chicks

    Directory of Open Access Journals (Sweden)

    Uchimura Motoaki

    2006-11-01

    Full Text Available Abstract Background Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis of visual imprinting, we focused on visual information processing. Results A lesion of the visual wulst, which is functionally similar to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object is one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period, and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color to which the chick was imprinted. Conclusion These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  7. The Effects of Static and Dynamic Visual Representations as Aids for Primary School Children in Tasks of Auditory Discrimination of Sound Patterns. An Intervention-based Study.

    Directory of Open Access Journals (Sweden)

    Jesus Tejada

    2018-02-01

    Full Text Available It has been proposed that non-conventional presentations of visual information could be very useful as a scaffolding strategy in the learning of Western music notation. As a result, this study attempted to determine whether static and dynamic presentation modes of visual information have any effect on the recognition of sound patterns. An intervention-based quasi-experimental design was adopted with two groups of fifth-grade students in a Spanish city. Students did tasks involving discrimination, auditory recognition, and symbolic association of the sound patterns with non-musical representations, either static images (S group) or dynamic images (D group). The results showed neither statistically significant differences between the scores of the D and S groups, nor influence of the covariates on the dependent variable, although statistically significant intra-group differences were found for both groups. This suggests that both types of graphic format could be effective as digital mediators in the learning of Western musical notation.

  8. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between the two visualizations and explain the differences in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea with four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies.

  9. Student Visual Communication of Evolution

    Science.gov (United States)

    Oliveira, Alandeom W.; Cook, Kristin

    2017-06-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.

  10. Contextual effects in visual working memory reveal hierarchically structured memory representations.

    Science.gov (United States)

    Brady, Timothy F; Alvarez, George A

    2015-01-01

    Influential slot and resource models of visual working memory make the assumption that items are stored in memory as independent units, and that there are no interactions between them. Consequently, these models predict that the number of items to be remembered (the set size) is the primary determinant of working memory performance, and therefore these models quantify memory capacity in terms of the number and quality of individual items that can be stored. Here we demonstrate that there is substantial variance in display difficulty within a single set size, suggesting that limits based on the number of individual items alone cannot explain working memory storage. We asked hundreds of participants to remember the same sets of displays, and discovered that participants were highly consistent in terms of which items and displays were hardest or easiest to remember. Although a simple grouping or chunking strategy could not explain this individual-display variability, a model with multiple, interacting levels of representation could explain some of the display-by-display differences. Specifically, a model that includes a hierarchical representation of items plus the mean and variance of sets of the colors on the display successfully accounts for some of the variability across displays. We conclude that working memory representations are composed only in part of individual, independent object representations, and that a major factor in how many items are remembered on a particular display is interitem representations such as perceptual grouping, ensemble, and texture representations.
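
    The hierarchical account above, in which an individual item's memory interacts with ensemble statistics of the display, can be illustrated by a toy estimator that pulls an item's remembered value toward the display mean. The mixing weight is hypothetical, not a parameter fitted in the study:

```python
import statistics

def hierarchical_recall(item_value: float, display_values: list,
                        item_weight: float = 0.7) -> float:
    """Two-level memory sketch: the recalled value mixes the individual item
    with the ensemble mean of the whole display (hypothetical weighting)."""
    ensemble_mean = statistics.mean(display_values)
    return item_weight * item_value + (1 - item_weight) * ensemble_mean

display = [10.0, 20.0, 30.0, 80.0]             # e.g. hues on a color wheel
recalled = hierarchical_recall(80.0, display)  # outlier biased toward the mean
```

    On this sketch, an outlying item is systematically misremembered toward the display's ensemble mean, which is one way interitem (ensemble) representations could produce display-by-display differences at a fixed set size.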

  11. Representations for Semantic Learning Webs: Semantic Web Technology in Learning Support

    Science.gov (United States)

    Dzbor, M.; Stutt, A.; Motta, E.; Collins, T.

    2007-01-01

    Recent work on applying semantic technologies to learning has concentrated on providing novel means of accessing and making use of learning objects. However, this is unnecessarily limiting: semantic technologies will make it possible to develop a range of educational Semantic Web services, such as interpretation, structure-visualization, support…

  12. Is a picture worth a thousand words? The interaction of visual display and attribute representation in attenuating framing bias

    Directory of Open Access Journals (Sweden)

    Eyal Gamliel

    2013-07-01

    Full Text Available The attribute framing bias is a well-established phenomenon, in which an object or an event is evaluated more favorably when presented in a positive frame, such as "the half-full glass", than when presented in the complementary negative frame. Given that previous research showed that visual aids can attenuate this bias, the current research explores the factors underlying the attenuating effect of visual aids. In a series of three experiments, we examined how the attribute framing bias is affected by two factors: (a) the display mode (verbal versus visual) and (b) the representation of the critical attribute (whether one outcome, either the positive or the negative, is represented, or both outcomes are represented). In Experiment 1, marginal attenuation of the attribute framing bias was obtained when a verbal description of either positive or negative information was accompanied by a corresponding visual representation. In Experiment 2, similar marginal attenuation was obtained when both positive and negative outcomes were verbally represented. In Experiment 3, where the verbal description represented both positive and negative outcomes, significant attenuation was obtained when it was accompanied by a visual display that represented a single outcome, and complete attenuation, totally eliminating the framing bias, was obtained when it was accompanied by a visual display that represented both outcomes. Thus, our findings showed that the interaction between the display mode and the representation of the critical attribute attenuated the framing bias. Theoretical and practical implications of the interaction between verbal description, visual aids, and representation of the critical attribute are discussed, and future research is suggested.

  13. Priming Contour-Deleted Images: Evidence for Immediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  14. Visualization of diversity in large multivariate data sets.

    Science.gov (United States)

    Pham, Tuan; Hess, Rob; Ju, Crystal; Zhang, Eugene; Metoyer, Ronald

    2010-01-01

    Understanding the diversity of a set of multivariate objects is an important problem in many domains, including ecology, college admissions, investing, machine learning, and others. However, to date, very little work has been done to help users achieve this kind of understanding. Visual representation is especially appealing for this task because it offers the potential to allow users to efficiently observe the objects of interest in a direct and holistic way. Thus, in this paper, we attempt to formalize the problem of visualizing the diversity of a large (more than 1000 objects), multivariate (more than 5 attributes) data set as one worth deeper investigation by the information visualization community. In doing so, we contribute a precise definition of diversity, a set of requirements for diversity visualizations based on this definition, and a formal user study design intended to evaluate the capacity of a visual representation for communicating diversity information. Our primary contribution, however, is a visual representation, called the Diversity Map, for visualizing diversity. An evaluation of the Diversity Map using our study design shows that users can judge elements of diversity consistently and as or more accurately than when using the only other representation specifically designed to visualize diversity.
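
    The paper contributes its own precise definition of diversity, which is not reproduced in this abstract. As a stand-in, a common proxy for the spread of one categorical attribute is normalized Shannon entropy, sketched below; this metric is an assumption for illustration, not the Diversity Map's definition:

```python
import math
from collections import Counter

def attribute_entropy(values) -> float:
    """Normalized Shannon entropy of one categorical attribute:
    0 = all objects identical, 1 = objects spread evenly over the categories."""
    counts = Counter(values)
    n = sum(counts.values())
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

majors = ["cs", "cs", "cs", "cs"]             # homogeneous cohort
regions = ["north", "south", "east", "west"]  # evenly spread cohort
# attribute_entropy(majors) is 0.0; attribute_entropy(regions) is 1.0
```

    With more than one attribute, averaging per-attribute entropies gives a crude overall diversity score for a multivariate data set.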

  15. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-06-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing Lp,q mixed norms, we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
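
    Joint sparsity across particles of the kind described above is typically enforced row-wise on the coefficient matrix, so that a dictionary template is kept or dropped for all particles together. The closed-form L2,1 proximal step below is one update of the kind APG iterates; it is a sketch of that building block, not the full MTT tracker:

```python
import numpy as np

def prox_l21(C: np.ndarray, lam: float) -> np.ndarray:
    """Proximal step for the L2,1 mixed norm: each row of the coefficient
    matrix (one dictionary template across all particles/tasks) is shrunk
    as a unit, which enforces joint sparsity across tasks."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * C

C = np.array([[3.0, 4.0],    # a template used by both particles -> kept, rescaled
              [0.2, 0.1]])   # a template barely used -> dropped jointly
C_sparse = prox_l21(C, lam=1.0)
```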

  16. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we employed transcranial magnetic stimulation over both the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS and the sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of image data on the web, there is an increasing demand for algorithms capable of understanding visual information automatically. Deep learning, serving as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  18. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of
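
    The prestimulus alpha-power measure discussed above is conventionally the spectral power in roughly the 8-12 Hz band over a short window before stimulus onset (here, the 300 ms window mentioned in the abstract). A plain periodogram sketch; the sampling rate and test signal are illustrative, not the study's recording parameters:

```python
import numpy as np

def alpha_power(signal: np.ndarray, fs: float, band=(8.0, 12.0)) -> float:
    """Mean spectral power in the alpha band (8-12 Hz by convention),
    computed with a plain FFT periodogram over the given window."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(power[mask].mean())

fs = 250.0                        # Hz, illustrative sampling rate
t = np.arange(0, 0.3, 1.0 / fs)   # 300 ms prestimulus window
eeg = np.sin(2 * np.pi * 10 * t)  # pure 10 Hz "alpha" oscillation
strong = alpha_power(eeg, fs)
weak = alpha_power(np.zeros_like(eeg), fs)  # flat signal -> zero alpha power
```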

  19. How online learning modules can improve the representational fluency and conceptual understanding of university physics students

    Science.gov (United States)

    Hill, M.; Sharma, M. D.; Johnston, H.

    2015-07-01

    The use of online learning resources as core components of university science courses is increasing. Learning resources range from summaries, videos, and simulations, to question banks. Our study set out to develop, implement, and evaluate research-based online learning resources in the form of pre-lecture online learning modules (OLMs). The aim of this paper is to share our experiences with those using, or considering implementing, online learning resources. Our first task was to identify student learning issues in physics to base the learning resources on. One issue with substantial research is conceptual understanding, the other with comparatively less research is scientific representations (graphs, words, equations, and diagrams). We developed learning resources on both these issues and measured their impact. We created weekly OLMs which were delivered to first year physics students at The University of Sydney prior to their first lecture of the week. Students were randomly allocated to either a concepts stream or a representations stream of online modules. The programme was first implemented in 2013 to trial module content, gain experience and process logistical matters and repeated in 2014 with approximately 400 students. Two validated surveys, the Force and Motion Concept Evaluation (FMCE) and the Representational Fluency Survey (RFS) were used as pre-tests and post-tests to measure learning gains while surveys and interviews provided further insights. While both streams of OLMs produced similar positive learning gains on the FMCE, the representations-focussed OLMs produced higher gains on the RFS. Conclusions were triangulated with student responses which indicated that they have recognized the benefit of the OLMs for their learning of physics. Our study shows that carefully designed online resources used as pre-instruction can make a difference in students’ conceptual understanding and representational fluency in physics, as well as make them more aware

  20. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    Science.gov (United States)

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a 2nd sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  1. Recommendations for benefit-risk assessment methodologies and visual representations

    DEFF Research Database (Denmark)

    Hughes, Diana; Waddingham, Ed; Mt-Isa, Shahrul

    2016-01-01

    PURPOSE: The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. METHODS: Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. RESULTS: A general pathway through the case studies…

  2. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  3. Role of working memory in transformation of visual and motor representations for use in mental simulation.

    Science.gov (United States)

    Gabbard, Carl; Lee, Jihye; Caçola, Priscila

    2013-01-01

    This study examined the role of visual working memory when transforming visual representations to motor representations in the context of motor imagery. Participants viewed randomized number sequences of three, four, and five digits, and then reproduced the sequence by finger tapping, either using motor imagery or actually executing the movements; movement duration was recorded. One group viewed the stimulus for three seconds and responded immediately, while the second group had a three-second view followed by a three-second blank-screen delay before responding. As expected, the delay group's times were longer in each condition and at each digit load. Correlations between imagined and executed actions (temporal congruency) were significantly positive for both groups; interestingly, the delay group's values were significantly stronger. That outcome prompts speculation that the delay influenced the congruency between motor representation and actual execution.

  4. A blended learning concept for an engineering course in the field of color representation and display technologies

    Science.gov (United States)

    Vauderwange, Oliver; Wozniak, Peter; Javahiraly, Nicolas; Curticapean, Dan

    2016-09-01

    The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial to the success of the teaching scenario. A mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media, is the main topic of the paper. Blended learning describes the didactic alternation of attendance periods and online periods. The e-learning environment for the online period is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations, and augmented reality (AR). Visualizations are effective tools for imparting learning content with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities are thereby stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies. This engages and motivates the students and deepens their knowledge. The authors will present their experience with the implemented blended learning scenario in this field of optics and photonics. All aspects of the learning environment will be introduced.

  5. Visual and Verbal Learning in a Genetic Metabolic Disorder

    Science.gov (United States)

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  6. Digital media Experiences for Visual Learning

    DEFF Research Database (Denmark)

    Buhl, Mie

    2013-01-01

    for new tools and new theoretical approaches with which to understand them. The article argues that the current phase of social practices and technological development makes it difficult to distinguish between experience with digital media and mediated experiences, because of the use of renegotiation of......Visual learning is a topic for didactic studies at all levels of education, brought about by an increasing use of digital media. Digital media give rise to discussions of how learning experiences come about from various media resources that generate new learning situations. New situations call...... about by the nature of diverse digital artefacts, 3. the learning potentials in using mobile devices for integrating the body in visual perception processes.

  7. Age-related declines of stability in visual perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Learning from Balance Sheet Visualization

    Science.gov (United States)

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  9. Studying Visual Displays: How to Instructionally Support Learning

    Science.gov (United States)

    Renkl, Alexander; Scheiter, Katharina

    2017-01-01

    Visual displays are very frequently used in learning materials. Although visual displays have great potential to foster learning, they also pose substantial demands on learners so that the actual learning outcomes are often disappointing. In this article, we pursue three main goals. First, we identify the main difficulties that learners have when…

  10. Constructed vs. received graphical representations for learning about scientific controversy: Implications for learning and coaching

    Science.gov (United States)

    Cavalli-Sforza, Violetta Laura Maria

    Students in science classes hardly ever study scientific controversy, especially in terms of the different types of arguments used to support and criticize theories and hypotheses. Yet, learning the reasons for scientific debate and scientific change is an important part of appreciating the nature of the scientific enterprise and communicating it to the non-scientific world. This dissertation explores the usefulness of graphical representations in teaching students about scientific arguments. Subjects participating in an extended experiment studied instructional materials and used the Belvedere graphical interface to analyze texts drawn from an actual scientific debate. In one experimental condition, subjects used a box-and-arrow representation whose primitive graphical elements had preassigned meanings tailored to the domain of instruction. In the other experimental condition, subjects could use the graphical elements as they wished, thereby creating their own representation. The development of a representation, by forcing a deeper analysis, can potentially yield a greater understanding of the domain under study. The results of the research suggest two conclusions. From the perspective of learning target concepts, asking subjects to develop their own representation may not hurt those subjects who gain a sufficient understanding of the possibilities of abstract representation. The risks are much greater for less able subjects because, if they develop a representation that is inadequate for expressing the target concepts, they will use those concepts less or not at all. From the perspective of coaching subjects as they diagram their analysis of texts, a predefined representation has significant advantages. If it is appropriately expressive for the task, it provides a common language and clearer shared meaning between the subject and the coach. It also enables the coach to understand subjects' analysis more easily, and to evaluate it more effectively against the

  11. iSee: Teaching Visual Learning in an Organic Virtual Learning Environment

    Science.gov (United States)

    Han, Hsiao-Cheng

    2017-01-01

    This paper presents a three-year participatory action research project focusing on the graduate level course entitled Visual Learning in 3D Animated Virtual Worlds. The purpose of this research was to understand "How the virtual world processes of observing and creating can best help students learn visual theories". The first cycle of…

  12. Influence of TANDUR Learning to Students's Mathematical Representation and Student Self-Concept

    Directory of Open Access Journals (Sweden)

    Dimas Fajar Maulana

    2017-11-01

    study is all students of class X, totaling 350 students, in one of the SMA Negeri (state senior high schools) in Cirebon city. From this population, a sample of 60 students was taken using a simple random sampling technique and divided into two groups: one receiving TANDUR learning and one receiving conventional learning. The results showed that the TANDUR learning model had an effect of 66.9% on students' self-concept and of 75.5% on students' mathematical representation ability. Meanwhile, the correlation between self-concept and students' mathematical representation was 74.3%.

  13. Representations of Mathematics, their teaching and learning: an exploratory study

    Directory of Open Access Journals (Sweden)

    Maria Margarida Graça

    2004-03-01

    Full Text Available This work describes an exploratory study, the first of the four phases of a more inclusive research project, which aims at understanding how to promote, in a group of Mathematics teachers, a representational evolution leading to a practice that allows meaningful learning of Mathematics. The methodology of this study is qualitative. Data gathering was based on questioning; all the subjects of the sample (n=48) carried out a projective task (a hierarchical evocation test) and answered a written individual questionnaire. Data analysis was based on a set of previously defined categories. The main purpose of this research was to identify, characterize, and describe the representations of Mathematics and of its teaching and learning in a group of 48 subjects from different social groups, in order to obtain indicators for the construction of the instruments to be used in the next phases of the research. The main results of this study are the following: (1) we were able to identify and characterize different representations of the teaching and learning of Mathematics with respect to their epistemological, pedagogical, emotional, and sociocultural dimensions; (2) we were also able to identify limitations, difficulties, and items to be included or rephrased in the instruments used.

  14. Shape representations in the primate dorsal visual stream

    Directory of Open Access Journals (Sweden)

    Tom eTheys

    2015-04-01

    Full Text Available The primate visual system extracts object shape information for object recognition in the ventral visual stream. Recent research has demonstrated that object shape is also processed in the dorsal visual stream, which is specialized for spatial vision and the planning of actions. A number of studies have investigated the coding of 2D shape in the anterior intraparietal area (AIP), one of the end-stage areas of the dorsal stream, which has been implicated in the extraction of affordances for the purpose of grasping. These findings challenge the current understanding of area AIP as a critical stage in the dorsal stream for the extraction of object affordances. The representation of three-dimensional (3D) shape has been studied in two interconnected areas known to be critical for object grasping: area AIP and area F5a in the ventral premotor cortex (PMv), to which AIP projects. In both areas, neurons respond selectively to 3D shape defined by binocular disparity, but the latency of the neural selectivity is approximately 10 ms longer in F5a than in AIP, consistent with its higher position in the hierarchy of cortical areas. Furthermore, F5a neurons were more sensitive to small amplitudes of 3D curvature and could detect subtle differences in 3D structure more reliably than AIP neurons. In both areas, 3D-shape-selective neurons were co-localized with neurons showing motor-related activity during object grasping in the dark, indicating a close convergence of visual and motor information on the same clusters of neurons.

  15. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    Science.gov (United States)

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  16. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-11-09

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing lp,q mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p=q=1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers. © 2012 Springer Science+Business Media New York.
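For the case p = 2, q = 1, the mixed norm that enforces joint sparsity across particle representations has a closed-form proximal operator (row-wise group soft-thresholding), which is what makes each APG iteration a cheap closed-form update. A stripped-down illustrative sketch of that operator, not the authors' implementation:

```python
import math

def l21_norm(W):
    """Mixed l2,1 norm: sum over rows of each row's Euclidean norm.
    Rows play the role of dictionary atoms shared across particle tasks."""
    return sum(math.sqrt(sum(w * w for w in row)) for row in W)

def prox_l21(W, lam):
    """Proximal operator of lam * ||W||_{2,1}: row-wise group
    soft-thresholding, the closed-form update used inside APG."""
    out = []
    for row in W:
        norm = math.sqrt(sum(w * w for w in row))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * w for w in row])
    return out

W = [[3.0, 4.0],   # strong atom: shrunk but kept for all tasks
     [0.1, 0.2]]   # weak atom: zeroed for all tasks at once
W_shrunk = prox_l21(W, 1.0)
```

Zeroing a whole row for every task simultaneously is precisely the joint sparsity that the l2,1 regularizer buys over handling particles independently.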

  17. Learning a Mid-Level Representation for Multiview Action Recognition

    Directory of Open Access Journals (Sweden)

    Cuiwei Liu

    2018-01-01

    Full Text Available Recognizing human actions in videos is an active topic with broad commercial potential. Most existing action recognition methods assume the same camera view during both training and testing, and the performance of these single-view approaches may thus be severely affected by camera movement and variation of viewpoints. In this paper, we address this problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests to exploit a discriminative mid-level representation for videos from multiple cameras. In the first step, subvolumes of continuous human-centered figures are extracted from the original videos. In the next step, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. Then a set of multitask random forests is built upon multiview cuboids sampled at adjacent positions to construct an integrated mid-level representation for the multiview subvolumes of one action. Finally, a random forest classifier is employed to predict the action category from the learned representation. Experiments conducted on the multiview IXMAS action dataset illustrate that the proposed method can effectively recognize human actions depicted in multiview videos.
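A common way to turn forest-routed descriptors into a mid-level video representation is to histogram the leaves that each cuboid descriptor reaches in every tree and concatenate the per-tree histograms. The sketch below uses random threshold trees purely for illustration; the paper's multitask forests are trained discriminatively:

```python
import random

def leaf_index(tree, x):
    """Route a descriptor x down a depth-d tree of (feature, threshold)
    splits; the sequence of left/right decisions encodes the leaf id."""
    idx = 0
    for feat, thr in tree:
        idx = 2 * idx + (1 if x[feat] > thr else 0)
    return idx

def forest_representation(forest, descriptors, depth):
    """Mid-level representation: per-tree histogram of leaf occupancies
    over all sampled cuboid descriptors, normalized and concatenated."""
    n_leaves = 2 ** depth
    rep = []
    for tree in forest:
        hist = [0] * n_leaves
        for x in descriptors:
            hist[leaf_index(tree, x)] += 1
        total = sum(hist)
        rep.extend(h / total for h in hist)
    return rep

random.seed(0)
depth, n_features = 2, 5
forest = [[(random.randrange(n_features), random.random())
           for _ in range(depth)] for _ in range(3)]
cuboids = [[random.random() for _ in range(n_features)] for _ in range(20)]
rep = forest_representation(forest, cuboids, depth)  # 3 trees x 4 leaves
```

The concatenated histogram would then feed the final random forest classifier that predicts the action category.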

  18. Using Technology to Support Visual Learning Strategies

    Science.gov (United States)

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  19. Cognitive Strategies for Learning from Static and Dynamic Visuals.

    Science.gov (United States)

    Lewalter, D.

    2003-01-01

    Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)

  20. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. When memory is not enough: Electrophysiological evidence for goal-dependent use of working memory representations in guiding visual attention

    Science.gov (United States)

    Carlisle, Nancy B.; Woodman, Geoffrey F.

    2014-01-01

    Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items that these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796

  2. A visual tracking method based on deep learning without online model updating

    Science.gov (United States)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In view of the advantages of deep learning for feature representation, the deep model SSD (Single Shot MultiBox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging tracking factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
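Combining a color histogram with a HOG feature, as the record describes, amounts to concatenating two normalized histograms and scoring detector candidates against a template. A simplified, hypothetical sketch (real HOG uses gradient magnitudes, cell grids, and block normalization; here both features are reduced to plain histograms):

```python
def normalized_hist(values, bins, lo, hi):
    """Histogram of scalar values into `bins` equal-width bins on [lo, hi)."""
    hist = [0.0] * bins
    for v in values:
        b = min(bins - 1, int((v - lo) / (hi - lo) * bins))
        hist[b] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def combined_descriptor(pixels, gradient_angles):
    """Concatenate a color (intensity) histogram with a gradient-orientation
    histogram: a toy stand-in for color-histogram + HOG fusion."""
    return (normalized_hist(pixels, 8, 0.0, 256.0)
            + normalized_hist(gradient_angles, 9, 0.0, 180.0))

def hist_intersection(a, b):
    """Similarity in [0, 2] for two concatenated unit-mass histograms;
    the candidate maximizing it is selected as the tracked object."""
    return sum(min(x, y) for x, y in zip(a, b))

template = combined_descriptor([0.0, 64.0, 128.0, 192.0], [20.0, 100.0, 160.0])
candidate = combined_descriptor([0.0, 64.0, 130.0, 200.0], [25.0, 95.0, 160.0])
score = hist_intersection(template, candidate)  # higher = better match
```

Among the candidate windows proposed by the detector, the one with the highest intersection score against the template descriptor would be kept.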

  3. Visual Descriptor Learning for Predicting Grasping Affordances

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang

    2016-01-01

    by the task of grasping unknown objects given visual sensor information. The contributions of this thesis stem from three works that all relate to the task of grasping unknown objects, with particular focus on the visual representation part of the problem. First, an investigation of a visual feature space...... consisting of surface features was performed. Dimensions of the visual space were varied and the effects were evaluated on the task of grasping unknown objects. The evaluation was performed using a novel probabilistic grasp prediction approach based on neighbourhood analysis. The resulting success......-rates for predicting grasps were between 75% and 90%, depending on the object class. The investigations also provided insights into the importance of selecting a proper visual feature space when utilising it for predicting affordances. As a consequence of the gained insights, a semi-local surface feature, the Sliced...

  4. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
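"Normalization" here is the canonical divisive computation, in which each unit's driven response is divided by the pooled activity of its neighbors plus a semisaturation constant. A minimal sketch (with illustrative, not empirical, parameter values) of why pitting two stimuli against each other diagnoses it:

```python
def divisive_normalization(drives, n=2.0, sigma=1.0):
    """Canonical divisive normalization: each unit's driven response is
    divided by the summed (exponentiated) drive of the whole pool plus a
    semisaturation constant sigma."""
    pool = sigma ** n + sum(d ** n for d in drives)
    return [d ** n / pool for d in drives]

alone = divisive_normalization([10.0, 0.0])    # stimulus presented alone
paired = divisive_normalization([10.0, 10.0])  # pitted against a competitor
# The identical 10-unit drive is suppressed when a competing stimulus is
# added: the mutual-suppression signature the study probed for.
```

The study's finding is that perception shows this mutual suppression, while representations held in working memory do not.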

  5. An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts

    Science.gov (United States)

    Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris

    2011-01-01

    This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…

  6. Design of multiple representations e-learning resources based on a contextual approach for the basic physics course

    Science.gov (United States)

    Bakri, F.; Muliyati, D.

    2018-05-01

    This research aims to design e-learning resources with multiple representations, based on a contextual approach, for the Basic Physics Course. The research uses research and development methods in accordance with the Dick & Carey strategy. The development was carried out in the digital laboratory of the Physics Education Department, Mathematics and Science Faculty, Universitas Negeri Jakarta. The product development process following the Dick & Carey strategy produced an e-learning design for the Basic Physics Course that presents multiple representations within a contextual learning syntax. The representations used in the design of basic physics learning include: concept maps, videos, figures, tables of experimental data, charts of those tables, verbal explanations, mathematical equations, example problems and solutions, and exercises. The multiple representations are presented in the contextual learning stages: relating, experiencing, applying, transferring, and cooperating.

  7. Visual teaching and learning in the fields of engineering

    Directory of Open Access Journals (Sweden)

    Kyvete S. Shatri

    2015-11-01

    Full Text Available Engineering education today faces numerous demands that are closely connected with a globalized economy. One of these requirements is to produce the engineers of the future, who are characterized by strong analytical skills, creativity, ingenuity, professionalism, intercultural communication, and leadership. To achieve this, effective teaching methods should be used to facilitate and enhance students' learning and their performance in general, making them able to cope with the market demands of a globalized economy. One such method is visualization, an important means of increasing student learning. A visual approach in science and engineering also enhances communication and critical thinking and supports an analytical approach to various problems. This research therefore investigates the effect of the use of visualization in the process of teaching and learning in engineering fields and encourages teachers and students to use visual methods for teaching and learning. The results of this research highlight the positive effect that the use of visualization has on students' learning process and their overall performance. In addition, innovative teaching methods contribute to improving the situation. Visualization motivates students to learn, making them more cooperative and developing their communication skills.

  8. Learning QlikView data visualization

    CERN Document Server

    Pover, Karl

    2013-01-01

    A practical and fast-paced guide that gives you all the information you need to start developing charts from your data. Learning QlikView Data Visualization is for anybody interested in performing powerful data analysis and crafting insightful data visualization, independent of any previous knowledge of QlikView. Experience with spreadsheet software will help you understand QlikView functions.

  9. The Concept of Happiness as Conveyed in Visual Representations: Analysis of the Work of Early Childhood Educators

    Science.gov (United States)

    Russo-Zimet, Gila; Segel, Sarit

    2014-01-01

    This research was designed to examine how early-childhood educators pursuing their graduate degrees perceive the concept of happiness, as conveyed in visual representations. The research methodology combines qualitative and quantitative paradigms using the metaphoric collage, a tool used to analyze visual and verbal aspects. The research…

  10. Semantic elaboration in auditory and visual spatial memory.

    Science.gov (United States)

    Taevs, Meghan; Dahmani, Louisa; Zatorre, Robert J; Bohbot, Véronique D

    2010-01-01

    The aim of this study was to investigate the hypothesis that semantic information facilitates auditory and visual spatial learning and memory. An auditory spatial task was administered, whereby healthy participants were placed in the center of a semi-circle that contained an array of speakers where the locations of nameable and non-nameable sounds were learned. In the visual spatial task, locations of pictures of abstract art intermixed with nameable objects were learned by presenting these items in specific locations on a computer screen. Participants took part in both the auditory and visual spatial tasks, which were counterbalanced for order and were learned at the same rate. Results showed that learning and memory for the spatial locations of nameable sounds and pictures was significantly better than for non-nameable stimuli. Interestingly, there was a cross-modal learning effect such that the auditory task facilitated learning of the visual task and vice versa. In conclusion, our results support the hypotheses that the semantic representation of items, as well as the presentation of items in different modalities, facilitate spatial learning and memory.

  11. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs, and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (sighted), older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment, ranging in age from 18 to 67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  12. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects-A Pilot Study.

    Science.gov (United States)

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning have established cortical excitability as a marker of learning effects. Transcranial direct current stimulation (tDCS) is a non-invasive tool for modulating cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve visual learning, whereas cathodal tDCS would have minor or no effects. Anodal, cathodal, or sham tDCS was applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing the paired-stimulation behavior of visual evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the 4-day stimulation period. Compared with sham tDCS, anodal tDCS led to a statistically significant improvement of visual discrimination learning. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.

  13. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Interactions between visual working memory representations.

    Science.gov (United States)

    Bae, Gi-Yeul; Luck, Steven J

    2017-11-01

    We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.

  15. Conceptual Understanding and Representation Quality through Multi-representation Learning on Newton Law Content

    Directory of Open Access Journals (Sweden)

    Suci Furwati

    2017-08-01

    Full Text Available Abstract: Students who have good conceptual acquisition will be able to represent a concept using multiple representations. This study aims to determine the improvement of junior high school students' understanding of the concept of Newton's laws, and the quality of the representations they use in solving problems on Newton's laws. The results showed that students' concept acquisition increased from an average of 35.32 to 78.97, with an effect size of 2.66 (strong) and an N-gain of 0.68 (medium). The quality of each type of student representation also increased, from levels 1 and 2 up to level 3. Key words: concept acquisition, representation quality, multi-representation learning, Newton's laws

  16. Designing Grounded Feedback: Criteria for Using Linked Representations to Support Learning of Abstract Symbols

    Science.gov (United States)

    Wiese, Eliane S.; Koedinger, Kenneth R.

    2017-01-01

    This paper proposes "grounded feedback" as a way to provide implicit verification when students are working with a novel representation. In grounded feedback, students' responses are in the target, to-be-learned representation, and those responses are reflected in a more-accessible linked representation that is intrinsic to the domain.…

  17. It's Not a Math Lesson--We're Learning to Draw! Teachers' Use of Visual Representations in Instructing Word Problem Solving in Sixth Grade of Elementary School

    Science.gov (United States)

    Boonen, Anton J. H.; Reed, Helen C.; Schoonenboom, Judith; Jolles, Jelle

    2016-01-01

    Non-routine word problem solving is an essential feature of the mathematical development of elementary school students worldwide. Many students experience difficulties in solving these problems due to erroneous problem comprehension. These difficulties could be alleviated by instructing students how to use visual representations that clarify the…

  18. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    Full Text Available We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy of diagnosis and prediction. The ultrasound image classification problem is converted to a sparse-representation-based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and the bag is then represented by one feature vector obtained via the sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of the single classifiers are combined for the final classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.

  19. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy of diagnosis and prediction. The ultrasound image classification problem is converted to a sparse-representation-based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and the bag is then represented by one feature vector obtained via the sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by a relevance vector machine (RVM). Results of the single classifiers are combined for the final classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
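
    The bag-level encoding these two records describe (each instance sparse-coded over a dictionary, the codes pooled into one bag feature) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the greedy orthogonal-matching-pursuit coder, the random dictionary, and mean pooling are assumptions, and the final RVM classifier is omitted.

    ```python
    import numpy as np

    def omp(D, x, k):
        """Greedy orthogonal matching pursuit: k-sparse code of x over dictionary D (atoms in columns)."""
        residual, support = x.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ residual))))  # best-correlated atom
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs                   # re-fit, update residual
        code = np.zeros(D.shape[1])
        code[support] = coeffs
        return code

    def bag_feature(D, instances, k=3):
        """Represent a bag by pooling the sparse codes of its instances into one vector."""
        codes = np.stack([omp(D, inst, k) for inst in instances])
        return codes.mean(axis=0)

    rng = np.random.default_rng(0)
    D = rng.normal(size=(16, 32))       # dictionary: 32 atoms in R^16 (illustrative)
    D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
    bag = rng.normal(size=(5, 16))      # a bag of 5 instance feature vectors
    phi = bag_feature(D, bag)
    print(phi.shape)                    # prints (32,)
    ```

    In the paper's pipeline, the pooled bag vectors would then be classified with a relevance vector machine.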

  20. Learning without knowing: subliminal visual feedback facilitates ballistic motor learning

    DEFF Research Database (Denmark)

    Lundbye-Jensen, Jesper; Leukel, Christian; Nielsen, Jens Bo

    It is a well-described phenomenon that we may respond to features of our surroundings without being aware of them. It is also a well-known principle that learning is reinforced by augmented feedback on motor performance. In the present experiment we hypothesized that motor learning may be facilitated by subconscious (subliminal) augmented visual feedback on motor performance. To test this, 45 subjects participated in the experiment, which involved learning of a ballistic task. The task was to execute simple ankle plantar flexion movements as quickly as possible within 200 ms and to continuously improve performance. … Subliminal augmented feedback, not consciously perceived by the learner, indeed facilitated ballistic motor learning. This effect likely relates to multiple (conscious versus unconscious) processing of visual feedback and to the specific neural circuitries involved in optimization of ballistic motor performance.

  1. Teachers’ learning and assessing of mathematical processes with emphasis on representations, reasoning and proof

    Directory of Open Access Journals (Sweden)

    Satsope Maoto

    2018-03-01

    Full Text Available This article focuses mainly on two key mathematical processes (representation, and reasoning and proof. Firstly, we observed how teachers learn these processes and subsequently identify what and how to assess learners on the same processes. Secondly, we reviewed one teacher’s attempt to facilitate the learning of the processes in his classroom. Two interrelated questions were pursued: ‘what are the teachers’ challenges in learning mathematical processes?’ and ‘in what ways are teachers’ approaches to learning mathematical processes influencing how they assess their learners on the same processes?’ A case study was undertaken involving 10 high school mathematics teachers who enrolled for an assessment module towards a Bachelor in Education Honours degree in mathematics education. We present an interpretive analysis of two sets of data. The first set consisted of the teachers’ written responses to a pattern searching activity. The second set consisted of a mathematical discourse on matchstick patterns in a Grade 9 class. The overall finding was that teachers rush through forms of representation and focus more on manipulation of numerical representations with a view to deriving symbolic representation. Subsequently, this unidirectional approach limits the scope of assessment of mathematical processes. Interventions with regard to the enhancement of these complex processes should involve teachers’ actual engagements in and reflections on similar learning.

  2. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  3. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  4. The impact of category structure and training methodology on learning and generalizing within-category representations.

    Science.gov (United States)

    Ell, Shawn W; Smith, David B; Peralta, Gabriela; Hélie, Sébastien

    2017-08-01

    When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge.

  5. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    Science.gov (United States)

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  6. Learning semantic histopathological representation for basal cell carcinoma classification

    Science.gov (United States)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relations with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning remain very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrences between atoms while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed one Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of the training and test sets, show that our approach is on average more sensitive than the bag-of-features representation by almost 6%.
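
    The descriptor stage described above can be sketched roughly as follows (not the authors' code): each sampled patch is assigned its best-matching dictionary atom, and atom pairs are accumulated into a co-occurrence matrix with a weight that decays with spatial distance. The exponential kernel and its bandwidth are illustrative assumptions.

    ```python
    import numpy as np

    def cooccurrence_descriptor(atom_ids, positions, sigma=20.0):
        """Spatially weighted co-occurrence of dictionary atoms.

        atom_ids:  best-matching atom index for each sampled patch
        positions: (n, 2) array of patch centers; nearby pairs count more
        """
        n_atoms = int(atom_ids.max()) + 1
        C = np.zeros((n_atoms, n_atoms))
        for i in range(len(atom_ids)):
            for j in range(i + 1, len(atom_ids)):
                # weight decays with the distance between the two patches
                w = np.exp(-np.linalg.norm(positions[i] - positions[j]) / sigma)
                C[atom_ids[i], atom_ids[j]] += w
                C[atom_ids[j], atom_ids[i]] += w
        return C / max(C.sum(), 1e-12)  # normalized descriptor for one field of view
    ```

    Each field of view yields one such matrix (flattened into a vector), which would then feed the per-class SVM classifiers.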

  7. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    Science.gov (United States)

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

    The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  8. Getting the picture: The role of external representations in simulation-based inquiry learning.

    NARCIS (Netherlands)

    Kolloffel, Bas Jan

    2008-01-01

    Three studies were performed to examine the effects of formats of ‘pre-fabricated’ and learner-generated representations on learning outcomes of pupils learning combinatorics and probability theory. In Study I, the effects of different formats on learning outcomes were examined. Learners in five

  9. ESTEEM: A Novel Framework for Qualitatively Evaluating and Visualizing Spatiotemporal Embeddings in Social Media

    Energy Technology Data Exchange (ETDEWEB)

    Arendt, Dustin L.; Volkova, Svitlana

    2017-07-30

    Analyzing and visualizing large amounts of social media communications and contrasting short-term conversation changes over time and geo-locations is extremely important for commercial and government applications. Earlier approaches to large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings: low-dimensional word representations in a continuous vector space where similar words are embedded near each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
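
    A minimal illustration of the neighbor queries such a tool relies on (toy vectors and assumed function names, not the ESTEEM codebase): cosine nearest neighbors of a query word in an embedding matrix.

    ```python
    import numpy as np

    def nearest_neighbors(query, vocab, E, k=3):
        """Return the k vocabulary words closest to `query` by cosine similarity."""
        En = E / np.linalg.norm(E, axis=1, keepdims=True)  # unit-normalize each row
        sims = En @ En[vocab.index(query)]                 # cosine similarity to all words
        ranked = [vocab[i] for i in np.argsort(-sims)]
        return [w for w in ranked if w != query][:k]

    vocab = ["storm", "rain", "flood", "election", "vote"]
    E = np.array([[1.0, 0.1], [0.9, 0.2], [0.8, 0.3], [0.1, 1.0], [0.2, 0.9]])
    print(nearest_neighbors("storm", vocab, E, k=2))  # ['rain', 'flood']
    ```

    Tracking how a query word's neighbor list drifts across time slices or regions (one embedding matrix per slice) is the kind of contrast the tool visualizes.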

  10. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic: both vision and echolocation; visual: vision only; echoic: echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  11. Learning with multiple representations: an example of a revision lesson in mechanics

    Science.gov (United States)

    Wong, Darren; Poo, Sng Peng; Eng Hock, Ng; Loo Kang, Wee

    2011-03-01

    We describe an example of learning with multiple representations in an A-level revision lesson on mechanics. The context of the problem involved the motion of a ball thrown vertically upwards in air and studying how the associated physical quantities changed during its flight. Different groups of students were assigned to look at the ball's motion using various representations: motion diagrams, vector diagrams, free-body diagrams, verbal description, equations and graphs, drawn against time as well as against displacement. Overall, feedback from students about the lesson was positive. We further discuss the benefits of using computer simulation to support and extend student learning.

  12. Women And Visual Representations Of Space In Two Chinese Film Adaptations Of Hamlet

    Directory of Open Access Journals (Sweden)

    CHEANG WAI FONG

    2014-12-01

    Full Text Available This paper studies two Chinese film adaptations of Shakespeare’s Hamlet, Xiaogang Feng’s The Banquet (2006) and Sherwood Hu’s Prince of the Himalayas (2006), by focusing on their visual representations of spaces allotted to women. Its thesis is that even though on the original Shakespearean stage details of various spaces might not be as vividly represented as in modern film productions, spaces are still crucial dramatic elements imbued with powerful significations. By analyzing the two Chinese film adaptations alongside the original Hamlet text, the paper attempts to reinterpret their different representations of spaces in relation to their different historical-cultural gender notions.

  13. Building Artificial Vision Systems with Machine Learning

    Energy Technology Data Exchange (ETDEWEB)

    LeCun, Yann [New York University]

    2011-02-23

    Three questions pose the next challenge for Artificial Intelligence (AI), robotics, and neuroscience. How do we learn perception (e.g. vision)? How do we learn representations of the perceptual world? How do we learn visual categories from just a few examples?

  14. Forever young: Visual representations of gender and age in online dating sites for older adults.

    Science.gov (United States)

    Gewirtz-Meydan, Ateret; Ayalon, Liat

    2017-06-13

    Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."

  15. Students’ mathematical representations on secondary school in solving trigonometric problems

    Science.gov (United States)

    Istadi; Kusmayadi, T. A.; Sujadi, I.

    2017-06-01

    This research aimed to analyse secondary school students’ mathematical representations in solving trigonometric problems. The research used a qualitative method. The participants were 4 students with high competence, selected from 20 students of the 12th natural-science grade of SMAN-1 Kota Besi, Central Kalimantan. Data validation was carried out using time triangulation, and data analysis followed Miles and Huberman’s stages. The results showed that the students’ answers were based not only on the given figure but also on the definition of trigonometric ratios in their verbal representations. They were able to determine the positions of the objects to be observed; however, they failed to determine the position of the angle of depression in the sketches they made as visual representations. This failure caused an error in setting up the mathematical equation, so they were unable to use the equation properly in their symbolic representations. From this research, we recommend attention to translations between mathematical problems and mathematical representations, as well as translations among mathematical representations (verbal, visual, and symbolic), in learning mathematics in the classroom.

  16. Where to attend next: guiding refreshing of visual, spatial, and verbal representations in working memory.

    Science.gov (United States)

    Souza, Alessandra S; Vergauwe, Evie; Oberauer, Klaus

    2018-04-23

    One of the functions that attention may serve in working memory (WM) is boosting information accessibility, a mechanism known as attentional refreshing. Refreshing is assumed to be a domain-general process operating on visual, spatial, and verbal representations alike. So far, few studies have directly manipulated refreshing of individual WM representations to measure the WM benefits of refreshing. Recently, a guided-refreshing method was developed, which consists of presenting cues during the retention interval of a WM task to instruct people to refresh (i.e., attend to) the cued items. Using a continuous-color reconstruction task, previous studies demonstrated that the error in reporting a color varies linearly with the frequency with which it was refreshed. Here, we extend this approach to assess the WM benefits of refreshing different representation types, from colors to spatial locations and words. Across six experiments, we show that refreshing frequency modulates performance in all stimulus domains in accordance with the tenet that refreshing is a domain-general process in WM. The benefits of refreshing were, however, larger for visual-spatial than verbal materials. © 2018 New York Academy of Sciences.

  17. Visual variability affects early verb learning.

    Science.gov (United States)

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  18. Enhancing Undergraduate Chemistry Learning by Helping Students Make Connections among Multiple Graphical Representations

    Science.gov (United States)

    Rau, Martina A.

    2015-01-01

    Multiple representations are ubiquitous in chemistry education. To benefit from multiple representations, students have to make connections between them. However, connection making is a difficult task for students. Prior research shows that supporting connection making enhances students' learning in math and science domains. Most prior research…

  19. Improving Mobility Performance in Low Vision With a Distance-Based Representation of the Visual Scene.

    Science.gov (United States)

    van Rheede, Joram J; Wilson, Iain R; Qian, Rose I; Downes, Susan M; Kennard, Christopher; Hicks, Stephen L

    2015-07-01

    Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles. We accomplished this by developing a pair of "residual vision glasses" (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants during the completion of a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to find correlates of obstacle detection and hesitations in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions. All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation. A depth-based representation of the visual environment may offer low vision patients improvements in independent mobility. It is important for further work to explore whether practice can overcome the reductions in speed and increased hesitation that were observed in our trial.
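
    The distance-to-brightness mapping described in this record is straightforward to prototype. The sketch below is only an illustration of the idea, not the RVG implementation: the function name, the linear mapping, and the 4 m cut-off are assumptions for the example.

```python
import numpy as np

def depth_to_brightness(depth_m, max_depth_m=4.0):
    """Map a depth image (metres) to display brightness in [0, 1] so that
    nearer obstacles appear brighter; pixels beyond max_depth_m render black."""
    depth = np.asarray(depth_m, dtype=float)
    return 1.0 - np.clip(depth / max_depth_m, 0.0, 1.0)

# A toy 1x3 "depth image": a near, a mid-range, and an out-of-range pixel.
frame = np.array([[0.5, 2.0, 6.0]])
print(depth_to_brightness(frame))  # prints [[0.875 0.5   0.   ]]
```

    Any monotone decreasing mapping would serve; a linear ramp is simply the easiest to reason about when tuning the cut-off distance.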

  20. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.


  1. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This…

  2. Figure-ground representation and its decay in primary visual cortex.

    Science.gov (United States)

    Strother, Lars; Lavell, Cheryl; Vilis, Tutis

    2012-04-01

    We used fMRI to study figure-ground representation and its decay in primary visual cortex (V1). Human observers viewed a motion-defined figure that gradually became camouflaged by a cluttered background after it stopped moving. V1 showed positive fMRI responses corresponding to the moving figure and negative fMRI responses corresponding to the static background. This positive-negative delineation of V1 "figure" and "background" fMRI responses defined a retinotopically organized figure-ground representation that persisted after the figure stopped moving but eventually decayed. The temporal dynamics of V1 "figure" and "background" fMRI responses differed substantially. Positive "figure" responses continued to increase for several seconds after the figure stopped moving and remained elevated after the figure had disappeared. We propose that the sustained positive V1 "figure" fMRI responses reflected both persistent figure-ground representation and sustained attention to the location of the figure after its disappearance, as did subjects' reports of persistence. The decreasing "background" fMRI responses were relatively shorter-lived and less biased by spatial attention. Our results show that the transition from a vivid figure-ground percept to its disappearance corresponds to the concurrent decay of figure enhancement and background suppression in V1, both of which play a role in form-based perceptual memory.

  3. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  4. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
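
    The core operation both of these records describe, representing a sample as a sparse linear combination of dictionary atoms, can be sketched with plain ISTA (iterative soft thresholding). This is only the basic sparse-coding step, not the authors' full method: it omits the feature selection, multiple kernel learning, and graph regularization that the paper adds.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5*||x - D a||^2 + lam*||a||_1,
    i.e. encode x as a sparse combination of the columns (atoms) of D."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - step * grad, step * lam)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
x = 0.7 * D[:, 3] - 1.2 * D[:, 17]    # a sample built from two atoms
a = sparse_code(x, D)
print(np.count_nonzero(np.abs(a) > 0.05))  # only a few active coefficients
```

    With an orthonormal dictionary the fixed point is simply the soft-thresholded projection of the sample, which makes the routine easy to sanity-check.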

  5. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  6. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    Science.gov (United States)

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to foreground stimuli over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  7. Development of multi-representation learning tools for the course of fundamental physics

    Science.gov (United States)

    Huda, C.; Siswanto, J.; Kurniawan, A. F.; Nuroso, H.

    2016-08-01

    This research is aimed at designing a learning tool based on multi-representation that can improve problem-solving skills. It used the research and development approach and was applied in the Fundamental Physics course at Universitas PGRI Semarang in the 2014/2015 academic year. Gain analysis yields a value of 0.68, indicating a medium improvement. The t-test gives a calculated t of 27.35 against a table t of 2.020 for df = 25 and α = 0.05. Mean scores increase from 23.45 on the pre-test to 76.15 on the post-test. Application of the multi-representation learning tools thus significantly improves students' grades.
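
    The reported gain of 0.68 is consistent with Hake's normalized gain computed from the pre- and post-test means. The 0-100 scale and the use of Hake's formula are assumptions here, since the abstract does not state them:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible
    improvement that was actually achieved."""
    return (post - pre) / (max_score - pre)

g = normalized_gain(23.45, 76.15)
print(round(g, 2))  # prints 0.69, in line with the reported 0.68
```

    The conventional interpretation bands are low gain (g < 0.3), medium gain (0.3 <= g < 0.7), and high gain (g >= 0.7), which matches the abstract's "medium improvement".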

  8. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, which is one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group-structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. In the dictionary learning algorithm, we do not need prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule takes into account the physical meaning of the group-structure dictionary and makes activity-level judgements on the structure information when the images are merged. Therefore, the fused image can retain more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform others in terms of several objective evaluation metrics.
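
    An l1-norm maximum fusion rule of the general kind described here can be illustrated in a few lines: for each patch, keep the sparse coefficient vector whose l1 norm (used as the activity-level measure) is larger. The sketch below uses invented toy coefficients and plain per-patch selection; it omits the group structure that the paper's rule exploits.

```python
import numpy as np

def l1_max_fuse(coeffs_a, coeffs_b):
    """Fuse two sets of sparse codes (columns = patches) by keeping,
    patch by patch, the coefficient vector with the larger l1 norm."""
    act_a = np.abs(coeffs_a).sum(axis=0)   # activity level per patch, source A
    act_b = np.abs(coeffs_b).sum(axis=0)   # activity level per patch, source B
    choose_a = act_a >= act_b              # boolean choice per patch (column)
    return np.where(choose_a, coeffs_a, coeffs_b)

A = np.array([[0.9, 0.0], [0.0, 0.1]])    # patch 0 is active in image A
B = np.array([[0.1, 0.0], [0.0, 0.8]])    # patch 1 is active in image B
print(l1_max_fuse(A, B))                   # prints [[0.9 0. ] [0.  0.8]]
```

    The fused patches would then be reconstructed as D @ fused for the learned dictionary D, so the more "active" source wins each patch of the fused image.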

  9. [Associative Learning between Orientation and Color in Early Visual Areas].

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2017-08-01

    Associative learning is an essential neural phenomenon in which the contingency between different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed an associative decoded functional magnetic resonance imaging (fMRI) neurofeedback (A-DecNef) procedure to determine whether associative learning of color and orientation can be induced in early visual areas. During the three days of training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red), mostly in early visual areas, while a vertical achromatic grating was simultaneously physically presented to participants. Consequently, participants perceived "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was induced, most likely in early visual areas. This newly extended technique for inducing associative learning may serve as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.

  10. An analysis of science content and representations in introductory college physics textbooks and multimodal learning resources

    Science.gov (United States)

    Donnelly, Suzanne M.

    This study features a comparative descriptive analysis of the physics content and representations surrounding the first law of thermodynamics as presented in four widely used introductory college physics textbooks, one from each of four categories (calculus-based, algebra/trigonometry-based, conceptual, and technical/applied). Introducing and employing a newly developed theoretical framework, multimodal generative learning theory (MGLT), the study analyzes the multimodal characteristics of textbook and multimedia representations of physics principles. The modal affordances of textbook representations were identified, characterized, and compared across the four textbook categories in the context of their support of problem-solving. Keywords: college science, science textbooks, multimodal learning theory, thermodynamics, representations

  11. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    Science.gov (United States)

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies…
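
    Fixed RSA as described in this record reduces to comparing two representational dissimilarity matrices. A minimal sketch follows, using correlation-distance RDMs and a Pearson comparison of their upper triangles; the toy "brain" data are simulated here, not drawn from the study.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(patterns)

def fixed_rsa_score(model_patterns, brain_patterns):
    """Fixed RSA: correlate the upper triangles of the two RDMs,
    with no fitting of feature weights."""
    iu = np.triu_indices(model_patterns.shape[0], k=1)
    return np.corrcoef(rdm(model_patterns)[iu], rdm(brain_patterns)[iu])[0, 1]

rng = np.random.default_rng(1)
stimuli = rng.standard_normal((12, 30))           # 12 stimuli x 30 model features
brain = stimuli @ rng.standard_normal((30, 200))  # toy "voxels": a linear mix of the features
print(fixed_rsa_score(stimuli, brain))            # high: the mix preserves the geometry
```

    Mixed RSA differs in that it first fits one weight per model feature on a training set of stimuli before computing the model RDM, which is what allowed the deep network's features to explain the lateral occipital representation.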

  12. On the Relationship between Visual Attributes and Convolutional Networks

    KAUST Repository

    Castillo, Victor

    2015-06-02

    One of the cornerstone principles of deep models is their abstraction capacity, i.e. their ability to learn abstract concepts from ‘simpler’ ones. Through extensive experiments, we characterize the nature of the relationship between abstract concepts (specifically objects in images) learned by popular and high performing convolutional networks (conv-nets) and established mid-level representations used in computer vision (specifically semantic visual attributes). We focus on attributes due to their impact on several applications, such as object description, retrieval and mining, and active (and zero-shot) learning. Among the findings we uncover, we show empirical evidence of the existence of Attribute Centric Nodes (ACNs) within a conv-net, which is trained to recognize objects (not attributes) in images. These special conv-net nodes (1) collectively encode information pertinent to visual attribute representation and discrimination, (2) are unevenly and sparsely distributed across all layers of the conv-net, and (3) play an important role in conv-net based object recognition.

  13. On the Relationship between Visual Attributes and Convolutional Networks

    KAUST Repository

    Castillo, Victor; Ghanem, Bernard; Niebles, Juan Carlos

    2015-01-01

    One of the cornerstone principles of deep models is their abstraction capacity, i.e. their ability to learn abstract concepts from ‘simpler’ ones. Through extensive experiments, we characterize the nature of the relationship between abstract concepts (specifically objects in images) learned by popular and high performing convolutional networks (conv-nets) and established mid-level representations used in computer vision (specifically semantic visual attributes). We focus on attributes due to their impact on several applications, such as object description, retrieval and mining, and active (and zero-shot) learning. Among the findings we uncover, we show empirical evidence of the existence of Attribute Centric Nodes (ACNs) within a conv-net, which is trained to recognize objects (not attributes) in images. These special conv-net nodes (1) collectively encode information pertinent to visual attribute representation and discrimination, (2) are unevenly and sparsely distributed across all layers of the conv-net, and (3) play an important role in conv-net based object recognition.

  14. Enhancing students’ mathematical representation and self-efficacy through situation-based learning assisted by geometer’s sketchpad program

    Science.gov (United States)

    Sowanto; Kusumah, Y. S.

    2018-05-01

    This research was motivated by students' lack of mathematical representation ability and low self-efficacy in accomplishing mathematical tasks. To address this problem, the research used situation-based learning (SBL) assisted by the geometer's sketchpad program (GSP). It investigated the improvement in mathematical representation ability of students taught under SBL assisted by GSP compared with the regular method, viewed across levels of prior knowledge (high, average, and low). In addition, it investigated differences in students' self-efficacy after the instruction. The study was a quasi-experiment with a non-equivalent control group design and purposive sampling. The results showed that the enhancement of students' mathematical representation ability under SBL assisted by GSP was better than under the regular method. There was no interaction between learning method and prior knowledge in students' enhancement of mathematical representation ability, but among students taught under SBL assisted by GSP there was a significant difference in that enhancement across prior-knowledge levels. Furthermore, there was no significant difference in self-efficacy between students taught by SBL assisted by GSP and those taught by the regular method.

  15. Learning Reverse Engineering and Simulation with Design Visualization

    Science.gov (United States)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  16. Could a multimodal dictionary serve as a learning tool? An examination of the impact of technologically enhanced visual glosses on L2 text comprehension

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2016-09-01

    Full Text Available This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it facilitates incidental word retention. This study explores other potentials of multimodal L2 vocabulary learning: explicit learning with a multimodal dictionary could enhance not only word retention, but also text comprehension; the dictionary could serve not only as a reference tool, but also as a learning tool; and technology-enhanced visual glosses could facilitate deeper text comprehension. To verify these claims, this study investigates the multimodal representations’ effects on Japanese students learning L2 locative prepositions by developing two online dictionaries, one with static pictures and one with animations. The findings show the advantage of such dictionaries in explicit learning; however, no significant differences are found between the two types of visual glosses, either in the vocabulary or in the listening tests. This study confirms the effectiveness of multimodal L2 materials, but also emphasizes the need for further research into making the technologically enhanced materials more effective.

  17. Exploring the relation between visualizer-verbalizer cognitive styles and performance with visual or verbal learning material

    NARCIS (Netherlands)

    Kolloffel, Bas Jan

    2012-01-01

    A student might find a certain representational format (e.g., diagram, text) more attractive than other formats for learning. Computer technology offers opportunities to adjust the formats used in learning environments to the preferences of individual learners. The question addressed in the current…

  18. Functional organization and visual representations in human ventral lateral prefrontal cortex

    Directory of Open Access Journals (Sweden)

    Annie Wai Yiu Chan

    2013-07-01

    Full Text Available Recent neuroimaging studies in both human and non-human primates have identified face selective activation in the ventral lateral prefrontal cortex even in the absence of working memory demands. Further, research has suggested that this face-selective response is largely driven by the presence of the eyes. However, the nature and origin of visual category responses in the ventral lateral prefrontal cortex remain unclear. Further, in a broader sense, how do these findings relate to our current understandings of lateral prefrontal cortex? What do these findings tell us about the underlying function and organization principles of the ventral lateral prefrontal cortex? What is the future direction for investigating visual representations in this cortex? This review focuses on the function, topography, and circuitry of the ventral lateral prefrontal cortex to enhance our understanding of the evolution and development of this cortex.

  19. Learning to Recognize Patterns: Changes in the Visual Field with Familiarity

    Science.gov (United States)

    Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo

    1995-01-01

    Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three component model of pattern perception. In the first stage a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informative areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].

  20. Simulating my own or others' action plans?--Motor representations, not visual representations, are recalled in motor memory.

    Directory of Open Access Journals (Sweden)

    Christian Seegelke

    Full Text Available Action plans are not generated from scratch for each movement, but features of recently generated plans are recalled for subsequent movements. This study investigated whether the observation of an action is sufficient to trigger plan recall processes. Participant dyads performed an object manipulation task in which one participant transported a plunger from an outer platform to a center platform of different heights (first move). Subsequently, either the same participant (intra-individual task condition) or the other participant (inter-individual task condition) returned the plunger to the outer platform (return moves). Grasp heights were inversely related to center target height and similar irrespective of direction (first vs. return move) and task condition (intra- vs. inter-individual). Moreover, participants' return move grasp heights were highly correlated with their own, but not with their partners', first move grasp heights. Our findings provide evidence that a simulated action plan resembles a plan of how the observer would execute that action (based on a motor representation) rather than a plan of the actually observed action (based on a visual representation).

  1. Claroscura Representation: An Audio-visual and Theoretical Exploration of the Representation of the Past Through Documentary Filmmaking

    Directory of Open Access Journals (Sweden)

    Gerrit Stollbrock Trujillo

    2017-09-01

    Full Text Available At the nexus between audio-visual production and theoretical research, this article is based on the experience of producing a documentary on the history of a cement plant in Colombia: La Siberia. The tensions between the narratives constructed in the documentary and the immensity of the discarded archives from the plant drive a theoretical quest to respond to iconoclastic and post-structuralist critiques of history. This led to the formulation of the concept of claroscura representation, defined as representation that is transparent about its own limitations. I put this concept to the test through the medium of documentary film, speaking specifically about the making of La Siberia, and suggest its relevance to other projects that attempt to represent the past or history through film. I suggest that this theory drives us towards the formulation of a new artistic project. The research process, and the dialogue between theory and practice, is interpreted using the model of abduction proposed by Charles Sanders Peirce.

  2. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  3. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
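
    The notion of sample-wise kernel weights can be sketched as follows. This is a simplified reading of the per-sample idea, not the paper's exact model (which learns the weights jointly with the classifiers): each training sample carries its own mixing weights over the base kernels, so each column of the combined kernel matrix uses a different combination.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear_kernel(X, Y):
    return X @ Y.T

def per_sample_kernel(X, Y, weights):
    """Combine base kernels with sample-wise weights: column j of the result
    uses the weight vector attached to training sample j."""
    kernels = [rbf_kernel(X, Y), linear_kernel(X, Y)]
    # weights has shape (n_train, n_kernels); each row sums to 1
    return sum(w[None, :] * K for K, w in zip(kernels, weights.T))

rng = np.random.default_rng(2)
Xtr = rng.standard_normal((5, 3))
w = rng.dirichlet(np.ones(2), size=5)  # one weight vector per training sample
K = per_sample_kernel(Xtr, Xtr, w)
print(K.shape)  # prints (5, 5)
```

    A canonical MKL would instead use a single weight vector shared by all samples, which is exactly the uniform combination the abstract argues against for visually diverse concepts.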

  4. Scaffolding vector representations for student learning inside a physics game

    Science.gov (United States)

    D'Angelo, Cynthia

    Vectors and vector addition are difficult concepts for many introductory physics students, and traditional instruction does not usually sufficiently address these difficulties. Vectors play a major role in most topics in introductory physics, and without a complete understanding of them many students are unable to make sense of the physics topics covered in their classes. Video games present a unique opportunity to help students develop an intuitive understanding of motion, forces, and vectors while immersed in an enjoyable and interactive environment. This study examines two dimensions of design decisions to help students learn while playing a physics-based game. The representational complexity dimension looked at two ways of presenting dynamic information about the velocity of the game object on the screen. The scaffolding context dimension looked at two different contexts for presenting vector addition problems that were related to the game. While all students made significant learning gains from the pretest to the posttest, there were virtually no differences between students along the representational complexity dimension and small differences between students along the scaffolding context dimension. A context that directly connects to students' game-playing experience was in most cases more productive for learning than an abstract context.

  5. Teaching and Learning Logic Programming in Virtual Worlds Using Interactive Microworld Representations

    Science.gov (United States)

    Vosinakis, Spyros; Anastassakis, George; Koutsabasis, Panayiotis

    2018-01-01

    Logic Programming (LP) follows the declarative programming paradigm, which novice students often find hard to grasp. The limited availability of visual teaching aids for LP can lead to low motivation for learning. In this paper, we present a platform for teaching and learning Prolog in Virtual Worlds, which enables the visual interpretation and…

  6. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry who worked with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations.
In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  7. Crystal structure representations for machine learning models of formation energies

    Energy Technology Data Exchange (ETDEWEB)

    Faber, Felix [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Lindmaa, Alexander [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden; von Lilienfeld, O. Anatole [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Avenue Lemont Illinois 60439; Armiento, Rickard [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden

    2015-04-20

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.
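The learning setup described in this record (a Laplacian kernel with the Manhattan norm, trained to reproduce formation energies) corresponds to kernel ridge regression. Below is a minimal sketch; the feature-vector constructions themselves (Ewald, extended Coulomb, or sine matrix) are omitted, and the function names are illustrative, not the authors' code.

```python
import numpy as np

def laplacian_kernel(X, Y, sigma):
    """Laplacian kernel with the Manhattan (L1) norm, the kernel
    named in the record above. Entries lie in (0, 1]."""
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.exp(-d1 / sigma)

def krr_predict(X_train, y_train, X_test, sigma, lam):
    """Kernel ridge regression: fit coefficients on the training
    Gram matrix, then predict energies for new structures."""
    K = laplacian_kernel(X_train, X_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return laplacian_kernel(X_test, X_train, sigma) @ alpha
```

Here each row of `X` would hold one crystal's flattened representation (e.g. sorted eigenvalues of the sine matrix), and `y` the formation energies per atom.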

  8. An Eye-tracking Study of Notational, Informational, and Emotional Aspects of Learning Analytics Representations

    DEFF Research Database (Denmark)

    Vatrapu, Ravi; Reimann, Peter; Bull, Susan

    2013-01-01

    This paper presents an eye-tracking study of notational, informational, and emotional aspects of nine different notational systems (Skill Meters, Smilies, Traffic Lights, Topic Boxes, Collective Histograms, Word Clouds, Textual Descriptors, Table, and Matrix) and three different information states...... (Weak, Average, & Strong) used to represent student's learning. Findings from the eye-tracking study show that higher emotional activation was observed for the metaphorical notations of traffic lights and smilies and collective representations. Mean view time was higher for representations...... of the "average" informational learning state. Qualitative data analysis of the think-aloud comments and post-study interview show that student participants reflected on the meaning-making opportunities and action-taking possibilities afforded by the representations. Implications for the design and evaluation...

  9. Spatial specificity of working memory representations in the early visual cortex.

    Science.gov (United States)

    Pratte, Michael S; Tong, Frank

    2014-03-19

    Recent fMRI decoding studies have demonstrated that early retinotopic visual areas exhibit similar patterns of activity during the perception of a stimulus and during the maintenance of that stimulus in working memory. These findings provide support for the sensory recruitment hypothesis that the mechanisms underlying perception serve as a foundation for visual working memory. However, a recent study by Ester, Serences, and Awh (2009) found that the orientation of a peripheral grating maintained in working memory could be classified from both the contralateral and ipsilateral regions of the primary visual cortex (V1), implying that, unlike perception, feature-specific information was maintained in a nonretinotopic manner. Here, we evaluated the hypothesis that early visual areas can maintain information in a spatially specific manner and will do so if the task encourages the binding of feature information to a specific location. To encourage reliance on spatially specific memory, our experiment required observers to retain the orientations of two laterally presented gratings. Multivariate pattern analysis revealed that the orientation of each remembered grating was classified more accurately based on activity patterns in the contralateral than in the ipsilateral regions of V1 and V2. In contrast, higher extrastriate areas exhibited similar levels of performance across the two hemispheres. A time-resolved analysis further indicated that the retinotopic specificity of the working memory representation in V1 and V2 was maintained throughout the retention interval. Our results suggest that early visual areas provide a cortical basis for actively maintaining information about the features and locations of stimuli in visual working memory.

  10. Chemosensory Learning in the Cortex

    Directory of Open Access Journals (Sweden)

    Edmund Rolls

    2011-09-01

    Full Text Available Taste is a primary reinforcer. Olfactory-taste and visual-taste association learning takes place in the primate, including human, orbitofrontal cortex to build representations of flavour. Rapid reversal of this learning can occur using a rule-based learning system that can be reset when an expected taste or flavour reward is not obtained, that is, by negative reward prediction error, to which a population of neurons in the orbitofrontal cortex responds. The representation in the orbitofrontal cortex, but not the primary taste or olfactory cortex, is of the reward value of the visual/olfactory/taste input, as shown by devaluation experiments in which food is fed to satiety, and by correlations of the activations with subjective pleasantness ratings in humans. Sensory-specific satiety for taste, olfactory, visual, and oral somatosensory inputs produced by feeding a particular food to satiety is implemented, it is proposed, by medium-term synaptic adaptation in the orbitofrontal cortex. Cognitive factors, including word-level descriptions, modulate the representation of the reward value of food in the orbitofrontal cortex, and this effect is learned, it is proposed, by associative modification of top-down synapses onto neurons activated by bottom-up taste and olfactory inputs when both are active in the orbitofrontal cortex. A similar associative synaptic learning process is proposed to be part of the mechanism for the top-down attentional control of the reward value vs. the sensory properties, such as intensity, of taste and olfactory inputs in the orbitofrontal cortex, as part of a biased activation theory of selective attention.

  11. Refinement of learned skilled movement representation in motor cortex deep output layer

    Science.gov (United States)

    Li, Qian; Ko, Ho; Qian, Zhong-Ming; Yan, Leo Y. C.; Chan, Danny C. W.; Arbuthnott, Gordon; Ke, Ya; Yung, Wing-Ho

    2017-01-01

    The mechanisms underlying the emergence of learned motor skill representation in primary motor cortex (M1) are not well understood. Specifically, how motor representation in the deep output layer 5b (L5b) is shaped by motor learning remains virtually unknown. In rats undergoing motor skill training, we detect a subpopulation of task-recruited L5b neurons that not only become more movement-encoding, but their activities are also more structured and temporally aligned to motor execution with a timescale of refinement in tens-of-milliseconds. Field potentials evoked at L5b in vivo exhibit persistent long-term potentiation (LTP) that parallels motor performance. Intracortical dopamine denervation impairs motor learning, and disrupts the LTP profile as well as the emergent neurodynamical properties of task-recruited L5b neurons. Thus, dopamine-dependent recruitment of L5b neuronal ensembles via synaptic reorganization may allow the motor cortex to generate more temporally structured, movement-encoding output signal from M1 to downstream circuitry that drives increased uniformity and precision of movement during motor learning. PMID:28598433

  12. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways, and especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two colors, red and purple (the latter known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  13. Developing Explanations and Developing Understanding: Students Explain the Phases of the Moon Using Visual Representations

    Science.gov (United States)

    Parnafes, Orit

    2012-01-01

    This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…

  14. Perceptual learning increases the strength of the earliest signals in visual cortex.

    Science.gov (United States)

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  15. Learning semantic and visual similarity for endomicroscopy video retrieval.

    Science.gov (United States)

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method allows us to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods.
The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.
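The visual-to-semantic transformation described above can be illustrated schematically. The log-ratio weighting below is a simplified stand-in for the paper's Fisher-based method, used only to show how a visual-word histogram becomes a per-concept score vector; all names and the toy data are illustrative.

```python
import numpy as np

def word_concept_weights(hists, labels, eps=1e-6):
    """Score each visual word for one binary concept by the log ratio
    of its mean frequency in concept-positive vs. concept-negative
    videos (a simplified stand-in for the Fisher-based method)."""
    pos = hists[labels == 1].mean(axis=0) + eps
    neg = hists[labels == 0].mean(axis=0) + eps
    return np.log(pos / neg)

def semantic_signature(hist, weight_matrix):
    """Project one visual-word histogram onto the concept axes,
    yielding one semantic score per concept (eight in the paper)."""
    return weight_matrix @ hist
```

Stacking one weight row per concept gives a matrix that maps any video's visual signature to a short semantic signature, which is what makes the output interpretable to physicians.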

  16. Spontaneously emerging cortical representations of visual attributes

    Science.gov (United States)

    Kenet, Tal; Bibitchkov, Dmitri; Tsodyks, Misha; Grinvald, Amiram; Arieli, Amos

    2003-10-01

    Spontaneous cortical activity (ongoing activity in the absence of intentional sensory input) has been studied extensively, using methods ranging from EEG (electroencephalography), through voltage-sensitive dye imaging, down to recordings from single neurons. Ongoing cortical activity has been shown to play a critical role in development, and must also be essential for processing sensory perception, because it modulates stimulus-evoked activity and is correlated with behaviour. Yet its role in the processing of external information and its relationship to internal representations of sensory attributes remain unknown. Using voltage-sensitive dye imaging, we previously established a close link between ongoing activity in the visual cortex of anaesthetized cats and the spontaneous firing of a single neuron. Here we report that such activity encompasses a set of dynamically switching cortical states, many of which correspond closely to orientation maps. When such an orientation state emerged spontaneously, it spanned several hypercolumns and was often followed by a state corresponding to a proximal orientation. We suggest that dynamically switching cortical states could represent the brain's internal context, and therefore reflect or influence memory, perception and behaviour.

  17. Visual and verbal learning deficits in Veterans with alcohol and substance use disorders.

    Science.gov (United States)

    Bell, Morris D; Vissicchio, Nicholas A; Weinstein, Andrea J

    2016-02-01

    This study examined visual and verbal learning in the early phase of recovery for 48 Veterans with alcohol use (AUD) and substance use disorders (SUD, primarily cocaine and opiate abusers). Previous studies have demonstrated visual and verbal learning deficits in AUD, however little is known about the differences between AUD and SUD on these domains. Since the DSM-5 specifically identifies problems with learning in AUD and not in SUD, and problems with visual and verbal learning have been more prevalent in the literature for AUD than SUD, we predicted that people with AUD would be more impaired on measures of visual and verbal learning than people with SUD. Participants were enrolled in a comprehensive rehabilitation program and were assessed within the first 5 weeks of abstinence. Verbal learning was measured using the Hopkins Verbal Learning Test (HVLT) and visual learning was assessed using the Brief Visuospatial Memory Test (BVMT). Results indicated significantly greater decline in verbal learning on the HVLT across the three learning trials for AUD participants but not for SUD participants (F=4.653, df=48, p=0.036). Visual learning was less impaired than verbal learning across learning trials for both diagnostic groups (F=0.197, df=48, p=0.674); there was no significant difference between groups on visual learning (F=0.401, df=14, p=0.538). Older Veterans in the early phase of recovery from AUD may have difficulty learning new verbal information. Deficits in verbal learning may reduce the effectiveness of verbally-based interventions such as psycho-education. Published by Elsevier Ireland Ltd.

  18. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The Effect of Visual-Chunking-Representation Accommodation on Geometry Testing for Students with Math Disabilities

    Science.gov (United States)

    Zhang, Dake; Ding, Yi; Stegall, Joanna; Mo, Lei

    2012-01-01

    Students who struggle with learning mathematics often have difficulties with geometry problem solving, which requires strong visual imagery skills. These difficulties have been correlated with deficiencies in visual working memory. Cognitive psychology has shown that chunking of visual items accommodates students' working memory deficits. This…

  20. Pupils' Visual Representations in Standard and Problematic Problem Solving in Mathematics: Their Role in the Breach of the Didactical Contract

    Science.gov (United States)

    Deliyianni, Eleni; Monoyiou, Annita; Elia, Iliada; Georgiou, Chryso; Zannettou, Eleni

    2009-01-01

    This study investigated the modes of representations generated by kindergarteners and first graders while solving standard and problematic problems in mathematics. Furthermore, it examined the influence of pupils' visual representations on the breach of the didactical contract rules in problem solving. The sample of the study consisted of 38…

  1. Relative contributions of visual and auditory spatial representations to tactile localization.

    Science.gov (United States)

    Noel, Jean-Paul; Wallace, Mark

    2016-02-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed and crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities, with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. Copyright © 2016

  2. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.
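The "particles as sparse linear combinations of dictionary templates" idea from this record can be sketched as follows. A ridge-regularized coder stands in for LRST's joint low-rank + sparse solver, and splitting the dictionary into object and background templates follows the abstract's description; all function names and the toy data are illustrative.

```python
import numpy as np

def represent_particles(particles, dictionary, lam=0.1):
    """Code each particle (a column) as a linear combination of
    dictionary templates. Ridge regularization stands in for the
    joint low-rank + sparse problem solved by the actual LRST."""
    D = dictionary
    G = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(G, D.T @ particles)

def tracking_scores(particles, dictionary, n_object_templates, lam=0.1):
    """Score each particle by how well the object templates alone
    (the first n_object_templates columns) reconstruct it; the
    remaining columns model background, as in the record above."""
    C = represent_particles(particles, dictionary, lam)
    recon = dictionary[:, :n_object_templates] @ C[:n_object_templates]
    err = ((particles - recon) ** 2).sum(axis=0)
    return -err  # higher score = more target-like
```

A particle resembling an object template reconstructs well from the object part of the dictionary and scores high, while a background-like particle does not; the particle filter would then resample around the high-scoring candidates.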

  3. Handwriting generates variable visual input to facilitate symbol learning

    Science.gov (United States)

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across six different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output, supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  4. Neural Representations of Physics Concepts.

    Science.gov (United States)

    Mason, Robert A; Just, Marcel Adam

    2016-06-01

    We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants' neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems. © The Author(s) 2016.
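The "mean rank accuracy" quoted above (.75, chance = .5) is the standard normalized-rank score for this kind of multivoxel classifier. The sketch below is a generic implementation of that metric, not the authors' code; names are illustrative.

```python
import numpy as np

def mean_rank_accuracy(score_matrix, true_labels):
    """score_matrix[i, k] is the classifier's score for candidate
    concept k on trial i. Normalized rank accuracy is 1.0 when the
    true concept ranks first, 0.0 when it ranks last; chance is 0.5."""
    n_trials, n_labels = score_matrix.shape
    accs = []
    for i, true in enumerate(true_labels):
        order = np.argsort(-score_matrix[i])       # best candidate first
        rank = int(np.where(order == true)[0][0])  # 0 = top rank
        accs.append(1.0 - rank / (n_labels - 1))
    return float(np.mean(accs))
```

Because the score averages over the true label's relative rank rather than exact top-1 hits, it rewards a classifier that places the correct concept near the top even when it is not first.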

  5. Perceptual learning rules based on reinforcers and attention

    NARCIS (Netherlands)

    Roelfsema, Pieter R.; van Ooyen, Arjen; Watanabe, Takeo

    2010-01-01

    How does the brain learn those visual features that are relevant for behavior? In this article, we focus on two factors that guide plasticity of visual representations. First, reinforcers cause the global release of diffusive neuromodulatory signals that gate plasticity. Second, attentional feedback

  6. An object-based visual attention model for robotic applications.

    Science.gov (United States)

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention and is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
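    The combination step — pooling location-based saliency within each proto-object and selecting the most salient one — might be sketched as follows. The mean-pooling choice and the data layout are assumptions for illustration, not the paper's implementation.

```python
def proto_object_saliency(saliency_map, proto_objects):
    """Combine location-based saliency into proto-object saliency.

    saliency_map: 2-D grid of per-location saliency values.
    proto_objects: {name: [(row, col), ...]} pixel memberships from
                   the (assumed) preattentive segmentation step.
    Returns the most salient proto-object and all pooled scores.
    """
    def mean_saliency(pixels):
        return sum(saliency_map[r][c] for r, c in pixels) / len(pixels)

    scores = {name: mean_saliency(px) for name, px in proto_objects.items()}
    winner = max(scores, key=scores.get)
    return winner, scores
```

    The selected proto-object would then be handed to the perceptual completion module.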

  7. Multi-representation based on scientific investigation for enhancing students’ representation skills

    Science.gov (United States)

    Siswanto, J.; Susantini, E.; Jatmiko, B.

    2018-03-01

    This research aims to implement physics learning with multi-representation based on scientific investigation to enhance students' representation skills, especially on the magnetic field subject. The research design is a one-group pretest-posttest design. The research was conducted in the department of mathematics education, Universitas PGRI Semarang, with a sample of students from class 2F who take basic physics courses. The data were obtained through a representation skills test and documentation of multi-representation worksheets. The results show a normalized gain of .64, which indicates a medium improvement. A t-test (α = .05) yielded p = .001, showing that the learning significantly improves students' representation skills.
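    The reported gain of .64 is consistent with Hake's normalized gain, ⟨g⟩ = (post − pre) / (max − pre), the usual metric in physics education research; a minimal sketch, assuming percentage scores (the actual pre/post values are not given in the abstract).

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: the fraction of the possible
    improvement that was actually achieved. Values of roughly
    0.3-0.7 are conventionally read as 'medium' gains."""
    return (post - pre) / (max_score - pre)
```

    For example, hypothetical means of 40 on the pretest and 78.4 on the posttest would give (78.4 − 40) / (100 − 40) = .64.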

  8. Visual areas become less engaged in associative recall following memory stabilization.

    Science.gov (United States)

    Nieuwenhuis, Ingrid L C; Takashima, Atsuko; Oostenveld, Robert; Fernández, Guillén; Jensen, Ole

    2008-04-15

    Numerous studies have focused on changes in the activity in the hippocampus and higher association areas with consolidation and memory stabilization. Even though perceptual areas are engaged in memory recall, little is known about how memory stabilization is reflected in those areas. Using magnetoencephalography (MEG) we investigated changes in visual areas with memory stabilization. Subjects were trained on associating a face to one of eight locations. The first set of associations ('stabilized') was learned in three sessions distributed over a week. The second set ('labile') was learned in one session just prior to the MEG measurement. In the recall session only the face was presented and subjects had to indicate the correct location using a joystick. The MEG data revealed robust gamma activity during recall, which started in early visual cortex and propagated to higher visual and parietal brain areas. The occipital gamma power was higher for the labile than the stabilized condition (time=0.65-0.9 s). Also the event-related field strength was higher during recall of labile than stabilized associations (time=0.59-1.5 s). We propose that recall of the spatial associations prior to memory stabilization involves a top-down process relying on reconstructing learned representations in visual areas. This process is reflected in gamma band activity consistent with the notion that neuronal synchronization in the gamma band is required for visual representations. More direct synaptic connections are formed with memory stabilization, thus decreasing the dependence on visual areas.

  9. Time course influences transfer of visual perceptual learning across spatial location.

    Science.gov (United States)

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Reading visual representations of 'Ndabeni' in the public realms

    Directory of Open Access Journals (Sweden)

    Sipokazi Sambumbu

    2010-11-01

    Full Text Available This essay outlines and analyses contemporary image representations of Ndabeni (also called kwa-Ndabeni), a location near Cape Town where a group of people were confined between 1901 and 1936 following an outbreak of the bubonic plague in the city. This location was to shape Cape Town's landscape for a little less than thirty-five years, accommodating people who were forcibly removed from the Cape Town docklands and from District Six. Images representing this place have been produced, archived, recovered, modified, reproduced and circulated in different ways and contexts. Ndabeni has become public knowledge through public visual representations that have been produced across a range of sites in post-apartheid Cape Town. I focus on three sites: the Victoria and Alfred Waterfront, the District Six Museum, and the Eziko Restaurant and Catering School. In each case I analyse the processes through which the Ndabeni images in question have been used and reused over time in changing contexts. I analyse the 'modalities' in which these images have been composed, interpreted and employed and in which knowledge has been mediated. I explore the contents and contexts of the storyboards and exhibition panels that purport to represent Ndabeni. Finally, I discuss potential meanings that could be constructed if the images could be read independent of the texts.

  11. Selective transfer of visual working memory training on Chinese character learning.

    Science.gov (United States)

    Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel

    2014-01-01

    Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. 
We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and

  12. Getting the picture: A mixed-methods inquiry into how visual representations are interpreted by students, incorporated within textbooks, and integrated into middle-school science classrooms

    Science.gov (United States)

    Lee, Victor Raymond

    Modern-day middle school science textbooks are heavily populated with colorful images, technical diagrams, and other forms of visual representations. These representations are commonly perceived by educators to be useful aids to support student learning of unfamiliar scientific ideas. However, as the number of representations in science textbooks has seemingly increased in recent decades, concerns have been voiced that many of these representations are actually undermining instructional goals; they may be introducing substantial conceptual and interpretive difficulties for students. To date, very little empirical work has been done to examine how the representations used in instructional materials have changed, and what influences these changes exert on student understanding. Furthermore, there has also been limited attention given to the extent to which current representational-use routines in science classrooms may mitigate or limit interpretive difficulties. This dissertation seeks to do three things: First, it examines the nature of the relationship between published representations and students' reasoning about the natural world. Second, it considers the ways in which representations are used in textbooks and how that has changed over a span of five decades. Third, this dissertation provides an in-depth look into how middle school science classrooms naturally use these visual representations and what kinds of support are being provided. With respect to the three goals of this dissertation, three pools of data were collected and analyzed for this study. First, interview data was collected in which 32 middle school students interpreted and reasoned with a set of more and less problematic published textbook representations. Quantitative analyses of the interview data suggest that, counter to what has been anticipated in the literature, there were no significant differences in the conceptualizations of students in the different groups. An accompanying

  13. Visual Pretraining for Deep Q-Learning

    OpenAIRE

    Sandven, Torstein

    2016-01-01

    Recent advances in reinforcement learning enable computers to learn human-level policies for Atari 2600 games. This is done by training a convolutional neural network to play based on screenshots and in-game rewards. The network is referred to as a deep Q-network (DQN). The main disadvantage of this approach is a long training time. A computer will typically learn for approximately one week. In this time it processes 38 days of game play. This thesis explores the possibility of using visual pr...

  14. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    Science.gov (United States)

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  15. Deformation-specific and deformation-invariant visual object recognition: pose vs identity recognition of people and deforming objects

    Directory of Open Access Journals (Sweden)

    Tristan J Webb

    2014-04-01

    Full Text Available When we see a human sitting down, standing up, or walking, we can recognise one of these poses independently of the individual, or we can recognise the individual person, independently of the pose. The same issues arise for deforming objects. For example, if we see a flag deformed by the wind, either blowing out or hanging languidly, we can usually recognise the flag, independently of its deformation; or we can recognise the deformation independently of the identity of the flag. We hypothesize that these types of recognition can be implemented by the primate visual system using the temporo-spatial continuity of objects as they transform as a learning principle. In particular, we hypothesize that pose or deformation can be learned under conditions in which large numbers of different people are successively seen in the same pose, or objects in the same deformation. We also hypothesize that person-specific representations that are independent of pose, and object-specific representations that are independent of deformation and view, could be built when individual people or objects are observed successively transforming from one pose or deformation and view to another. These hypotheses were tested in a simulation of the ventral visual system, VisNet, which uses temporal continuity, implemented in a synaptic learning rule with a short-term memory trace of previous neuronal activity, to learn invariant representations. It was found that, depending on the statistics of the visual input, either pose-specific or deformation-specific representations could be built that were invariant with respect to individual and view; or identity-specific representations could be built that were invariant with respect to pose or deformation and view. We propose that this is how pose-specific and pose-invariant, and deformation-specific and deformation-invariant, perceptual representations are built in the brain.
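    The trace learning rule described here can be sketched roughly as follows; the linear response, the single-neuron framing, and the parameter values are simplifying assumptions for illustration, not VisNet's actual implementation. The key idea is that the postsynaptic trace mixes the current response with a decaying memory of recent responses, so weight updates bind temporally adjacent views of the same object onto the same output.

```python
def trace_rule_update(w, input_sequence, alpha=0.1, eta=0.5):
    """One neuron learning with a trace rule (illustrative form):
    ybar_t = (1 - eta) * y_t + eta * ybar_{t-1}
    dw     = alpha * ybar_t * x_t
    so successive inputs are associated via the trace ybar."""
    ybar = 0.0
    for x in input_sequence:                       # x: presynaptic rates
        y = sum(wi * xi for wi, xi in zip(w, x))   # linear response
        ybar = (1 - eta) * y + eta * ybar          # short-term memory trace
        w = [wi + alpha * ybar * xi for wi, xi in zip(w, x)]
    return w
```

    With eta = 0 this reduces to plain Hebbian learning; a nonzero eta is what lets temporally continuous transformations of one object strengthen the same weights.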

  16. An Interactive Approach to Learning and Teaching in Visual Arts Education

    Science.gov (United States)

    Tomljenovic, Zlata

    2015-01-01

    The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…

  17. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  18. EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning.

    Science.gov (United States)

    Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin

    2018-05-01

    Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks-test recommendation, initial diagnosis, and treatment plan recommendation-given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of the bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods. The distributed representation of medical entities does reflect similarity relationships with respect to knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference. Different tasks, however, require individually designed energy functions. Copyright © 2018 Elsevier B.V. All rights reserved.
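    One plausible shape for a graph-based energy function over such a bigraph (an illustrative guess, not the authors' definition: the function name, the co-occurrence table, and the log weighting are all assumptions) assigns lower energy — i.e., higher probability under the MRF — to candidate sets whose entities co-occur frequently with the given findings:

```python
import math

def graph_energy(candidates, given, cooccur):
    """Energy of a candidate assignment in a bipartite MRF.

    candidates: inferred entities (e.g., diseases).
    given:      observed entities (e.g., symptoms).
    cooccur:    {(given_entity, candidate): co-occurrence count}
                built from EMR records (hypothetical structure).
    Lower energy = better-supported inference.
    """
    energy = 0.0
    for g in given:
        for c in candidates:
            # frequent co-occurrence lowers the energy
            energy -= math.log1p(cooccur.get((g, c), 0))
    return energy
```

    Inference would then rank candidate sets by ascending energy; the likelihood-based variants described in the abstract would replace the raw counts with scores from learned distributed representations.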

  19. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  20. Handwriting generates variable visual output to facilitate symbol learning.

    Science.gov (United States)

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. Independent Attention Mechanisms Control the Activation of Tactile and Visual Working Memory Representations.

    Science.gov (United States)

    Katus, Tobias; Eimer, Martin

    2018-05-01

    Working memory (WM) is limited in capacity, but it is controversial whether these capacity limitations are domain-general or are generated independently within separate modality-specific memory systems. These alternative accounts were tested in bimodal visual/tactile WM tasks. In Experiment 1, participants memorized the locations of simultaneously presented task-relevant visual and tactile stimuli. Visual and tactile WM load was manipulated independently (one, two, or three items per modality), and one modality was unpredictably tested after each trial. To track the activation of visual and tactile WM representations during the retention interval, the visual contralateral delay activity (CDA) and tactile CDA (tCDA) were measured over visual and somatosensory cortex, respectively. CDA and tCDA amplitudes were selectively affected by WM load in the corresponding (tactile or visual) modality. The CDA parametrically increased when visual load increased from one to two and to three items. The tCDA was enhanced when tactile load increased from one to two items and showed no further enhancement for three tactile items. Critically, these load effects were strictly modality-specific, as substantiated by Bayesian statistics. Increasing tactile load did not affect the visual CDA, and increasing visual load did not modulate the tCDA. Task performance at memory test was also unaffected by WM load in the other (untested) modality. This was confirmed in a second behavioral experiment where tactile and visual loads were either two or four items, unimodal baseline conditions were included, and participants performed a color change detection task in the visual modality. These results show that WM capacity is not limited by a domain-general mechanism that operates across sensory modalities. They suggest instead that WM storage is mediated by distributed modality-specific control mechanisms that are activated independently and in parallel during multisensory WM.

  2. Learning sorting algorithms through visualization construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and
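    A minimal example of the kind of visualization construction described — recording the array state after each step of a sort so the process can be drawn or animated — might look like the following sketch (insertion sort is chosen arbitrarily; the study covered sorting algorithms generally):

```python
def visualize_insertion_sort(data):
    """Return the sequence of array states ('frames') produced by
    insertion sort, one frame per inserted element. Students
    constructing a visualization would draw each frame in turn."""
    frames = [list(data)]          # frame 0: the unsorted input
    a = list(data)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]        # shift larger elements right
            j -= 1
        a[j + 1] = key             # insert into sorted prefix
        frames.append(list(a))     # snapshot after this insertion
    return frames
```

    For input [3, 1, 2] this yields the frames [3, 1, 2] → [1, 3, 2] → [1, 2, 3], which is exactly the trace a student would produce when building the visualization by hand.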

  3. The Interplay Among Children's Negative Family Representations, Visual Processing of Negative Emotions, and Externalizing Symptoms.

    Science.gov (United States)

    Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika

    2018-03-01

    This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, negative family representations were associated with attention to negative faces in an eye-tracking task and their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, was associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  4. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory capability developed through prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  5. Dynamic functional brain networks involved in simple visual discrimination learning.

    Science.gov (United States)

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated to the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  7. Embedded data representations

    DEFF Research Database (Denmark)

    Willett, Wesley; Jansen, Yvonne; Dragicevic, Pierre

    2017-01-01

    We introduce embedded data representations, the use of visual and physical representations of data that are deeply integrated with the physical spaces, objects, and entities to which the data refers. Technologies like lightweight wireless displays, mixed reality hardware, and autonomous vehicles...

  8. Organizing learning processes on risks by using the bow-tie representation

    Energy Technology Data Exchange (ETDEWEB)

    Chevreau, F.R. [Ecole des Mines de Paris, 06904 Sophia-Antipolis (France)]. E-mail: chevreau@cindy.ensmp.fr; Wybo, J.L. [Ecole des Mines de Paris, 06904 Sophia-Antipolis (France)]. E-mail: wybo@cindy.ensmp.fr; Cauchois, D. [Process Safety Department, Sanofi-Aventis, Site de Production de Vitry sur Seine, 9 Quai Jules Guesdes, 94400 Vitry sur Seine (France)]. E-mail: didier.cauchois@sanofi-aventis.com

    2006-03-31

    The Aramis method proposes a complete and efficient way to manage risk analysis by using the bow-tie representation. This paper shows how the bow-tie representation can also be appropriate for experience learning. It describes how a pharmaceutical production plant uses bow-ties for incident and accident analysis. Two levels of bow-ties are constructed: standard bow-ties concern generic risks of the plant whereas local bow-ties represent accident scenarios specific to each workplace. When incidents or accidents are analyzed, knowledge that is gained is added to existing local bow-ties. Regularly, local bow-ties that have been updated are compared to standard bow-ties in order to revise them. Knowledge on safety at the global and at local levels is hence as accurate as possible and memorized in a real time framework. As it relies on the communication between safety experts and local operators, this use of the bow-ties contributes therefore to organizational learning for safety.

  9. Organizing learning processes on risks by using the bow-tie representation

    International Nuclear Information System (INIS)

    Chevreau, F.R.; Wybo, J.L.; Cauchois, D.

    2006-01-01

    The Aramis method proposes a complete and efficient way to manage risk analysis by using the bow-tie representation. This paper shows how the bow-tie representation can also be appropriate for experience learning. It describes how a pharmaceutical production plant uses bow-ties for incident and accident analysis. Two levels of bow-ties are constructed: standard bow-ties concern generic risks of the plant whereas local bow-ties represent accident scenarios specific to each workplace. When incidents or accidents are analyzed, knowledge that is gained is added to existing local bow-ties. Regularly, local bow-ties that have been updated are compared to standard bow-ties in order to revise them. Knowledge on safety at the global and at local levels is hence as accurate as possible and memorized in a real time framework. As it relies on the communication between safety experts and local operators, this use of the bow-ties contributes therefore to organizational learning for safety.

  10. Searching for Variables and Models to Investigate Mediators of Learning from Multiple Representations

    Science.gov (United States)

    Rau, Martina A.; Scheines, Richard

    2012-01-01

    Although learning from multiple representations has been shown to be effective in a variety of domains, little is known about the mechanisms by which it occurs. We analyzed log data on error-rate, hint-use, and time-spent obtained from two experiments with a Cognitive Tutor for fractions. The goal of the experiments was to compare learning from…

  11. The representation of time course events in visual arts and the development of the concept of time in children: a preliminary study.

    Science.gov (United States)

    Actis-Grosso, Rossana; Zavagno, Daniele

    2008-01-01

    By means of a careful search we found several representations of dynamic contents of events that show how the depiction of the passage of time in the visual arts has evolved gradually through a series of modifications and adaptations. The general hypothesis we started to investigate is that the evolution of the representation of the time course in visual arts is mirrored in the evolution of the concept of time in children, who, according to Piaget (1946), undergo three stages in their ability to conceptualize time. Crucial for our hypothesis is Stage II, in which children become progressively able to link the different phases of an event, but vacillate between what Piaget termed 'intuitive regulations', not being able to understand all the different aspects of a given situation. We found several pictorial representations - mainly dating back to the 14th to 15th century - that seem to fit within Stage II of children's comprehension of time. According to our hypothesis, this type of pictorial representation should be immediately understood only by those children who are at Piaget's Stage II of time conceptualization. This implies that children at Stages I and III should not be able to understand the representation of time courses in the aforementioned paintings. An experiment was run to verify the agreement between children's placement within Piaget's three stages - as indicated by an adaptation of Piaget's original experiment - and their understanding of pictorial representations that should be considered as Stage II type of representations of time courses. Despite the small sample of children examined so far, results seem to support our hypothesis. A follow-up (Experiment 2) on the same children was also run one year later in order to verify other possible explanations. Results from the two experiments suggest that the study of the visual arts can aid our understanding of the development of the concept of time, and it can also help to distinguish between the

  12. The effect of learning on the function of monkey extrastriate visual cortex.

    Directory of Open Access Journals (Sweden)

    Gregor Rainer

    2004-02-01

    One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.

  13. Learning Objects and Grasp Affordances through Autonomous Exploration

    DEFF Research Database (Denmark)

    Kraft, Dirk; Detry, Renaud; Pugeault, Nicolas

    2009-01-01

    We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation...... image sequences as well as (3) a number of built-in behavioral modules on the one hand, and autonomous exploration on the other hand, the system is able to generate object and grasping knowledge through interaction with its environment....

  14. No evidence for visual context-dependency of olfactory learning in Drosophila

    Science.gov (United States)

    Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram

    2008-08-01

    How is behaviour organised across sensory modalities? Specifically, we ask, for the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall, and whether information about visual context is integrated into olfactory memory. We find that changing visual context between training and test does not impair olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors of reinforcement, even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.

  15. Identification of Quality Visual-Based Learning Material for Technology Education

    Science.gov (United States)

    Katsioloudis, Petros

    2010-01-01

    It is widely known that the use of visual technology enhances learning by providing a better understanding of the topic as well as motivating students. If all visual-based learning materials (tables, figures, photos, etc.) were equally effective in facilitating student achievement of all kinds of educational objectives, there would virtually be no…

  16. Basal ganglia-dependent processes in recalling learned visual-motor adaptations.

    Science.gov (United States)

    Bédard, Patrick; Sanes, Jerome N

    2011-03-01

    Humans learn and remember motor skills to permit adaptation to a changing environment. During adaptation, the brain develops new sensory-motor relationships that become stored in an internal model (IM) that may be retained for extended periods. How the brain learns new IMs and transforms them into long-term memory remains incompletely understood, since prior work has mostly focused on the learning process. A current model suggests that the basal ganglia, cerebellum, and their neocortical targets actively participate in forming new IMs but that a cerebellar cortical network would mediate automatization. However, a recent study (Marinelli et al. 2009) reported that patients with Parkinson's disease (PD), who have basal ganglia dysfunction, had adaptation rates similar to those of controls but demonstrated no savings at recall tests (24 and 48 h). Here, we assessed whether a longer training session, a feature known to increase long-term retention of IMs in healthy individuals, could allow PD patients to demonstrate savings. We recruited PD patients and age-matched healthy adults and used a visual-motor adaptation paradigm similar to that of Marinelli et al. (2009), doubling the number of training trials, and assessed recall after a short and a 24-h delay. We hypothesized that a longer training session would allow PD patients to develop an enhanced representation of the IM, as demonstrated by savings at the recall tests. Our results showed that PD patients had adaptation rates similar to those of controls but did not demonstrate savings at either recall test. We interpret these results as evidence that fronto-striatal networks are involved in the early to late phases of motor memory formation, but not during initial learning.

  17. Neural representations of novel objects associated with olfactory experience.

    Science.gov (United States)

    Ghio, Marta; Schulze, Patrick; Suchan, Boris; Bellebaum, Christian

    2016-07-15

    Object conceptual knowledge comprises information related to several motor and sensory modalities (e.g., for tools, what they look like and how to manipulate them). Whether and to what extent conceptual object knowledge is represented in the same sensory and motor systems recruited during object-specific learning experience is still a controversial question. A direct approach to assessing the experience-dependence of conceptual object representations is based on training with novel objects. The present study extended previous research, which focused mainly on the role of manipulation experience for tool-like stimuli, by considering sensory experience only. Specifically, we examined the impact of experience in the non-dominant olfactory modality on the neural representation of novel objects. Sixteen healthy participants visually explored a set of novel objects during the training phase while for each object an odor (e.g., peppermint) was presented (olfactory-visual training). As control conditions, a second set of objects was only visually explored (visual-only training), and a third set was not part of the training. In a post-training fMRI session, participants performed an old/new task with pictures of objects associated with olfactory-visual and visual-only training (old) and no-training objects (new). Although we did not find any evidence of activation in primary olfactory areas, the processing of olfactory-visual versus visual-only training objects elicited greater activation in the right anterior hippocampus, a region included in the extended olfactory network. This finding is discussed in terms of different functional roles of the hippocampus in olfactory processes. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Comprehensive Review on Handcrafted and Learning-Based Action Representation Approaches for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Allah Bux Sargano

    2017-01-01

    Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include intelligent video surveillance, ambient assisted living, human-computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafted features to deep learning techniques for HAR. Handcrafted representation-based approaches nevertheless remain widely used, owing to bottlenecks such as the computational complexity of deep learning techniques for activity recognition. At the same time, handcrafted representations are unable to handle complex scenarios due to their limited descriptive power, so resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussion of these approaches. In addition, the well-known public datasets available for experimentation and important applications of HAR are presented to provide further insight into the field. This is the first review of its kind to present all these aspects of HAR in a single article, with comprehensive coverage of each part. Finally, the paper concludes with important discussions and research directions in the domain of HAR.

  19. An Interactive Approach to Learning and Teaching in Visual Arts Education

    Directory of Open Access Journals (Sweden)

    Zlata Tomljenović

    2015-09-01

    The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative analysis of data on the basis of results obtained from a pedagogical experiment. The subjects of the research were 285 second- and fourth-grade students from four primary schools in the city of Rijeka, Croatia. Paintings made by the students in the initial and final stages of the pedagogical experiment were evaluated. The research results confirmed the hypotheses about the positive effect of interactive approaches to learning and teaching on the following variables: (1) knowledge and understanding of visual arts terms, (2) abilities and skills in the use of art materials and techniques within the framework of planned painting tasks, and (3) creativity in solving visual arts problems. The research results can help shape an optimised model for the planning and performance of visual arts education, and provide guidelines for planning professional development and the further professional education of teachers, with the aim of establishing more efficient learning and teaching of the visual arts in primary school.

  20. Learning of Grammar-Like Visual Sequences by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Aguilar, Jessica M.; Plante, Elena

    2014-01-01

    Purpose: Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. Method: In Study 1,…

  1. Incidental orthographic learning during a color detection task.

    Science.gov (United States)

    Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R

    2017-09-01

    Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Learning Programming Technique through Visual Programming Application as Learning Media with Fuzzy Rating

    Science.gov (United States)

    Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad

    2017-01-01

    Programming technique is one of the subjects taught at vocational high schools in Indonesia. This subject covers the theory and application of programming using Visual Programming. Students experience difficulties learning from purely textual materials, so it is necessary to develop media as a tool for delivering the learning materials. The objectives of this…

  3. Representations as Mediation between Purposes as Junior Secondary Science Students Learn about the Human Body

    Science.gov (United States)

    Olander, Clas; Wickman, Per-Olof; Tytler, Russell; Ingerman, Åke

    2018-01-01

    The aim of this article is to investigate students' meaning-making processes of multiple representations during a teaching sequence about the human body in lower secondary school. Two main influences are brought together to accomplish the analysis: on the one hand, theories on signs and representations as scaffoldings for learning and, on the…

  4. VStops: A Thinking Strategy and Visual Representation Approach in Mathematical Word Problem Solving toward Enhancing STEM Literacy

    Science.gov (United States)

    Abdullah, Nasarudin; Halim, Lilia; Zakaria, Effandi

    2014-01-01

    This study aimed to determine the impact of strategic thinking and visual representation approaches (VStops) on the achievement, conceptual knowledge, metacognitive awareness, awareness of problem-solving strategies, and student attitudes toward mathematical word problem solving among primary school students. The experimental group (N = 96)…

  5. Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data

    OpenAIRE

    Wang, Jing; Cheng, Yu; Feris, Rogerio Schmidt

    2016-01-01

    The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather conditions, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the c...

  6. Short-term perceptual learning in visual conjunction search.

    Science.gov (United States)

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies have shown that training can improve cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separate dimensions into a new functional unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigating the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training on color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly, and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared the same color or the same orientation with the trained target. Moreover, the sum of the transfer effects for the same-color target and the same-orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for the trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect differences in complexity and discriminability between the feature dimensions. These results suggest a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  7. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.
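
    The Hebbian-like, activity-dependent plasticity that such incremental learning models invoke can be illustrated with a generic textbook rule. The sketch below uses Oja's rule, in which a weight vector is nudged slightly on each individual experience and gradually comes to represent the dominant structure of its inputs (here, their principal component); the data and parameters are hypothetical and are not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "experiences": inputs whose dominant axis of variation lies
# along u1. Repeated exposure should gradually tune the weights to it.
u1 = np.array([1.0, 1.0]) / np.sqrt(2)      # principal direction (assumed)
u2 = np.array([1.0, -1.0]) / np.sqrt(2)
X = (rng.standard_normal((4000, 1)) * np.sqrt(3.0)) * u1 \
    + (rng.standard_normal((4000, 1)) * np.sqrt(0.3)) * u2

# Oja's rule: Hebbian strengthening (lr * y * x) plus a decay term that
# keeps the weights bounded -- one small update per experience.
w = rng.standard_normal(2) * 0.1
lr = 0.02
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

alignment = float(abs(w @ u1) / np.linalg.norm(w))
```

    The decay term (`- y * w`) is what prevents runaway Hebbian growth, mirroring the idea that representations stabilize rather than grow without bound over many exposures.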

  8. Interrogating the Conventional Boundaries of Research Methods in Social Sciences: The Role of Visual Representation in Ethnography

    Directory of Open Access Journals (Sweden)

    Nel Glass

    2008-05-01

    The author will propose that the use of performative social science is a means to deliberately interrogate long-held conventions of established research. The innovative role of visual art representation in data collection, analysis and public engagement with research will be discussed. Examples will be drawn from two postmodern feminist ethnographic studies that investigated academic professional development, resilience, hope and optimism in the UK, US, Australia and New Zealand from 1997-2005. Artwork was initially created as data collection and digitalised as representation to intentionally validate the voices of research participants, the researcher and viewers of the work. The research participants and viewers were given opportunities to actively engage with the visual work. Artwork complemented two additional research methods: critical conversational interviewing and reflective journaling. This paper will address the ways in which the inclusion of art methods contributed to and deepened data representation. The role of crafting artwork in the field, the artistic changes that represented the complexity of data analysis and engagement with the work will be explored. It will be argued that the creation of and engagement with artwork in research is an empowering and dynamic process for researchers and participants. It is an innovative means of representing intersubjectivity that results in reciprocity. URN: urn:nbn:de:0114-fqs0802509

  9. Fine-grained visual marine vessel classification for coastal surveillance and defense applications

    Science.gov (United States)

    Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    The need for automated visual content analysis has increased substantially due to the presence of large numbers of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging research problem owing to high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. The experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment targets coarse categorization as well as learning models of fine categories. The verification experiment involves identification of specific naval vessels, revealing whether a pair of images belongs to the same vessel with the help of the learned deep representations. Given the promising performance obtained, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.
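
    The pipeline described above — extract fixed deep visual representations, then train a support-vector machine over them — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the Gaussian clusters below stand in for CNN embeddings of two hypothetical fine categories, and the SVM is a plain hinge-loss linear classifier trained by subgradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "deep representations": in the paper these would come from a
# CNN's penultimate layer; here two separable Gaussian clusters play that
# role (hypothetical data, illustration only).
X0 = rng.normal(+1.0, 0.5, size=(50, 8))   # e.g. "frigate" embeddings
X1 = rng.normal(-1.0, 0.5, size=(50, 8))   # e.g. "submarine" embeddings
X = np.vstack([X0, X1])
y = np.array([+1] * 50 + [-1] * 50)

# Linear SVM trained by stochastic subgradient descent on the hinge loss.
w, b, lam, lr = np.zeros(8), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in rng.permutation(len(X)):
        if y[i] * (X[i] @ w + b) < 1:      # margin violated: hinge is active
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:                              # only the regularizer acts
            w -= lr * lam * w

acc = float(np.mean(np.sign(X @ w + b) == y))
```

    Swapping in real CNN features and one SVM per category (one-vs-rest) gives the coarse-classification setup the abstract describes; the verification scenario would instead compare pairs of embeddings.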

  10. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    Science.gov (United States)

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain in carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for a black/white grating (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana can exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  11. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
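
    The echo computation itself — cross-correlating the random luminance sequence with the occipital signal over a range of lags — can be reproduced on synthetic data. In this sketch the "EEG" is manufactured by convolving the stimulus with an assumed damped 10 Hz impulse response, so the cross-correlation should recover a ∼100 ms-cycle reverberation; the sampling rate, kernel shape and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)               # lags from 0 to 1 s

# Assumed echo impulse response: a damped ~10 Hz (100 ms cycle) oscillation.
kernel = np.exp(-t / 0.4) * np.cos(2 * np.pi * 10 * t)

# Random luminance sequence (the stimulus) and a synthetic occipital signal:
# the stimulus convolved with the echo kernel, plus measurement noise.
lum = rng.standard_normal(fs * 30)          # 30 s of random luminance
eeg = np.convolve(lum, kernel)[: len(lum)] + 0.5 * rng.standard_normal(len(lum))

# Cross-correlation at positive lags recovers the impulse response,
# i.e. the reverberating "perceptual echo".
n = len(lum)
xcorr = np.array([lum[: n - k] @ eeg[k:n] / n for k in range(len(kernel))])
```

    Plotting `xcorr` against `t` would show the periodic reverberation; the paper's central finding corresponds to this echo amplitude growing as the same sequence is repeated.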

  12. You see what you have learned. Evidence for an interrelation of associative learning and visual selective attention.

    Science.gov (United States)

    Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna

    2015-11-01

    Besides visual salience and observers' current intention, prior learning experience may influence the deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on the deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive in the learning task, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.
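The N2pc (and the ND/PD components) are quantified as contralateral-minus-ipsilateral difference waves averaged over a time window; a minimal sketch of that quantification, with an assumed, typical window rather than the study's exact one:

```python
import numpy as np

def lateralized_amplitude(contra, ipsi, times, window=(0.18, 0.30)):
    """Mean contralateral-minus-ipsilateral ERP difference in a time window.

    A more negative value over posterior electrodes in roughly the
    180-300 ms range is the conventional N2pc signature of attention
    deployment toward the lateralized stimulus.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return (contra - ipsi)[mask].mean()
```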

  13. The aftermath of memory retrieval for recycling visual working memory representations.

    Science.gov (United States)

    Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok

    2017-07-01

    We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)-namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.
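The probabilistic mixture model referred to above is conventionally a two-parameter von Mises-plus-uniform mixture over circular recall errors, where the guess rate captures stochastic loss of an item and the concentration parameter captures its resolution; a minimal sketch of that standard density (parameter names are illustrative):

```python
import numpy as np

def mixture_pdf(errors, guess_rate, kappa):
    """Density of recall errors (radians) under a von Mises + uniform mixture.

    guess_rate models stochastic loss of the item (random guessing),
    while kappa models the resolution of surviving memory representations.
    """
    vm = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    return (1 - guess_rate) * vm + guess_rate / (2 * np.pi)
```

Fitting guess_rate and kappa separately for tested and untested items is what allows an analysis like this one to attribute impairment to item loss rather than degraded precision.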

  14. PatterNet: a system to learn compact physical design pattern representations for pattern-based analytics

    Science.gov (United States)

    Lutich, Andrey

    2017-07-01

    This research considers the problem of generating compact vector representations of physical design patterns for analytics purposes in the semiconductor patterning domain. PatterNet uses a deep artificial neural network to learn a mapping of physical design patterns to a compact Euclidean hyperspace. Distances among mapped patterns in this space correspond to dissimilarities among patterns defined at the time of the network training. Once the mapping network has been trained, PatterNet embeddings can be used as feature vectors with standard machine learning algorithms, and pattern search, comparison, and clustering become trivial problems. PatterNet is inspired by the concepts developed within the framework of generative adversarial networks as well as FaceNet. Our method enables a deep neural network (DNN) to learn the compact representation directly, by supplying it with pairs of design patterns and the dissimilarity between these patterns defined by a user. In the simplest case, the dissimilarity is the area of the XOR of the two patterns. It is important to realize that our PatterNet approach is very different from the methods developed for deep learning on image data. In contrast to "conventional" pictures, the patterns in the CAD world are lists of polygon vertex coordinates. The method solely relies on the promise of deep learning to discover the internal structure of the incoming data and learn its hierarchical representations. Artificial intelligence arising from the combination of PatterNet and clustering analysis very precisely follows the intuition of patterning/optical proximity correction experts, paving the way toward human-like and human-friendly engineering tools.
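The simplest dissimilarity target mentioned above, the XOR area of two patterns, can be computed directly on rasterized layouts (rasterizing the polygon vertex lists is assumed to happen upstream):

```python
import numpy as np

def xor_dissimilarity(pattern_a, pattern_b):
    """Area of the symmetric difference (XOR) between two rasterized patterns.

    Each pattern is a boolean occupancy grid; the count of cells covered by
    exactly one of the two patterns serves as the user-defined dissimilarity.
    """
    return int(np.logical_xor(pattern_a, pattern_b).sum())
```

During training, pairs of patterns together with such dissimilarities supervise the embedding network so that Euclidean distances in the learned space reproduce them.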

  15. Scientific visualization as an expressive medium for project science inquiry

    Science.gov (United States)

    Gordin, Douglas Norman

    Scientists' external representations can help science education by providing powerful tools for students' inquiry. Scientific visualization is particularly well suited for this as it uses color patterns, rather than algebraic notation. Nonetheless, visualization must be adapted so it better fits with students' interests, goals, and abilities. I describe how visualization was adapted for students' expressive use and provide a case study where students successfully used visualization. The design process began with scientists' tools, data sets, and activities which were then adapted for students' use. I describe the design through scenarios where students create and analyze visualizations and present the software's functionality through visualization's sub-representations of data; color; scale, resolution, and projection; and examining the relationships between visualizations. I evaluate these designs through a "hot-house" study where a small group of students used visualization under near ideal circumstances for two weeks. Using videotapes of group interactions, software logs, and students' work I examine their representational and inquiry strategies. These inquiries were successful in that the group pursued their interest in world hunger by creating a visualization of daily per capita calorie consumption. Through creating the visualization the students engage in a process of meaning making where they interweave their prior experiences and beliefs with the representations they are using. This interweaving and other processes of collaborative visualization are shown when the students (a) computed values, (b) created a new color scheme, (c) cooperated to create the visualization, and (d) presented their work to other students. I also discuss problems that arose when students (a) used units without considering their meaning, (b) chose inappropriate comparisons in case-based reasoning, (c) did not participate equally during group work, (d) were confused about additive

  16. Rapid learning in visual cortical networks.

    Science.gov (United States)

    Wang, Ye; Dragoi, Valentin

    2015-08-26

    Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain poorly understood. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization was correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, or in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
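Spike-LFP synchronization of this kind is commonly quantified by a phase-locking value; a generic sketch, assuming spike times have already been mapped to instantaneous theta phases of the LFP (e.g. via a Hilbert transform of the band-passed signal):

```python
import numpy as np

def spike_lfp_plv(spike_phases):
    """Phase-locking value: length of the mean resultant vector of the LFP
    phases at which spikes occurred (1 = perfect locking, ~0 = none)."""
    phases = np.asarray(spike_phases, dtype=float)
    return float(np.abs(np.exp(1j * phases).mean()))
```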

  17. Students and Teachers as Developers of Visual Learning Designs with Augmented Reality for Visual Arts Education

    DEFF Research Database (Denmark)

    Buhl, Mie

    2017-01-01

    This paper reports on a project in which communication and digital media students collaborated with visual arts teacher students and their teacher trainer to develop visual digital designs for learning that involved Augmented Reality (AR) technology. The project exemplified a design upon which to discuss the potential for reengineering the traditional role of the teacher/learning designer as the only supplier and the students as the receivers of digital learning designs in higher education. The discussion applies actor-network theory and socio-material perspectives on education in order to enhance the meta-perspective of traditional teacher and student roles.

  18. Curriculum Q-Learning for Visual Vocabulary Acquisition

    OpenAIRE

    Zaidi, Ahmed H.; Moore, Russell; Briscoe, Ted

    2017-01-01

    The structure of curriculum plays a vital role in our learning process, both as children and adults. Presenting material in ascending order of difficulty that also exploits prior knowledge can have a significant impact on the rate of learning. However, the notion of difficulty and prior knowledge differs from person to person. Motivated by the need for a personalised curriculum, we present a novel method of curriculum learning for vocabulary words in the form of visual prompts. We employ a re...

  19. Anodal tDCS to V1 blocks visual perceptual learning consolidation.

    Science.gov (United States)

    Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan

    2013-06-01

    This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    Science.gov (United States)

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks, and have pushed forward the success of dictionary learning therein. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share many visual similarities) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are often difficult to obtain, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper, we propose a weakly supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
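The reconstructive core that such discriminative dictionaries extend is sparse coding over a fixed dictionary; a generic sketch using iterative soft-thresholding (ISTA), not the authors' weakly supervised algorithm:

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Sparse code a of signal x over dictionary D by minimizing
    0.5*||x - D a||^2 + lam*||a||_1 via iterative soft-thresholding."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - (D.T @ (D @ a - x)) / L          # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink
    return a
```

Discriminative variants then add supervision terms to the dictionary update while keeping this coding step essentially unchanged.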

  1. Mexican high school students' social representations of mathematics, its teaching and learning

    Science.gov (United States)

    Martínez-Sierra, Gustavo; Miranda-Tirado, Marisa

    2015-07-01

    This paper reports a qualitative research study that identifies Mexican high school students' social representations of mathematics. For this purpose, the social representations of 'mathematics', 'learning mathematics' and 'teaching mathematics' were identified in a group of 50 students. Focus group interviews were carried out in order to obtain the data. The constant comparative style was the strategy used for the data analysis because it allowed the categories to emerge from the data. The students' social representations are: (A) Mathematics is… (1) important for daily life, (2) important for careers and for life, (3) important because it is in everything that surrounds us, (4) a way to solve problems of daily life, (5) calculations and operations with numbers, (6) complex and difficult, (7) exact, and (8) a subject that develops thinking skills; (B) To learn mathematics is… (1) to possess knowledge to solve problems, (2) to be able to solve everyday problems, (3) to be able to make calculations and operations, and (4) to think logically to be able to solve problems; and (C) To teach mathematics is… (1) to transmit knowledge, (2) to know how to share it, (3) to transmit the reasoning ability, and (4) to show how to solve problems.

  2. Associating Animations with Concrete Models to Enhance Students' Comprehension of Different Visual Representations in Organic Chemistry

    Science.gov (United States)

    Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.

    2014-01-01

    The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…

  3. The Uses of Literacy in Studying Computer Games: Comparing Students' Oral and Visual Representations of Games

    Science.gov (United States)

    Pelletier, Caroline

    2005-01-01

    This paper compares the oral and visual representations which 12 to 13-year-old students produced in studying computer games as part of an English and Media course. It presents the arguments for studying multimodal texts as part of a literacy curriculum and then provides an overview of the games course devised by teachers and researchers. The…

  4. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

    In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations (e.g., downsampling and noise filtering) and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
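Once a codebook of visual words has been learned (e.g. by clustering training descriptors with k-means), the bag-of-visual-words representation of an image is a histogram of nearest-word assignments over its local descriptors; a minimal sketch:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Map local patch descriptors to a normalized bag-of-visual-words
    histogram: each descriptor votes for its nearest codeword."""
    # pairwise distances: (n_descriptors, n_words)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histogram is what a standard classifier (SVM, random forest, etc.) consumes for view-plane classification.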

  5. Learning from Your Network of Friends: A Trajectory Representation Learning Model Based on Online Social Ties

    KAUST Repository

    Alharbi, Basma Mohammed; Zhang, Xiangliang

    2017-01-01

    Location-Based Social Networks (LBSNs) capture individuals' whereabouts for a large portion of the population. To utilize this data for user (location)-similarity based tasks, one must map the raw data into a low-dimensional uniform feature space. However, due to the nature of LBSNs, many users have sparse and incomplete check-ins. In this work, we propose to overcome this issue by leveraging the network of friends when learning the new feature space. We first analyze the impact of friends on individuals' mobility, and show that individuals' trajectories are correlated with those of their friends and friends of friends (2-hop friends) in an online setting. Based on our observation, we propose a mixed-membership model that infers global mobility patterns from users' check-ins and their network of friends, without impairing the model's complexity. Our proposed model infers global patterns and learns new representations for both users and locations simultaneously. We evaluate the inferred patterns and compare the quality of the new user representation against baseline methods on a social link prediction problem.
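The 1-hop and 2-hop friendship neighborhoods analyzed above can be read directly off an adjacency-list graph; a small helper, separate from the mixed-membership model itself:

```python
def two_hop_friends(adj, user):
    """Return a user's friends and friends-of-friends (excluding the user
    and direct friends) from an adjacency-list social graph."""
    friends = set(adj.get(user, ()))
    two_hop = set()
    for f in friends:
        two_hop.update(adj.get(f, ()))
    return friends, two_hop - friends - {user}
```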

  7. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    Science.gov (United States)

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. A knowledge representation approach using fuzzy cognitive maps for better navigation support in an adaptive learning system.

    Science.gov (United States)

    Chrysafiadi, Konstantina; Virvou, Maria

    2013-12-01

    In this paper a knowledge representation approach for an adaptive and/or personalized tutoring system is presented. The domain knowledge should be represented in a realistic way in order to allow the adaptive and/or personalized tutoring system to deliver the learning material to each individual learner dynamically, taking into account her/his learning needs and her/his different learning pace. To achieve this, the domain knowledge representation has to depict the possible increase or decrease of the learner's knowledge. Considering that the domain concepts that constitute the learning material are not independent from each other, the knowledge representation approach has to allow the system to recognize either the domain concepts that are already partly or completely known to a learner, or the domain concepts that s/he has forgotten, taking into account the learner's knowledge level of the related concepts. In other words, the system should be informed about the knowledge dependencies that exist among the domain concepts of the learning material, as well as the strength of impact of each domain concept on others. Fuzzy Cognitive Maps (FCMs) seem to be an ideal way of representing this kind of information graphically. The suggested knowledge representation approach has been implemented in an adaptive e-learning system for teaching computer programming. The particular system was used by the students of a postgraduate program in the field of Informatics at the University of Piraeus and was compared with a corresponding system in which the domain knowledge was represented using the most commonly used technique of a network of concepts. The results of the evaluation were very encouraging.
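A Fuzzy Cognitive Map update of the usual textbook form, where concept activations (here, knowledge levels of domain concepts) evolve under signed, weighted dependencies; the sigmoid squashing function is the common choice, assumed here rather than taken from the paper:

```python
import numpy as np

def fcm_step(activations, weights, lam=1.0):
    """One synchronous update of a Fuzzy Cognitive Map.

    weights[j, i] is the signed strength of impact of concept j on
    concept i; activations are concept values kept in (0, 1).
    """
    raw = activations + activations @ weights
    return 1.0 / (1.0 + np.exp(-lam * raw))  # squash back into (0, 1)
```

Iterating this step propagates a change in one concept's knowledge level (e.g. a forgotten prerequisite) through the dependency graph to related concepts.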

  9. Designing electronic module based on learning content development system in fostering students’ multi representation skills

    Science.gov (United States)

    Resita, I.; Ertikanto, C.

    2018-05-01

    This study aims to develop an electronic module design based on the Learning Content Development System (LCDS) to foster students' multi-representation skills in physics subject material. The study uses a research and development method to design the product. It involves 90 students and 6 physics teachers who were randomly chosen from 3 different senior high schools in Lampung Province. The data were collected using questionnaires and analyzed using a quantitative descriptive method. Based on the data, 95% of the students use only one form of representation in solving physics problems, and the representation they tend to use is symbolic representation. Students are considered to understand a physics concept if they are able to change from one form of representation to the others. The LCDS-based electronic module design presents text, image, symbolic, video, and animation representations.

  10. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu; Liu, Si; Ahuja, Narendra; Yang, Ming-Hsuan; Ghanem, Bernard

    2014-01-01

    and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25

  11. Robust Visual Knowledge Transfer via Extreme Learning Machine Based Domain Adaptation.

    Science.gov (United States)

    Zhang, Lei; Zhang, David

    2016-08-10

    We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine based cross-domain network learning framework, called Extreme Learning Machine (ELM) based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by minimizing the -norm of the network output weights and the learning error simultaneously. The unlabeled target data, as useful knowledge, is also integrated as a fidelity term to guarantee stability during cross-domain learning. It minimizes the matching error between the learned classifier and a base classifier, such that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferable. Additionally, a manifold regularization with a Laplacian graph is incorporated, which is beneficial for semi-supervised learning. We also propose a multi-view extension, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
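The core ELM step, a random hidden layer whose output weights are determined analytically by regularized least squares, can be sketched as follows; the domain-adaptation, fidelity, and manifold terms of EDA are omitted:

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, reg=1e-3, rng=None):
    """Extreme Learning Machine: random hidden layer + analytic output weights."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # input weights, never trained
    b = rng.standard_normal(n_hidden)                # hidden biases, never trained
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # ridge-regularized least squares for the output weights (closed form)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because only beta is learned, and in closed form, training is fast, and it is this output-weight vector that EDA regularizes and transfers across domains.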

  12. Context generalization in Drosophila visual learning requires the mushroom bodies

    Science.gov (United States)

    Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin

    1999-08-01

    The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.

  13. The Rhetoric of Multi-Display Learning Spaces: exploratory experiences in visual art disciplines

    Directory of Open Access Journals (Sweden)

    Brett Bligh

    2010-11-01

    Multi-Display Learning Spaces (MD-LS) comprise technologies to allow the viewing of multiple simultaneous visual materials, modes of learning which encourage critical reflection upon these materials, and spatial configurations which afford interaction between learners and the materials in orchestrated ways. In this paper we provide an argument for the benefits of Multi-Display Learning Spaces in supporting complex, disciplinary reasoning within learning, focussing upon our experiences within postgraduate visual arts education. The importance of considering the affordances of the physical environment within education has been acknowledged by the recent attention given to Learning Spaces, yet within visual art disciplines the perception of visual material within a given space has long been seen as a key methodological consideration with implications for the identity of the discipline itself. We analyse the methodological, technological and spatial affordances of MD-LS to support learning, and discuss comparative viewing as a disciplinary method to structure visual analysis within the space which benefits from the simultaneous display of multiple partitions of visual evidence. We offer an analysis of the role of the teacher in authoring and orchestration and conclude by proposing a more general structure for what we term ‘multiple perspective learning’, in which the presentation of multiple pieces of visual evidence creates the conditions for complex argumentation within Higher Education.

  14. Implementation of ICARE learning model using visualization animation on biotechnology course

    Science.gov (United States)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that ensures students actively participate in the learning process, here supported by animated visualization media. ICARE comprises five key elements of children's and adults' learning experience: introduction, connection, application, reflection, and extension. The ICARE system is used to ensure that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students for a long time. This learning model was deemed capable of improving learning outcomes and interest in learning when the biotechnology learning process was taught with the ICARE model and animated visualizations: it motivated students to participate in the learning process, and learning outcomes increased compared with before. Applying the ICARE learning model with animated visualizations in the biotechnology course improved student learning results from an average middle-test score of 70.98 (percentage 75%) to a final-test score of 71.57 (percentage 68.63%). Students' interest in learning also increased, as seen from student activity in each cycle: the first cycle obtained an average value of 33.5 (adequate category), the second cycle an average value of 36.5 (good category), and the third cycle an average value of 36.5 (good category).

  15. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

    Science.gov (United States)

    Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

    2016-01-01

    Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

  16. Top-down attention based on object representation and incremental memory for knowledge building and inference.

    Science.gov (United States)

    Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho

    2013-10-01

    Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task specific top-down attention model to locate a target object based on its form and color representation along with a bottom-up saliency based on relativity of primitive visual features and some memory modules. In the proposed model top-down bias signals corresponding to the target form and color features are generated, which draw the preferential attention to the desired object by the proposed selective attention model in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention; one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
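
    The interplay described above between a bottom-up saliency map and a top-down bias toward memorized target features can be illustrated with a small sketch. This is not the GFTART network itself; the normalization, weighting scheme, and the mixing parameter alpha below are illustrative assumptions.

```python
import numpy as np

def _norm(m):
    """Scale a feature map to [0, 1] (a crude stand-in for feature relativity)."""
    return (m - m.min()) / (np.ptp(m) + 1e-9)

def bottom_up_saliency(feature_maps):
    """Bottom-up saliency: average of the normalized primitive feature maps."""
    return np.mean([_norm(m) for m in feature_maps], axis=0)

def top_down_bias(feature_maps, target_weights):
    """Top-down bias: weight each feature map by the memorized target features."""
    return sum(w * _norm(m) for w, m in zip(target_weights, feature_maps))

def attend(feature_maps, target_weights, alpha=0.5):
    """Combine both maps and return the most salient (row, col) location."""
    combined = (alpha * bottom_up_saliency(feature_maps)
                + (1 - alpha) * top_down_bias(feature_maps, target_weights))
    return np.unravel_index(np.argmax(combined), combined.shape)
```

    With alpha = 0, attention is driven purely by the target template; with alpha = 1, it is purely stimulus-driven.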

  17. Library Evaluation and Organizational Learning: A Questionnaire Study

    Science.gov (United States)

    Chen, Kuan-Nien

    2006-01-01

    This article focuses on organizational learning, particularly in the context of evaluation and organizational change. These concepts are discussed in terms of academic libraries. As part of this discussion, a model entitled Processes and Phases of Organizational Learning (PPOL) was developed which is a visual representation of the range of…

  18. Visual cognition

    Science.gov (United States)

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  19. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    Science.gov (United States)

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes than novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors 0270-6474/14/3415534-14$15.00/0.

  1. Student representation of magnetic field concepts in learning by guided inquiry

    International Nuclear Information System (INIS)

    Fatmaryanti, Siska Desy; Suparmi; Sarwanto; Ashadi

    2017-01-01

    The purpose of this study was to determine the change in students’ representation after an intervention of learning by guided inquiry. The population in this research was all students who took a fundamental physics course: 28 students of the 2016 academic year, Department of Physics Education, Faculty of Teacher Training and Education, University of Muhammadiyah Purworejo. This study employed a quasi-experimental design with one-group pre-test and post-test. The results showed that students’ average representation of the magnetic field was 28.6% before the implementation of guided inquiry and 71.4% after, indicating that students’ multi-representation ability increased. Moreover, the number of students able to write and draw based on experimental data increased from 10.7% to 21.4%, while the number of students giving no answer decreased from 28.5% to 10.7%. (paper)

  2. Student representation of magnetic field concepts in learning by guided inquiry

    Science.gov (United States)

    Desy Fatmaryanti, Siska; Suparmi; Sarwanto; Ashadi

    2017-01-01

    The purpose of this study was to determine the change in students’ representation after an intervention of learning by guided inquiry. The population in this research was all students who took a fundamental physics course: 28 students of the 2016 academic year, Department of Physics Education, Faculty of Teacher Training and Education, University of Muhammadiyah Purworejo. This study employed a quasi-experimental design with one-group pre-test and post-test. The results showed that students’ average representation of the magnetic field was 28.6% before the implementation of guided inquiry and 71.4% after, indicating that students’ multi-representation ability increased. Moreover, the number of students able to write and draw based on experimental data increased from 10.7% to 21.4%, while the number of students giving no answer decreased from 28.5% to 10.7%.

  3. Making perceptual learning practical to improve visual functions.

    Science.gov (United States)

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method previously used for amblyopia and myopia, together with a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by eliminating the need for reading glasses. The transfer across functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to improvement in unrelated visual functions. Thus, perceptual learning can be a practical method to improve visual functions in people with impaired or blurred vision.

  4. Teaching object concepts for XML-based representations.

    Energy Technology Data Exchange (ETDEWEB)

    Kelsey, R. L. (Robert L.)

    2002-01-01

    Students learned about object-oriented design concepts and knowledge representation through the use of a set of toy blocks. The blocks represented a limited and focused domain of knowledge and one that was physical and tangible. The blocks helped the students to better visualize, communicate, and understand the domain of knowledge as well as how to perform object decomposition. The blocks were further abstracted to an engineering design kit for water park design. This helped the students to work on techniques for abstraction and conceptualization. It also led the project from tangible exercises into software and programming exercises. Students employed XML to create object-based knowledge representations and Java to use the represented knowledge. The students developed and implemented software allowing a lay user to design and create their own water slide and then to take a simulated ride on their slide.
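
    As a rough illustration of the kind of XML-based object representation described above (the course used Java; the element and attribute names below are hypothetical, not taken from the original exercise), a water-slide design might be decomposed into segment objects and consumed in code like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML knowledge representation of a water-slide design;
# each <segment> element is one object in the decomposition.
SLIDE_XML = """
<waterSlide name="Tornado">
  <segment type="drop" length="12.0" angle="45"/>
  <segment type="curve" length="8.0" radius="3.5"/>
  <segment type="straight" length="5.0" angle="5"/>
</waterSlide>
"""

def total_length(xml_text):
    """Walk the represented slide object and sum its segment lengths."""
    root = ET.fromstring(xml_text)
    return sum(float(seg.get("length")) for seg in root.iter("segment"))

print(total_length(SLIDE_XML))  # 25.0
```

    A simulation (the "simulated ride") would similarly walk the segment objects, dispatching on the `type` attribute.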

  5. Mirror reversal and visual rotation are learned and consolidated via separate mechanisms: recalibrating or learning de novo?

    Science.gov (United States)

    Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn

    2014-10-08

    Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: Mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing control policy versus the acquisition of a new one, and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors 0270-6474/14/3413768-12$15.00/0.

  6. Interaction for visualization

    CERN Document Server

    Tominski, Christian

    2015-01-01

    Visualization has become a valuable means for data exploration and analysis. Interactive visualization combines expressive graphical representations and effective user interaction. Although interaction is an important component of visualization approaches, much of the visualization literature tends to pay more attention to the graphical representation than to interaction. The goal of this work is to strengthen the interaction side of visualization. Based on a brief review of general aspects of interaction, we develop an interaction-oriented view on visualization. This view comprises five key aspects.

  7. Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels.

    Science.gov (United States)

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Xiong, Jiechao; Gong, Shaogang; Wang, Yizhou; Yao, Yuan

    2016-03-01

    The problem of estimating subjective visual properties from image and video has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors. They thus require a large amount of pairwise labels to be collected. More importantly as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling both the outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
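
    The joint outlier-detection-and-ranking idea can be sketched minimally (this is not the authors' exact formulation; the unit preference margin, the soft-thresholding solver, and the threshold parameter below are illustrative assumptions): estimate global scores from the pairwise labels by least squares while modeling a sparse outlier term, so that a flipped comparison is absorbed by its outlier variable instead of corrupting the global ranking.

```python
import numpy as np

def robust_rank(n_items, pairs, lam=0.5, n_iter=20):
    """Jointly estimate global scores and sparse annotation outliers.

    pairs: list of (i, j) meaning an annotator judged item i > item j.
    Model: score[i] - score[j] = 1 + e_ij, with e_ij sparse (outliers).
    Alternates least squares for the scores with soft-thresholding for e.
    """
    A = np.zeros((len(pairs), n_items))
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0
    y = np.ones(len(pairs))          # unit margin for each comparison
    e = np.zeros(len(pairs))
    for _ in range(n_iter):
        s, *_ = np.linalg.lstsq(A, y - e, rcond=None)   # scores given outliers
        r = y - A @ s                                   # residual per comparison
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # soft threshold
    return s, np.abs(e) > 1e-6       # scores, outlier flags
```

    Unlike per-pair majority voting, a comparison is flagged when it is globally inconsistent with the ranking implied by all other pairs, even if it was only annotated once.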

  8. Age-related impairments in active learning and strategic visual exploration

    Directory of Open Access Journals (Sweden)

    Kelly L Brandstatt

    2014-02-01

    Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.

  9. Age-related impairments in active learning and strategic visual exploration.

    Science.gov (United States)

    Brandstatt, Kelly L; Voss, Joel L

    2014-01-01

    Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.

  10. Time representation in reinforcement learning models of the basal ganglia

    Directory of Open Access Journals (Sweden)

    Samuel Joseph Gershman

    2014-01-01

    Reinforcement learning models have been influential in understanding many aspects of basal ganglia function, from reward prediction to action selection. Time plays an important role in these models, but there is still no theoretical consensus about what kind of time representation is used by the basal ganglia. We review several theoretical accounts and their supporting evidence. We then discuss the relationship between reinforcement learning models and the timing mechanisms that have been attributed to the basal ganglia. We hypothesize that a single computational system may underlie both reinforcement learning and interval timing—the perception of duration in the range of seconds to hours. This hypothesis, which extends earlier models by incorporating a time-sensitive action selection mechanism, may have important implications for understanding disorders like Parkinson's disease in which both decision making and timing are impaired.
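
    One commonly discussed time representation in such models is the "complete serial compound", in which each time step since cue onset is its own feature. A minimal TD(0) sketch under that representation (the task layout and parameters below are illustrative) shows how value, and hence a reward-prediction-error signal, propagates backward from the time of reward:

```python
import numpy as np

def td_learn_csc(n_steps=20, reward_time=15, n_trials=500, alpha=0.1, gamma=0.98):
    """TD(0) with a 'complete serial compound' time representation:
    every time step after cue onset is a separate feature (tapped delay line),
    so the value function is one learned weight per time step."""
    w = np.zeros(n_steps)                      # value estimate for each step
    for _ in range(n_trials):
        for t in range(n_steps - 1):
            r = 1.0 if t + 1 == reward_time else 0.0
            delta = r + gamma * w[t + 1] - w[t]   # TD prediction error
            w[t] += alpha * delta
    return w
```

    After training, the learned values rise smoothly toward the reward time (w[t] ≈ gamma raised to the number of steps remaining), which is why the choice of time representation shapes the predicted dopamine-like error signal.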

  11. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively, of normal vector are encoded locally to their corresponding normal pattern histograms. They are finally fed to a sparse representation classifier enhanced by learning based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than original normal information. Moreover, the patch based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.

  12. Summarize to learn: summarization and visualization of text for ubiquitous learning

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Last, Mark; Verbeke, Mathias

    2013-01-01

    Visualizations can stand in many relations to texts – and, as research into learning with pictures has shown, they can become particularly valuable when they transform the contents of the text (rather than just duplicate its message or structure it). But what kinds of transformations can...... be particularly helpful in the learning process? In this paper, we argue that interacting with, and creating, summaries of texts is a key transformation technique, and we investigate how textual and graphical summarization approaches, as well as automatic and manual summarization, can complement one another...... to support effective learning....

  13. Low-rank and sparse modeling for visual analysis

    CERN Document Server

    Fu, Yun

    2014-01-01

    This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applications.

  14. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    Science.gov (United States)

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion-extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus perceptual learning.

  15. Alchemical and structural distribution based representation for universal quantum machine learning

    Science.gov (United States)

    Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole

    2018-06-01

    We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
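
    At its core, such a QML model is kernel ridge regression over a vector representation of each compound. A generic sketch of that machinery (a Gaussian kernel on plain feature vectors, not the paper's alchemical/structural distribution representation; sigma and the regularizer lam are illustrative):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix between two sets of representation vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, sigma=1.0, lam=1e-6):
    """Solve (K + lam*I) coef = y for the regression coefficients."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, coef, X_test, sigma=1.0):
    """Predict as a kernel-weighted sum over the training set."""
    return gaussian_kernel(X_test, X_train, sigma) @ coef
```

    The representation enters only through the distance inside the kernel, which is why improving it (as the paper does) directly lowers the learning curve without changing the regression machinery.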

  16. Visual Hybrid Development Learning System (VHDLS) framework for children with autism.

    Science.gov (United States)

    Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina

    2015-10-01

    The effect of education on children with autism serves as a relative cure for their deficits. As a result of this, they require special techniques to gain their attention and interest in learning as compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid Development Learning System (VHDLS) framework that is based on an instructional design model, multimedia cognitive learning theory, and learning style in order to guide software developers in developing learning systems for children with autism. The results from this study showed that the attention of children with autism increased more with the proposed VHDLS framework.

  17. Learning invariance from natural images inspired by observations in the primary visual cortex.

    Science.gov (United States)

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing in the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
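
    The paper's calcium-based rules are more elaborate, but the basic principle of a Hebbian growth term stabilized by a homeostatic decay term can be illustrated with the classic Oja rule, which converges to the first principal component of its input stream (the learning rate and epoch count below are illustrative):

```python
import numpy as np

def oja_learn(X, lr=0.01, n_epochs=50, seed=0):
    """Oja's rule: the Hebbian term (y * x) would grow without bound, so a
    homeostatic decay term (y^2 * w) keeps the weight vector at unit norm.
    The weights converge to the first principal component of the inputs."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                     # neuron's response
            w += lr * y * (x - y * w)     # Hebbian growth minus decay
    return w
```

    Pure Hebbian learning (w += lr * y * x) diverges; the subtracted y * w term is the minimal homeostatic regulation that keeps learning stable.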

  18. Differential learning and memory performance in OEF/OIF veterans for verbal and visual material.

    Science.gov (United States)

    Sozda, Christopher N; Muir, James J; Springer, Utaka S; Partovi, Diana; Cole, Michael A

    2014-05-01

    Memory complaints are particularly salient among veterans who experience combat-related mild traumatic brain injuries and/or trauma exposure, and represent a primary barrier to successful societal reintegration and everyday functioning. Anecdotally within clinical practice, verbal learning and memory performance frequently appears differentially reduced versus visual learning and memory scores. We sought to empirically investigate the robustness of a verbal versus visual learning and memory discrepancy and to explore potential mechanisms for a verbal/visual performance split. Participants consisted of 103 veterans with reported history of mild traumatic brain injuries returning home from U.S. military Operations Enduring Freedom and Iraqi Freedom referred for outpatient neuropsychological evaluation. Findings indicate that visual learning and memory abilities were largely intact while verbal learning and memory performance was significantly reduced in comparison, residing at approximately 1.1 SD below the mean for verbal learning and approximately 1.4 SD below the mean for verbal memory. This difference was not observed in verbal versus visual fluency performance, nor was it associated with estimated premorbid verbal abilities or traumatic brain injury history. In our sample, symptoms of depression, but not posttraumatic stress disorder, were significantly associated with reduced composite verbal learning and memory performance. Verbal learning and memory performance may benefit from targeted treatment of depressive symptomatology. Also, because visual learning and memory functions may remain intact, these might be emphasized when applying neurocognitive rehabilitation interventions to compensate for observed verbal learning and memory difficulties.

  19. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    Science.gov (United States)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for parametric representation of the human lip region. The functional diagram of the method is described, and implementation details, with an explanation of its key stages and features, are given. The results of automatic detection of the regions of interest are illustrated. The processing speed of the method on several computers with different performance levels is reported. This universal method allows applying the parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.

  20. Errors of Students Learning with React Strategy in Solving the Problems of Mathematical Representation Ability

    Science.gov (United States)

    Sari, Delsika Pramata; Darhim; Rosjanuardi, Rizky

    2018-01-01

    The purpose of this study was to investigate the errors experienced by students learning with the REACT strategy and with traditional instruction in solving problems involving mathematical representation ability. This study used a quasi-experimental approach with a static-group comparison design. The subjects of this study were 47 eighth grade students of junior high…

  1. Sensory modality specificity of neural activity related to memory in visual cortex.

    Science.gov (United States)

    Gibson, J R; Maunsell, J H

    1997-09-01

    Previous studies have shown that when monkeys perform a delayed match-to-sample (DMS) task, some neurons in inferotemporal visual cortex are activated selectively during the delay period when the animal must remember particular visual stimuli. This selective delay activity may be involved in short-term memory. It does not depend on visual stimulation: both auditory and tactile stimuli can trigger selective delay activity in inferotemporal cortex when animals expect to respond to visual stimuli in a DMS task. We have examined the overall modality specificity of delay period activity using a variety of auditory/visual cross-modal and unimodal DMS tasks. The cross-modal DMS tasks involved making specific long-term memory associations between visual and auditory stimuli, whereas the unimodal DMS tasks were standard identity matching tasks. Delay activity existed in auditory/visual cross-modal DMS tasks whether the animal anticipated responding to visual or auditory stimuli. No evidence of selective delay period activation was seen in a purely auditory DMS task. Delay-selective cells were relatively common in one animal, where they constituted up to 53% of the neurons tested with a given task. This was only the case for up to 9% of cells in a second animal. In the first animal, a specific long-term memory representation for learned cross-modal associations was observed in delay activity, indicating that this type of representation need not be purely visual. Furthermore, in this same animal, delay activity in one cross-modal task, an auditory-to-visual task, predicted correct and incorrect responses. These results suggest that neurons in inferotemporal cortex contribute to abstract memory representations that can be activated by input from other sensory modalities, but these representations are specific to visual behaviors.

  2. Brain activity associated with translation from a visual to a symbolic representation in algebra and geometry.

    Science.gov (United States)

    Leikin, Mark; Waisman, Ilana; Shaul, Shelley; Leikin, Roza

    2014-03-01

    This paper presents a small part of a larger interdisciplinary study that investigates brain activity (using event related potential methodology) of male adolescents when solving mathematical problems of different types. The study design links mathematics education research with neurocognitive studies. In this paper we performed a comparative analysis of brain activity associated with the translation from visual to symbolic representations of mathematical objects in algebra and geometry. Algebraic tasks require translation from graphical to symbolic representation of a function, whereas tasks in geometry require translation from a drawing of a geometric figure to a symbolic representation of its property. The findings demonstrate that electrical activity associated with the performance of geometrical tasks is stronger than that associated with solving algebraic tasks. Additionally, we found different scalp topography of the brain activity associated with algebraic and geometric tasks. Based on these results, we argue that problem solving in algebra and geometry is associated with different patterns of brain activity.

  3. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  4. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    Science.gov (United States)

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in survivors born extremely preterm (EP; Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  5. Using E-Learning Portfolio Technology To Support Visual Art Learning

    Directory of Open Access Journals (Sweden)

    Greer Jones-Woodham

    2009-08-01

    Full Text Available Inspired by self-directed learning (SDL theories, this paper uses learning portfolios as a reflective practice to improve student learning and develop personal responsibility, growth and autonomy in learning in a Visual Arts course. Students use PowerPoint presentations to demonstrate their concepts by creating folders that are linked to e-portfolios on the University website. This paper establishes the role of learning e-portfolios to improve teaching and learning as a model of reflection, collaboration and documentation in the making of art as a self-directed process. These portfolios link students' creative thinking to their conceptual frameworks. They also establish a process of inquiry using journals to map students' processes through their reflections and peer feedback. This practice argues that learning e-portfolios in studio art not only depend on a set of objectives whose means are justified by an agreed end but also depend on a practice that engages students' reflection about their actions while in their art-making practice. Using the principles of the maker as the intuitive and reflective practitioner, the making as the process in which the learning e-portfolios communicate the process and conceptual frameworks of learning and the eventual product, and the made as evidence of that learning in light of progress made, this paper demonstrates that learning-in-action and reflecting-in-and-on-action are driven by self-direction. With technology, students bring their learning context to bear with the use of SDL. Students' use of PowerPoint technology in making their portfolios is systematic and builds on students' competencies, as this process guides students' beliefs and actions about their work that is based on theory and concepts in response to a visual culture that is Trinidad and Tobago. Students' self-directed art-making process, as self-directed learning, models the process of articulated learning. Communicating about

  6. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    Science.gov (United States)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper, we propose an approach based on artificial neural networks for the recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and analysis of the experimental multichannel EEG data, we optimize the spatiotemporal representation of multichannel EEG to achieve close to 97% accuracy in recognizing the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for every interpretation. Since these features are common to all subjects, a single artificial network can classify with high quality the associated brain states of other subjects.
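    The record above describes the approach only at a high level; the authors' actual network architecture and EEG features are not given here. As a generic, minimal sketch of the idea — a small feed-forward network classifying two "brain states" — with synthetic Gaussian features standing in for the spatiotemporal EEG representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spatiotemporal EEG features: two "brain states",
# each a Gaussian cloud in a 16-dimensional feature space.
n, d = 200, 16
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# One-hidden-layer network trained with plain gradient descent.
W1 = rng.normal(0, 0.1, (d, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(300):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    p = sigmoid(h @ W2 + b2).ravel()    # predicted probability of state 1
    g = (p - y)[:, None] / len(y)       # cross-entropy gradient wrt logits
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)      # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
acc = np.mean((p > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

    The real study reports close to 97% accuracy on experimental EEG; the synthetic clusters here are invented only to make the training loop concrete.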

  7. Visual learning alters the spontaneous activity of the resting human brain: an fNIRS study.

    Science.gov (United States)

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to change over the course of development and after learning. Here, we investigated whether and how visual learning modified resting oxygenated hemoglobin (HbO) functional brain connectivity using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and frontal/central cortex during visual perceptual learning.
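    At its core, the functional-connectivity analysis the abstract describes amounts to correlating channel time series. A toy sketch with simulated signals in place of real HbO recordings (the shared sinusoid and noise levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic resting-state signals for 4 channels over 600 samples.
# Channels 0 and 1 share a common slow fluctuation; 2 and 3 are independent.
t = np.arange(600)
common = np.sin(2 * np.pi * t / 100.0)
ch0 = common + 0.3 * rng.normal(size=600)
ch1 = common + 0.3 * rng.normal(size=600)
ch2 = rng.normal(size=600)
ch3 = rng.normal(size=600)

signals = np.vstack([ch0, ch1, ch2, ch3])

# Functional connectivity as the pairwise Pearson correlation matrix.
fc = np.corrcoef(signals)

print(f"coupled pair r = {fc[0, 1]:.2f}, independent pair r = {fc[2, 3]:.2f}")
```

    Channels driven by a shared fluctuation show high correlation while independent channels do not, which is the kind of contrast RSFC studies quantify before and after learning.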

  8. Learning Sorting Algorithms through Visualization Construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed…

  9. Interactions between attention, context and learning in primary visual cortex.

    Science.gov (United States)

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

    Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  10. The Effect of Content Representation Design Principles on Users' Intuitive Beliefs and Use of E-Learning Systems

    Science.gov (United States)

    Al-Samarraie, Hosam; Selim, Hassan; Zaqout, Fahed

    2016-01-01

    A model is proposed to assess the effect of different content representation design principles on learners' intuitive beliefs about using e-learning. We hypothesized that the impact of the representation of course contents is mediated by the design principles of alignment, quantity, clarity, simplicity, and affordance, which influence the…

  11. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    Science.gov (United States)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of large numbers of electronic imaging diagnostic records (IDRs) year after year in a digital hospital, the IDR has become the main component of medical big data, which brings huge value to healthcare services, professionals, and administration. But the large volume of IDRs in a hospital also brings new challenges to healthcare professionals and services, as there may be too many IDRs for each patient for a doctor to review them all in a limited appointed time slot. In this presentation, we present an innovative method that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, called the Visual Patient (VP) in this presentation, based on the long-term archived electronic IDRs in a hospital, so that a doctor can quickly learn the historical medical status of a patient and quickly locate and retrieve the IDRs he or she is interested in within a limited appointed time slot. Method: The engineering implementation of VP was to build a 3D visual representation and indexing system called the VP system (VPS), including components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D visual rendering engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body for each patient was created and used to visually index all of the abstract information of each IDR for each patient; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the visual index content for each patient. Results: The VPS was implemented in a simulated clinical environment, including PACS/RIS, to show VP instances to doctors. We set up two evaluation scenarios in a hospital radiology department to evaluate whether

  12. A Visualization of Evolving Clinical Sentiment Using Vector Representations of Clinical Notes.

    Science.gov (United States)

    Ghassemi, Mohammad M; Mark, Roger G; Nemati, Shamim

    2015-09-01

    Our objective in this paper was to visualize the evolution of clinical language and sentiment with respect to several common population-level categories, including time in the hospital, age, mortality, gender, and race. Our analysis utilized seven years of unstructured free-text notes from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) database. The text data was partitioned by category and used to generate several high-dimensional vector space representations. We generated visualizations of the vector spaces using t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA). We also investigated representative words from clusters in the vector space. Lastly, we inferred the general sentiment of the clinical notes toward each parameter by gauging the average distance between positive and negative keywords and all other terms in the space. We found intriguing differences in the sentiment of clinical notes over time, outcome, and demographic features. We noted a decrease in the homogeneity and complexity of clusters over time for patients with poor outcomes. We also found greater positive sentiment for females, unmarried patients, and patients of African ethnicity.
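    The projection step the abstract mentions can be illustrated with a minimal PCA via SVD, using synthetic stand-ins for the note vectors (the two offset clusters below are invented; the paper's actual vector spaces were built from MIMIC notes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for high-dimensional vector representations of clinical notes:
# two note "categories" offset along a hidden direction in 50-d space.
direction = rng.normal(size=50)
direction /= np.linalg.norm(direction)
notes = np.vstack([
    rng.normal(0, 1, (100, 50)) + 4 * direction,
    rng.normal(0, 1, (100, 50)) - 4 * direction,
])

# PCA via SVD of the centered data; keep the first two components.
centered = notes - notes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T   # 2-d coordinates suitable for plotting

# The leading component should capture the between-category offset.
explained = S ** 2 / np.sum(S ** 2)
print(f"variance explained by PC1: {explained[0]:.2f}")
```

    t-SNE, the paper's other projection method, is nonlinear and has no such closed form; PCA is shown here because it is the simpler of the two techniques named.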

  13. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.

  14. Visual Representations of Microcosm in Textbooks of Chemistry: Constructing a Systemic Network for Their Main Conceptual Framework

    Science.gov (United States)

    Papageorgiou, George; Amariotakis, Vasilios; Spiliotopoulou, Vasiliki

    2017-01-01

    The main objective of this work is to analyse the visual representations (VRs) of the microcosm depicted in nine Greek secondary chemistry school textbooks of the last three decades in order to construct a systemic network for their main conceptual framework and to evaluate the contribution of each one of the resulting categories to the network.…

  15. Visual representation of medical information: the importance of considering the end-user in the design of medical illustrations.

    Science.gov (United States)

    Scheltema, Emma; Reay, Stephen; Piper, Greg

    2018-01-01

    This practice-led research project explored visual representation through illustrations designed to communicate often complex medical information for different users within Auckland City Hospital, New Zealand. Media and tools were manipulated to achieve varying degrees of naturalism or abstraction from reality in the creation of illustrations for a variety of real-life clinical projects, and user feedback on illustration preference was gathered from both medical professionals and patients. While all users preferred the most realistic representations of medical information among the illustrations presented, patients often favoured illustrations that depicted a greater amount of information than professionals suggested was necessary.

  16. "Triangulation": An Expression for Stimulating Metacognitive Reflection Regarding the Use of "Triplet" Representations for Chemistry Learning

    Science.gov (United States)

    Thomas, Gregory P.

    2017-01-01

    Concerns persist regarding high school students' chemistry learning. Learning chemistry is challenging because of chemistry's innate complexity and the need for students to construct associations between different, yet related representations of matter and its changes. Students should be taught to reason about and consider chemical phenomena using…

  17. Independent Interactive Inquiry-Based Learning Modules Using Audio-Visual Instruction In Statistics

    OpenAIRE

    McDaniel, Scott N.; Green, Lisa

    2012-01-01

    Simulations can make complex ideas easier for students to visualize and understand. It has been shown that guidance in the use of these simulations enhances students’ learning. This paper describes the implementation and evaluation of the Independent Interactive Inquiry-based (I3) Learning Modules, which use existing open-source Java applets, combined with audio-visual instruction. Students are guided to discover and visualize important concepts in post-calculus and algebra-based courses in p...

  18. Supervised Learning for Visual Pattern Classification

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of the topics and major ideas of supervised learning for visual pattern classification. Two prevalent algorithms, i.e., the support vector machine (SVM) and the boosting algorithm, are briefly introduced. SVMs and boosting algorithms are two hot topics of recent research in supervised learning. SVMs improve the generalization of the learning machine by implementing the rule of structural risk minimization (SRM). They exhibit good generalization even when little training data are available. The boosting algorithm can boost a weak classifier to a strong classifier by means of classifier combination. This algorithm provides a general way of producing a classifier with high generalization capability from a great number of weak classifiers.
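    As a concrete illustration of the boosting idea described above — combining weak threshold classifiers into a strong one — here is a bare-bones AdaBoost with decision stumps on a toy 1-d problem. This is a generic textbook version of the algorithm, not the chapter's code; the dataset is invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1-d dataset: labels +1 inside an interval, -1 outside, so that no
# single threshold stump (the weak learner) can classify it alone.
x = rng.uniform(-1, 1, 300)
y = np.where(np.abs(x) < 0.5, 1.0, -1.0)

def best_stump(x, y, w):
    """Weighted-error-minimizing threshold stump over candidate thresholds."""
    best = None
    for thr in np.linspace(-1, 1, 41):
        for sign in (1.0, -1.0):
            pred = sign * np.where(x < thr, 1.0, -1.0)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best

w = np.full(len(x), 1.0 / len(x))   # uniform example weights to start
ensemble = []
for _ in range(20):
    err, thr, sign = best_stump(x, y, w)
    err = max(err, 1e-10)
    alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
    pred = sign * np.where(x < thr, 1.0, -1.0)
    w *= np.exp(-alpha * y * pred)          # re-weight: focus on mistakes
    w /= w.sum()
    ensemble.append((alpha, thr, sign))

# Strong classifier: sign of the weighted vote of all stumps.
F = sum(a * s * np.where(x < t, 1.0, -1.0) for a, t, s in ensemble)
acc = np.mean(np.sign(F) == y)
print(f"ensemble training accuracy: {acc:.2f}")
```

    Each round the examples the current stump gets wrong are up-weighted, so later stumps concentrate on the hard cases — the "classifier combination" the abstract refers to.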

  19. Words, shape, visual search and visual working memory in 3-year-old children.

    Science.gov (United States)

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  20. Active-duty military service members' visual representations of PTSD and TBI in masks.

    Science.gov (United States)

    Walker, Melissa S; Kaimal, Girija; Gonzaga, Adele M L; Myers-Coffman, Katherine A; DeGraba, Thomas J

    2017-12-01

    Active-duty military service members have a significant risk of sustaining physical and psychological trauma resulting in traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD). Within an interdisciplinary treatment approach at the National Intrepid Center of Excellence, service members participated in mask making during art therapy sessions. This study presents an analysis of the mask-making experiences of service members (n = 370) with persistent symptoms from combat- and mission-related TBI, PTSD, and other concurrent mood issues. Data sources included mask images and therapist notes collected over a five-year period. The data were coded and analyzed using grounded theory methods. Findings indicated that mask making offered visual representations of the self related to individual personhood, relationships, community, and society. Imagery themes referenced the injury, relational supports/losses, identity transitions/questions, cultural metaphors, existential reflections, and conflicted sense of self. These visual insights provided an increased understanding of the experiences of service members, facilitating their recovery.

  1. Reinforcement learning for dpm of embedded visual sensor nodes

    International Nuclear Information System (INIS)

    Khani, U.; Sadhayo, I. H.

    2014-01-01

    This paper proposes an RL (Reinforcement Learning)-based DPM (Dynamic Power Management) technique to learn timeout policies during a visual sensor node's operation, which has multiple power/performance states. As opposed to the widely used static timeout policies, our proposed DPM policy, also referred to as OLTP (Online Learning of Timeout Policies), learns to dynamically change the timeout decisions in the different node states, including the non-operational states. The selection of timeout values in different power/performance states of a visual sensing platform is based on workload estimates derived from an ML-ANN (Multi-Layer Artificial Neural Network) and an objective function given by weighted performance and power parameters. The DPM approach is also able to dynamically adjust the power-performance weights online to satisfy a given constraint on either power consumption or performance. Results show that the proposed learning algorithm explores the power-performance tradeoff under non-stationary workload and outperforms other DPM policies. It also performs online adjustment of the tradeoff parameters in order to meet a user-specified constraint. (author)
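    The OLTP scheme above (workload estimation via an ML-ANN, multiple node states) is more elaborate than can be shown here. As a minimal sketch of the underlying idea — learning a good timeout value online from observed costs — here is a single-state Q-learning loop over a toy cost model. The timeout candidates, cost constants, and request distribution are all invented for illustration:

```python
import random

random.seed(4)

# Toy model of the trade-off the node faces: after activity, the time until
# the next request is random. Sleeping too early costs a wake-up penalty;
# idling too long wastes power. All constants below are hypothetical.
TIMEOUTS = [1, 5, 10, 20]          # candidate timeout values (the actions)

def cost(timeout, next_request):
    if next_request <= timeout:    # stayed awake long enough: idle power only
        return 0.1 * next_request
    return 0.1 * timeout + 2.0     # slept, then paid the wake-up penalty

# Single-state Q-learning (a bandit over timeout actions), epsilon-greedy.
Q = {a: 0.0 for a in TIMEOUTS}     # running estimate of expected cost per action
alpha, eps = 0.05, 0.2
for _ in range(5000):
    a = random.choice(TIMEOUTS) if random.random() < eps else min(Q, key=Q.get)
    gap = random.expovariate(1 / 8.0)      # inter-request gap, mean 8 time units
    Q[a] += alpha * (cost(a, gap) - Q[a])  # move estimate toward observed cost

best = min(Q, key=Q.get)
print(f"learned timeout: {best}")
```

    Because the toy workload has a long mean inter-request gap relative to the wake-up penalty, the learner should settle on one of the longer timeouts; under a non-stationary workload the same update rule keeps adapting, which is the point of the online approach.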

  2. Dissociable loss of the representations in visual short-term memory.

    Science.gov (United States)

    Li, Jie

    2016-01-01

    The present study investigated the manner in which information in visual short-term memory (VSTM) is lost. Participants memorized four items, one of which was later given higher priority by a retro-cue. Participants were then required to detect a possible change, either large or small, to one of the items. The results showed that detection performance for small changes to the uncued items was poorer than for the cued item, yet a large change to any of the four memory items could be detected perfectly, indicating that the uncued representations lost some detailed information yet still retained some basic features in VSTM. The present study suggests that after being encoded into VSTM, information is not lost in an object-based manner; rather, the features of an item remain dissociable, so that they can be lost separately.

  3. Visual artificial grammar learning in dyslexia : A meta-analysis

    NARCIS (Netherlands)

    van Witteloostuijn, Merel; Boersma, Paul; Wijnen, Frank; Rispens, Judith

    2017-01-01

    Background: Literacy impairments in dyslexia have been hypothesized to be (partly) due to an implicit learning deficit. However, studies of implicit visual artificial grammar learning (AGL) have often yielded null results. Aims: The aim of this study is to weigh the evidence collected thus far by

  4. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    Science.gov (United States)

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach based on learning a correlation filter that strengthens the peak of the correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, such as those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
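    The paper's elastic-net-constrained filter is not reproduced here. As background, a standard ridge-regularized (MOSSE-style) correlation filter, learned in closed form in the Fourier domain from a single synthetic patch, shows what "peak of the correlation response" refers to:

```python
import numpy as np

rng = np.random.default_rng(5)

# A synthetic 64x64 "target patch" and a desired Gaussian response peaked
# at the patch centre.
size = 64
patch = rng.normal(size=(size, size))
yy, xx = np.mgrid[0:size, 0:size]
g = np.exp(-((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 3.0 ** 2))

# Closed-form single-image correlation filter in the Fourier domain
# (ridge-regularized, MOSSE-style): H* = (G . conj(F)) / (F . conj(F) + lambda).
F = np.fft.fft2(patch)
G = np.fft.fft2(g)
lam = 1e-2
H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)

# Applying the filter to the training patch reproduces a sharp central peak.
response = np.real(np.fft.ifft2(H_conj * F))
peak = np.unravel_index(np.argmax(response), response.shape)
print(f"response peak at {peak}")
```

    A tracker localizes the target at the response peak in each new frame; the cited paper's contribution is a constraint that keeps this peak strong when some features are distractive.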

  5. Neuro-symbolic representation learning on biological knowledge graphs

    KAUST Repository

    Alshahrani, Mona

    2017-04-21

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. Availability: https://github.com/bio-ontology-research-group/walking-rdf-and-owl. Contact: robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online.
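    The authors' method combines symbolic reasoning with neural embeddings; that pipeline is not reproduced here. As a generic illustration of knowledge-graph embedding learning, here is a tiny TransE-style training loop on an invented set of triples (the entity/relation counts, dimensions, and hyperparameters are all made up):

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny hypothetical knowledge graph: (head, relation, tail) index triples.
n_ent, n_rel, dim = 6, 2, 8
triples = [(0, 0, 1), (1, 0, 2), (3, 1, 4), (4, 1, 5), (0, 1, 5)]

E = rng.normal(0, 0.1, (n_ent, dim))   # entity embeddings
R = rng.normal(0, 0.1, (n_rel, dim))   # relation embeddings

def dist(h, r, t):
    """TransE score: true triples should satisfy E[h] + R[r] ~ E[t]."""
    return np.linalg.norm(E[h] + R[r] - E[t])

# Margin-based training: pull true triples together, push corrupted ones apart.
lr, margin = 0.05, 1.0
for _ in range(500):
    for h, r, t in triples:
        t_bad = rng.integers(n_ent)    # corrupt the tail at random
        if dist(h, r, t) + margin > dist(h, r, t_bad):
            grad = E[h] + R[r] - E[t]
            E[h] -= lr * grad; R[r] -= lr * grad; E[t] += lr * grad
            grad_bad = E[h] + R[r] - E[t_bad]
            E[h] += lr * grad_bad; R[r] += lr * grad_bad; E[t_bad] -= lr * grad_bad

pos = np.mean([dist(h, r, t) for h, r, t in triples])
neg = np.mean([dist(h, r, rng.integers(n_ent)) for h, r, _ in triples])
print(f"mean distance: true {pos:.2f} vs corrupted {neg:.2f}")
```

    After training, true triples score better (smaller distance) than corrupted ones, which is the property edge-prediction tasks exploit; the paper's embeddings additionally encode implicit knowledge obtained by automated reasoning.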

  6. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2017-12-01

    Full Text Available An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions, with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or.” These words do not refer directly to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an

  7. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    International Nuclear Information System (INIS)

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong; Xiong Xiaoping; Merchant, Thomas E.

    2010-01-01

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and among children who did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining the sensitivity and specificity necessary to detect modest effects.

  8. Three visual techniques to enhance interprofessional learning.

    Science.gov (United States)

    Parsell, G; Gibbs, T; Bligh, J

    1998-07-01

    Many changes in the delivery of healthcare in the UK have highlighted the need for healthcare professionals to learn to work together as teams for the benefit of patients. Whatever the profession or level, whether for postgraduate education and training, continuing professional development, or for undergraduates, learners should have an opportunity to learn about, and with, other healthcare practitioners in a stimulating and exciting way. Learning to understand how people think, feel, and react, and the parts they play at work, both as professionals and individuals, can only be achieved through sensitive discussion and exchange of views. Teaching and learning methods must provide opportunities for this to happen. This paper describes three small-group teaching techniques which encourage a high level of learner collaboration and team-working. Learning content is focused on real-life healthcare issues, and strong visual images are used to stimulate lively discussion and debate. Each description includes the exercise's learning objectives, basic equipment and resources, and learning outcomes.

  9. Visual representations of Iranian transgenders.

    Science.gov (United States)

    Shakerifar, Elhum

    2011-01-01

    Transsexuality in Iran has gained much attention and media coverage in the past few years, particularly in its questionable depiction as a permitted loophole for homosexuality, which is prohibited under Iran's Islamic-inspired legal system. Of course, attention in the West is also encouraged by the “shock” that sex change is available in Iran, a country that Western media and society delight in portraying as monolithically repressive. As a result, Iranian filmmakers inevitably have their own agendas, which are unsurprisingly brought into the filmmaking process—from a desire to sell a product that will appeal to the Western market, to films that endorse specific socio-political agendas. This paper is an attempt to situate sex change and representations of sex change in Iran within a wider theoretical framework than the frequently reiterated conflation with homosexuality, to open and engage with a wider debate concerning transsexuality in Iran, and to specifically analyze the representation of transsexuality, in view of its current prominent presence in media.

  10. Redefining "Learning" in Statistical Learning: What Does an Online Measure Reveal About the Assimilation of Visual Regularities?

    Science.gov (United States)

    Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram

    2017-10-07

    From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative forced-choice questions that follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and that it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL. © 2017 Cognitive Science Society, Inc.
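The online signature of learning described here, faster self-paced responses to statistically predictable elements, can be sketched with a toy simulation. The stream structure, the RT model, and all parameters below are invented for illustration, not taken from the paper:

```python
import random
random.seed(0)

# Toy simulation of an online, self-paced SL measure: items occur in fixed
# triplets, so later triplet positions are predictable. A hypothetical
# learner speeds up on predictable items as exposure grows; the RT
# advantage over unpredictable items traces the learning trajectory.

def simulated_rt(predictable, block, base=600.0, gain=25.0, sd=20.0):
    """Self-paced response time in ms for one item in a given exposure block."""
    speedup = gain * block if predictable else 0.0
    return base - speedup + random.gauss(0, sd)

trajectory = []
for block in range(1, 6):
    pred = [simulated_rt(True, block) for _ in range(30)]
    unpred = [simulated_rt(False, block) for _ in range(30)]
    trajectory.append(sum(unpred) / 30 - sum(pred) / 30)

# the RT advantage for predictable items grows with exposure,
# which is the online learning trajectory the paradigm measures
```

Unlike an offline forced-choice post-test, this trajectory is available continuously during familiarization, which is the point the abstract argues.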

  11. Visual and Haptic Mental Rotation

    Directory of Open Access Journals (Sweden)

    Satoshi Shioiri

    2011-10-01

    Full Text Available It is well known that visual information can be retained in several types of memory systems. Haptic information can also be retained in memory, because we can repeat a hand movement. There may be a common memory system for vision and action. On the one hand, it may be convenient to have a common system for acting on visual information. On the other hand, each modality may have its own memory and use the retained information in a modality-specific form, without transformation. We compared the memory properties of visual and haptic information. There is a phenomenon known as mental rotation, which is possibly unique to visual representation. Mental rotation is a phenomenon in which reaction time increases with the angle of a visual target (e.g., a letter) to be identified. The phenomenon is explained by the time needed to rotate the representation of the target in the visual system. In this study, we compared the effect of stimulus angle on visual and haptic shape identification (two-line shapes were used). We found a typical mental rotation effect for the visual stimulus; however, no such effect was found for the haptic stimulus. This difference cannot be explained by modality differences in response, because a similar difference was found even when a haptic response was used for the visual representation and a visual response was used for the haptic representation. These results indicate that there are independent systems for visual and haptic representations.
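The classic mental-rotation signature is a reaction time that grows roughly linearly with stimulus angle, so a least-squares slope of RT over angle separates the two patterns the abstract contrasts. The data below are invented for illustration only:

```python
# Invented illustrative data: a positive least-squares slope of reaction
# time over stimulus angle is the mental-rotation signature (visual-like
# pattern); a near-zero slope matches the flat haptic pattern reported.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

angles = [0, 60, 120, 180]            # degrees of rotation
visual_rt = [520, 640, 770, 900]      # ms, grows with angle (rotation-like)
haptic_rt = [610, 605, 615, 608]      # ms, roughly flat

print(round(slope(angles, visual_rt), 2))  # 2.12 ms per degree
print(round(slope(angles, haptic_rt), 2))  # 0.01 ms per degree
```

In the actual study the inference runs over per-trial RTs rather than condition means, but the slope-versus-flat contrast is the same.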

  12. Concrete and abstract visualizations in history learning tasks

    NARCIS (Netherlands)

    Prangsma, Maaike; Van Boxtel, Carla; Kanselaar, Gellof; Kirschner, Paul A.

    2010-01-01

    Prangsma, M. E., Van Boxtel, C. A. M., Kanselaar, G., & Kirschner, P. A. (2009). Concrete and abstract visualizations in history learning tasks. British Journal of Educational Psychology, 79, 371-387.

  13. Advocating for a Population-Specific Health Literacy for People With Visual Impairments.

    Science.gov (United States)

    Harrison, Tracie; Lazard, Allison

    2015-01-01

    Health literacy, the ability to access, process, and understand health information, is enhanced by the visual senses among people who are typically sighted. Emotions, meaning, speed of knowledge transfer, level of attention, and degree of relevance are all manipulated by the visual design of health information when people can see. When consumers of health information are blind or visually impaired, they access, process, and understand their health information through a multitude of methods, using a variety of accommodations that depend upon the severity and type of their impairment. They are taught, or learn on their own, how to accommodate their differences by using alternative sensory experiences and interpretations. In this article, we argue that due to the unique and powerful aspects of visual learning, and due to the differences in knowledge creation when people are not visually oriented, health literacy must be considered a unique construct for people with visual impairment, one that requires a distinctive theoretical basis for determining the impact of their mind-constructed representations of health.

  14. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms make it possible to tackle the quality

  15. Learning with Multiple Representations: An Example of a Revision Lesson in Mathematics

    Science.gov (United States)

    Wong, Darren; Poo, Sng Peng; Hock, Ng Eng; Kang, Wee Loo

    2011-01-01

    We describe an example of learning with multiple representations in an A-level revision lesson on mechanics. The context of the problem involved the motion of a ball thrown vertically upwards in air and studying how the associated physical quantities changed during its flight. Different groups of students were assigned to look at the ball's motion…

  16. Perceptual geometry of space and form: visual perception of natural scenes and their virtual representation

    Science.gov (United States)

    Assadi, Amir H.

    2001-11-01

    Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on study of geometry from the perspective of visual perception, and in turn, apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in perception of form and space, we have examined perception of geometry of natural surfaces and curves, e.g. as in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach to simulate or recreate images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of perception of space and form, and the (computer graphics) problem of rendering scenes to visually invoke virtual presence.

  17. Enhanced visual statistical learning in adults with autism

    Science.gov (United States)

    Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József

    2014-01-01

    Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning of these shape-pairs with high covariation was superior in adults with ASD compared with age-matched controls, while performance in children with ASD was no different from that of controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115

  18. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates (mental representations of the objects they are attempting to locate) to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: (1) by contaminating searchers' templates with inaccurate features, and (2) by introducing extraneous, unhelpful features to the template. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  19. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

    Neil Bruce

    2011-07-01

    Full Text Available In visual search experiments there exists a variety of paradigms in which a symmetric set of experimental conditions yields asymmetric task performance. A variety of examples of this currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking on computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in determining search performance. Overall, we demonstrate that all of these asymmetries have a common origin: they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general, as well as predictions of novel search asymmetries.

  20. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu

    2014-06-19

    Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. Using linear sparse combinations of dictionary templates, the proposed method jointly learns the sparse representations of image regions corresponding to candidate particles by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive, since the temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be solved efficiently by a sequence of closed-form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.
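The sparse-representation step the abstract describes, approximating a candidate image region as a sparse linear combination of dictionary templates, can be sketched with plain ISTA for L1-regularized least squares. This is a generic sparse-coding sketch on synthetic data, not the paper's joint low-rank solver or its particle-pruning scheme:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 10))
D /= np.linalg.norm(D, axis=0)             # unit-norm "templates"
y = 0.8 * D[:, 2] + 0.05 * rng.normal(size=32)   # region built from template 2
x = sparse_code(D, y)
# the recovered code concentrates its weight on template 2
```

In the tracker, a candidate particle whose best sparse reconstruction error is large would be pruned; the low-rank constraint additionally couples the codes of all particles in a frame.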