WorldWideScience

Sample records for learning visual representation

  1. Learning STEM Through Integrative Visual Representations

    Science.gov (United States)

    Virk, Satyugjit Singh

Previous cognitive models of memory have not comprehensively accounted for the internal cognitive load of chunking isolated information, emphasizing only the external cognitive load of visual presentation. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, which draws on the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents, in which the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions than students presented with isolated animations of these subcomponents. This is because the internal attentional cognitive load of chunking these concepts is greatly reduced, and hence the overall cognitive load is lower for the integrated visuals group than for the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time-on-task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Free response essay subcomponent subscores demonstrated significant differences in favor of the integrated group, with some supporting evidence from the diagram labeling section. Free response, short answer, What-If (problem solving), and diagram labeling detailed-interrelationship subscores demonstrated that the integrated group did indeed learn the extra material they were presented with. 
This data demonstrating the integrated group learned the extra material they were presented with provides some initial

  2. Learned image representations for visual recognition

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This thesis addresses the problem of extracting image structures for representing images effectively in order to solve visual recognition tasks. Problems from diverse research areas (medical imaging, material science and food processing) have motivated large parts of the methodological development...

  3. Learning Visual Representations for Perception-Action Systems

    DEFF Research Database (Denmark)

    Piater, Justus; Jodogne, Sebastien; Detry, Renaud

    2011-01-01

We discuss vision as a sensory modality for systems that effect actions in response to perceptions. While the internal representations informed by vision may be arbitrarily complex, we argue that in many cases it is advantageous to link them rather directly to action via learned mappings. These arguments are illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split … and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success…

  4. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    Science.gov (United States)

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  5. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

Sparsity-inducing regularization is an important component of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper we investigate the use of non-convex regularizers for this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularizer (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, therefore imposing strong sparsity and...
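The abstract does not give the regularizer's exact functional form, but the general idea it describes (penalize weights below a threshold at the full rate, let larger weights "leak" through at a reduced rate) can be sketched as follows; the threshold and rate values here are illustrative assumptions, not the authors' settings:

```python
def leaky_capped_penalty(weights, theta=0.1, lam_below=1.0, lam_above=0.01):
    """Illustrative 'leaky capped' penalty (not the authors' exact form).
    Magnitudes below the threshold theta are penalized at the full rate
    lam_below, driving small weights toward zero; magnitudes above theta
    leak through at the much smaller rate lam_above, leaving large,
    informative weights mostly untouched."""
    total = 0.0
    for w in weights:
        m = abs(w)
        if m < theta:
            total += lam_below * m
        else:
            total += lam_below * theta + lam_above * (m - theta)
    return total
```

Compare this with a plain ℓ1 penalty, which charges every weight at the same rate regardless of magnitude and therefore also shrinks the large weights the model wants to keep.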

  6. Visual Literacy and Biochemistry Learning: The role of external representations

    Directory of Open Access Journals (Sweden)

    V.J.S.V. Santos

    2011-04-01

Full Text Available Visual Literacy can be defined as people’s ability to understand, use, think, learn and express themselves through external representations (ER) in a given subject. This research aims to investigate the development of students’ abilities to read and interpret ERs in a Biochemistry graduate course at the Federal University of São João Del-Rei. Visual Literacy level was assessed using a questionnaire validated in a previous educational study. This diagnostic questionnaire was elaborated according to six visual abilities identified as essential for the study of metabolic pathways. The initial statistical analysis of the data collected in this study was carried out using ANOVA. The results showed that the questionnaire is adequate for the research and indicated that the level of Visual Literacy related to metabolic processes increased significantly as students progressed through the course. There was also an indication of possible interference in student performance from the cutoff score in the university selection process.

  7. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  8. The role of visual representation in physics learning: dynamic versus static visualization

    Science.gov (United States)

    Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini

    2017-11-01

This study aims to examine the role of visual representation in physics learning and to compare the learning outcomes of using dynamic versus static visualization media. The study was conducted as a quasi-experiment with a Pretest-Posttest Control Group Design. The samples of this research are students of six classes at a State Senior High School in Lampung Province. The experimental class received instruction using dynamic visualization and the control class used static visualization media. Both classes were given pre-tests and post-tests with the same instruments. Data were tested with N-gain analysis, a normality test, a homogeneity test and a mean difference test. The results showed that there was a significant increase in mean (N-gain) learning outcomes (p […] physical phenomena and requires long-term observation.
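The N-gain analysis mentioned here is conventionally the normalized gain, the fraction of the possible pre-to-post improvement actually achieved; a minimal illustration (the maximum score of 100 is an assumption for the example):

```python
def n_gain(pre, post, max_score=100.0):
    """Normalized gain: the achieved improvement (post - pre) as a
    fraction of the improvement still possible (max_score - pre)."""
    return (post - pre) / (max_score - pre)

# A class moving from a mean of 40 to 70 out of 100 has realised
# half of its possible improvement:
print(n_gain(40, 70))  # 0.5
```

This normalization lets groups with different pre-test means be compared on a common scale before running the mean-difference test.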

  9. Teaching with Concrete and Abstract Visual Representations: Effects on Students' Problem Solving, Problem Representations, and Learning Perceptions

    Science.gov (United States)

    Moreno, Roxana; Ozogul, Gamze; Reisslein, Martin

    2011-01-01

    In 3 experiments, we examined the effects of using concrete and/or abstract visual problem representations during instruction on students' problem-solving practice, near transfer, problem representations, and learning perceptions. In Experiments 1 and 2, novice students learned about electrical circuit analysis with an instructional program that…

  10. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...
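Modeling text convolutionally, as this record proposes, amounts to sliding learned filters along the token sequence instead of processing it recurrently. A toy sketch of the sliding-window computation with scalar features (in the real model each position would hold an embedding vector and there would be many filters, but the mechanics are the same):

```python
def conv1d(seq, kernel):
    """Valid 1-D convolution (cross-correlation, as is conventional in
    deep learning) of a sequence of scalar features with one filter."""
    k = len(kernel)
    return [sum(kernel[j] * seq[i + j] for j in range(k))
            for i in range(len(seq) - k + 1)]

# A width-2 difference filter responds wherever adjacent features change:
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```

Unlike a recurrent network, every window position here can be computed independently and in parallel, which is one motivation for convolutional text encoders.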

  11. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse representations…
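A building block shared by many of the sparse-recovery algorithms such a text covers is the soft-thresholding operator, the proximal step for an ℓ1 penalty; a minimal sketch:

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 penalty t*|x|: shrink x toward zero
    by t, and zero out anything whose magnitude is below the threshold.
    Applied elementwise inside iterative solvers such as ISTA."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

print([soft_threshold(v, 0.5) for v in [2.0, 0.3, -1.0]])  # [1.5, 0.0, -0.5]
```

Iterating a gradient step on the data-fit term followed by this shrinkage step is what drives small coefficients exactly to zero, producing the sparse codes used in recognition.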

  12. Exploring Middle School Students' Representational Competence in Science: Development and Verification of a Framework for Learning with Visual Representations

    Science.gov (United States)

    Tippett, Christine Diane

    Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence---an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations---is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning from visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy because it emphasizes distinct aspects of learning with representations that can be addressed though explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas. The third project was a mixed

  13. Constructing visual representations

    DEFF Research Database (Denmark)

    Huron, Samuel; Jansen, Yvonne; Carpendale, Sheelagh

    2014-01-01

The accessibility of infovis authoring tools to a wide audience has been identified as a major research challenge. A key task in the authoring process is the development of visual mappings. While the infovis community has long been deeply interested in finding effective visual mappings, comparatively little attention has been placed on how people construct visual mappings. In this paper, we present the results of a study designed to shed light on how people transform data into visual representations. We asked people to create, update and explain their own information visualizations using only tangible building blocks. We learned that all participants, most of whom had little experience in visualization authoring, were readily able to create and talk about their own visualizations. Based on our observations, we discuss participants’ actions during the development of their visual representations…

  14. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

With the explosive growth of visual data, it is particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. A core aspect of this research line is how to learn robust representations that better describe the data. In this thesis we study the problem of visual image and video understanding and specifically…

  15. Visual Vehicle Tracking Based on Deep Representation and Semisupervised Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2017-01-01

Full Text Available Discriminative tracking methods use binary classification to discriminate between the foreground and background and have achieved some useful results. However, the labeled training samples available to them are insufficient for accurate tracking. Hence, discriminative classifiers must use their own classification results to update themselves, which may lead to feedback-induced tracking drift. To overcome these problems, we propose a semisupervised tracking algorithm that uses deep representation and transfer learning. First, a 2D multilayer deep belief network is trained with a large amount of unlabeled samples, and the nonlinear mapping at the top of this network is extracted as the feature dictionary. Then, this feature dictionary is used to transfer-train and update a deep tracker. The positive samples for training are the tracked vehicles, and the negative samples are background images. Finally, a particle filter is used to estimate vehicle position. We demonstrate experimentally that our proposed vehicle tracking algorithm can effectively restrain drift while adapting to changes in vehicle appearance. Compared with similar algorithms, our method achieves a better tracking success rate and smaller average central-pixel errors.
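The final stage of such a pipeline, estimating position with a particle filter, can be sketched generically; the 1-D state, random-walk motion model, and noise level below are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def particle_filter_step(particles, measurement, noise=1.0):
    """One generic predict/update/resample cycle for a 1-D position:
    diffuse the particles (motion model), weight each by how well it
    explains the measurement, then resample in proportion to weight."""
    # Predict: random-walk motion model.
    particles = [p + random.gauss(0.0, noise) for p in particles]
    # Update: unnormalized Gaussian likelihood of the measurement.
    weights = [math.exp(-0.5 * ((p - measurement) / noise) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles with probability proportional to weight.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 100.0) for _ in range(500)]
for z in [20.0, 21.0, 22.0, 23.0]:  # noisy 1-D track of a moving vehicle
    particles = particle_filter_step(particles, z)
est = sum(particles) / len(particles)  # posterior mean tracks the vehicle
```

In the paper's setting the measurement likelihood would come from the deep tracker's response to each candidate window rather than a hand-written Gaussian, but the predict/weight/resample cycle is the same.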

  16. How learning might strengthen existing visual object representations in human object-selective cortex.

    Science.gov (United States)

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties.

  17. Incidental learning of probability information is differentially affected by the type of visual working memory representation.

    Science.gov (United States)

    van Lamsweerde, Amanda E; Beck, Melissa R

    2015-12-01

In this study, we investigated whether the ability to learn probability information is affected by the type of representation held in visual working memory. Across 4 experiments, participants detected changes to displays of coloured shapes. While participants detected changes in 1 dimension (e.g., colour), a feature from a second, nonchanging dimension (e.g., shape) predicted which object was most likely to change. In Experiments 1 and 3, items could be grouped by similarity in the changing dimension across items (e.g., colours and shapes were repeated in the display), while in Experiments 2 and 4 items could not be grouped by similarity (all features were unique). Probability information from the predictive dimension was learned and used to increase performance, but only when all of the features within a display were unique (Experiments 2 and 4). When it was possible to group by feature similarity in the changing dimension (e.g., 2 blue objects appeared within an array), participants were unable to learn probability information and use it to improve performance (Experiments 1 and 3). The results suggest that probability information can be learned in a dimension that is not explicitly task-relevant, but only when the probability information is represented with the changing dimension in visual working memory.

  18. The Effect of Using a Visual Representation Tool in a Teaching-Learning Sequence for Teaching Newton's Third Law

    Science.gov (United States)

    Savinainen, Antti; Mäkynen, Asko; Nieminen, Pasi; Viiri, Jouni

    2017-01-01

    This paper presents a research-based teaching-learning sequence (TLS) that focuses on the notion of interaction in teaching Newton's third law (N3 law) which is, as earlier studies have shown, a challenging topic for students to learn. The TLS made systematic use of a visual representation tool--an interaction diagram (ID)--highlighting…

  19. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    Science.gov (United States)

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.

  20. How Do Students Learn to See Concepts in Visualizations? Social Learning Mechanisms with Physical and Virtual Representations

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    STEM instruction often uses visual representations. To benefit from these, students need to understand how representations show domain-relevant concepts. Yet, this is difficult for students. Prior research shows that physical representations (objects that students manipulate by hand) and virtual representations (objects on a computer screen that…

  1. Picture this: The value of multiple visual representations for student learning of quantum concepts in general chemistry

    Science.gov (United States)

    Allen, Emily Christine

Mental models for scientific learning are often defined as "cognitive tools situated between experiments and theories" (Duschl & Grandy, 2012). In learning, these cognitive tools are used not only to take in new information but also to help solve problems in new contexts. Nancy Nersessian (2008) describes a mental model as being "[loosely] characterized as a representation of a system with interactive parts with representations of those interactions. Models can be qualitative, quantitative, and/or simulative (mental, physical, computational)" (p. 63). If the conceptual parts students use are inaccurate, the resulting model will not be useful. Students in college general chemistry courses are presented with multiple abstract topics and often struggle to fit these parts into complete models. This is especially true for topics founded on quantum concepts, such as atomic structure and molecular bonding as taught in college general chemistry. The objectives of this study focused on how students use visual tools introduced during instruction to reason about atomic and molecular structure, what misconceptions may be associated with these visual tools, and how visual modeling skills may be taught to support students' use of visual tools for reasoning. The research questions for this study follow from Gilbert's (2008) theory that experts use multiple representations when reasoning and modeling a system, and Kozma and Russell's (2005) theory of representational competence levels. This study finds that as students developed greater command of their understanding of abstract quantum concepts, they spontaneously provided additional representations to describe their more sophisticated models of atomic and molecular structure during interviews. 
This suggests that when visual modeling with multiple representations is taught, along with the limitations of the representations, it can assist students in the development of models for reasoning about

  2. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning
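The core of CT learning as described here, a plain Hebbian rule combined with the spatial overlap between successively shifted input patterns, can be shown in a toy form; this is a schematic of the principle with made-up patterns and rates, not VisNet itself:

```python
def hebbian_ct_demo():
    """Toy continuous-transformation (CT) learning: one output unit with
    a normalized Hebbian update and no memory trace. Because consecutive
    shifted patterns share active inputs, the unit strengthened by one
    view already responds to the next, so its selectivity spreads across
    all shifted views and its response becomes position-invariant."""
    n, rate = 6, 0.5
    w = [1.0 / n] * n
    # A 3-pixel 'hand-object' pattern shifted one retinal position at a
    # time; consecutive patterns overlap in two active inputs.
    patterns = [[1, 1, 1, 0, 0, 0],
                [0, 1, 1, 1, 0, 0],
                [0, 0, 1, 1, 1, 0]]
    for x in patterns:
        y = sum(wi * xi for wi, xi in zip(w, x))          # postsynaptic response
        w = [wi + rate * y * xi for wi, xi in zip(w, x)]  # Hebbian update
        norm = sum(wi * wi for wi in w) ** 0.5
        w = [wi / norm for wi in w]                       # keep weights bounded
    # After training, the unit responds strongly to every shifted view:
    return [round(sum(wi * xi for wi, xi in zip(w, x)), 2) for x in patterns]

print(hebbian_ct_demo())
```

Note that the update uses only the current input and output, with no trace of recent activity, which is the contrast the abstract draws with trace-based proposals.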

  3. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    Science.gov (United States)

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning

  4. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Directory of Open Access Journals (Sweden)

    Jannis Born

Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. 
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior

  5. What recent research on diagrams suggests about learning with rather than learning from visual representations in science

    Science.gov (United States)

    Tippett, Christine D.

    2016-03-01

    The move from learning science from representations to learning science with representations has many potential and undocumented complexities. This thematic analysis partially explores the trends of representational uses in science instruction, examining 80 research studies on diagram use in science. These studies, published during 2000-2014, were located through searches of journal databases and books. Open coding of the studies identified 13 themes, 6 of which were identified in at least 10% of the studies: eliciting mental models, classroom-based research, multimedia principles, teaching and learning strategies, representational competence, and student agency. A shift in emphasis on learning with rather than learning from representations was evident across the three 5-year intervals considered, mirroring a pedagogical shift from science instruction as transmission of information to constructivist approaches in which learners actively negotiate understanding and construct knowledge. The themes and topics in recent research highlight areas of active interest and reveal gaps that may prove fruitful for further research, including classroom-based studies, the role of prior knowledge, and the use of eye-tracking. The results of the research included in this thematic review of the 2000-2014 literature suggest that both interpreting and constructing representations can lead to better understanding of science concepts.

  6. Making Connections among Multiple Visual Representations: How Do Sense-Making Skills and Perceptual Fluency Relate to Learning of Chemistry Knowledge?

    Science.gov (United States)

    Rau, Martina A.

    2018-01-01

    To learn content knowledge in science, technology, engineering, and math domains, students need to make connections among visual representations. This article considers two kinds of connection-making skills: (1) "sense-making skills" that allow students to verbally explain mappings among representations and (2) "perceptual…

  7. Does Sleep Facilitate the Consolidation of Allocentric or Egocentric Representations of Implicitly Learned Visual-Motor Sequence Learning?

    Science.gov (United States)

    Viczko, Jeremy; Sergeeva, Valya; Ray, Laura B.; Owen, Adrian M.; Fogel, Stuart M.

    2018-01-01

    Sleep facilitates the consolidation (i.e., enhancement) of simple, explicit (i.e., conscious) motor sequence learning (MSL). MSL can be dissociated into egocentric (i.e., motor) or allocentric (i.e., spatial) frames of reference. The consolidation of the allocentric memory representation is sleep-dependent, whereas the egocentric consolidation…

  8. Transformations in the Visual Representation of a Figural Pattern

    Science.gov (United States)

    Montenegro, Paula; Costa, Cecília; Lopes, Bernardino

    2018-01-01

    Multiple representations of a given mathematical object/concept are one of the biggest difficulties encountered by students. The aim of this study is to investigate the impact of the use of visual representations in teaching and learning algebra. In this paper, we analyze the transformations from and to visual representations that were performed…

  9. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover
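A gated recurrent unit of the kind stacked in such models can be sketched as a single update step; the dimensions, weights, and "phoneme" inputs below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x, P):
    """One GRU update: gates decide how much of the state to keep vs rewrite."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h)               # update gate
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h)               # reset gate
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8   # assumed phoneme-embedding and hidden sizes
P = {k: rng.normal(0, 0.1, size=(n_hid, n_in if k.startswith("W") else n_hid))
     for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

h = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):   # a 10-step stand-in "phoneme" sequence
    h = gru_step(h, x, P)
# the final hidden state would feed a linear layer predicting visual features
```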

  10. Visual representation of spatiotemporal structure

    Science.gov (United States)

    Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.

    1998-07-01

    The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies on ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location are not processed and represented independently of each other. This suggests a unified representation on an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

  11. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
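The reweighting account summarized above can be illustrated with a minimal delta-rule model: noisy sensory channels feed a linear decision unit, and training reweights the channels so the readout comes to rely on the informative one. All sizes, noise levels, and learning rates below are illustrative assumptions, not parameters from the models reviewed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 8
signal = np.zeros(n_channels)
signal[0] = 1.0                      # only channel 0 carries task-relevant signal

w = np.zeros(n_channels)             # readout weights, learned by a delta rule
lr = 0.05
for _ in range(2000):
    label = rng.choice([-1.0, 1.0])                        # stimulus category
    x = label * signal + rng.normal(0.0, 0.5, n_channels)  # noisy channel responses
    y = np.tanh(w @ x)                                     # decision variable
    w += lr * (label - y) * x                              # error-driven reweighting
```

After training, the weight on the informative channel dominates: performance improves not because the sensory representation changed, but because its readout was reweighted.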

  12. Facilitating Mathematical Practices through Visual Representations

    Science.gov (United States)

    Murata, Aki; Stewart, Chana

    2017-01-01

    Effective use of mathematical representation is key to supporting student learning. In "Principles to Actions: Ensuring Mathematical Success for All" (NCTM 2014), "use and connect mathematical representations" is one of the effective Mathematics Teaching Practices. By using different representations, students examine concepts…

  13. Object representations in visual memory: evidence from visual illusions.

    Science.gov (United States)

    Ben-Shalom, Asaf; Ganel, Tzvi

    2012-07-26

    Human visual memory is considered to contain different levels of object representations. Representations in visual working memory (VWM) are thought to contain relatively elaborated information about object structure. Conversely, representations in iconic memory are thought to be more perceptual in nature. In four experiments, we tested the effects of two different categories of visual illusions on representations in VWM and in iconic memory. Unlike VWM that was affected by both types of illusions, iconic memory was immune to the effects of within-object contextual illusions and was affected only by illusions driven by between-objects contextual properties. These results show that iconic and visual working memory contain dissociable representations of object shape. These findings suggest that the global properties of the visual scene are processed prior to the processing of specific elements.

  14. Scientific Representation and Science Learning

    Science.gov (United States)

    Matta, Corrado

    2014-01-01

    In this article I examine three examples of philosophical theories of scientific representation with the aim of assessing which of these is a good candidate for a philosophical theory of scientific representation in science learning. The three candidate theories are Giere's intentional approach, Suárez's inferential approach and Lynch and…

  15. Distorted representation in visual tourism research

    DEFF Research Database (Denmark)

    Jensen, Martin Trandberg

    2016-01-01

    Tourism research has recently been informed by non-representational theories to highlight the socio-material, embodied and heterogeneous composition of tourist experiences. These advances have contributed to further reflexivity and called for novel ways to animate representations. This paper shows how photographic materialities, performativities and sensations contribute to new tourism knowledges. While highlighting the potential of distorted representation, the paper posits a cautionary note with regard to the influential role of academic journals in determining the qualities of visual data. The paper exemplifies distorted representation through three impressionistic tales derived from ethnographic research on the European rail travel phenomenon: interrail.

  16. Accurate metacognition for visual sensory memory representations

    NARCIS (Netherlands)

    Vandenbroucke, A.R.E.; Sligte, I.G.; Barrett, A.B.; Seth, A.K.; Fahrenfort, J.J.; Lamme, V.A.F.

    2014-01-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the

  17. Collective form generation through visual participatory representation

    DEFF Research Database (Denmark)

    Day, Dennis; Sharma, Nishant; Punekar, Ravi

    2012-01-01

    In order to inspire and inform designers with users' data from participatory research, it may be important to represent data in a visual format that is easily understandable to the designers. For a case study in vehicle design, the paper outlines visual representation of data and the use...

  18. Adaptive representations for reinforcement learning

    NARCIS (Netherlands)

    Whiteson, S.

    2010-01-01

    This book presents new algorithms for reinforcement learning, a form of machine learning in which an autonomous agent seeks a control policy for a sequential decision task. Since current methods typically rely on manually designed solution representations, agents that automatically adapt their own

  19. Visual Learning in Application of Integration

    Science.gov (United States)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

    Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of texts, pictures, graphs and animations to hold the attention of learners in order to learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to get feedback on the visual representation and students' attitudes towards using visual representation as a learning tool. The questionnaire consists of three sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.

  20. Visual representations of Iranian transgenders.

    Science.gov (United States)

    Shakerifar, Elhum

    2011-01-01

    Transsexuality in Iran has gained much attention and media coverage in the past few years, particularly in its questionable depiction as a permitted loophole for homosexuality, which is prohibited under Iran's Islamic-inspired legal system. Of course, attention in the West is also encouraged by the “shock” that sex change is available in Iran, a country that Western media and society delights in portraying as monolithically repressive. As a result, Iranian filmmakers inevitably have their own agendas, which are unsurprisingly brought into the filmmaking process—from a desire to sell a product that will appeal to the Western market, to films that endorse specific socio-political agendas. This paper is an attempt to situate sex change and representations of sex change in Iran within a wider theoretical framework than the frequently reiterated conflation with homosexuality, and to open and engage with a wider debate concerning transsexuality in Iran, as well as to specifically analyze the representation of transsexuality, in view of its current prominent presence in media.

  1. Dictionary learning in visual computing

    CERN Document Server

    Zhang, Qiang

    2015-01-01

    The last few years have witnessed fast development on dictionary learning approaches for a set of visual computing tasks, largely due to their utilization in developing new techniques based on sparse representation. Compared with conventional techniques employing manually defined dictionaries, such as Fourier Transform and Wavelet Transform, dictionary learning aims at obtaining a dictionary adaptively from the data so as to support optimal sparse representation of the data. In contrast to conventional clustering algorithms like K-means, where a data point is associated with only one cluster c
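The contrast drawn here between K-means-style assignment (one atom per data point) and sparse representation (a few atoms combined) can be sketched with a greedy matching-pursuit over a fixed orthonormal dictionary. This is an illustration of the representational difference only, not a dictionary-learning algorithm such as K-SVD; the dictionary and test signal are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Orthonormal dictionary: columns are atoms (a stand-in for a learned dictionary).
D, _ = np.linalg.qr(rng.normal(size=(16, 16)))
signal = D[:, 1] + 0.5 * D[:, 3]    # a signal built from two atoms

def greedy_pursuit(x, n_atoms):
    """Matching-pursuit-style sparse coding: pick the best atom, subtract, repeat."""
    residual, code = x.copy(), np.zeros(D.shape[1])
    for _ in range(n_atoms):
        k = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        c = D[:, k] @ residual
        code[k] += c
        residual -= c * D[:, k]
    return code, np.linalg.norm(residual)

_, err_1atom = greedy_pursuit(signal, 1)  # K-means-like: one atom per point
_, err_2atom = greedy_pursuit(signal, 2)  # sparse code with two atoms
```

Allowing two atoms reconstructs the signal essentially exactly, while the single-atom (cluster-style) code leaves a residual: the gain that motivates sparse representation over hard assignment.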

  2. Advances in visual representation of molecular potentials.

    Science.gov (United States)

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling study and rational drug design are introduced. The visual representation methods provide us with detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, including their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results by the higher level quantum chemical method. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those by using the traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  3. Visual word representation in the brain

    NARCIS (Netherlands)

    Ramakrishnan, K.; Groen, I.; Scholte, S.; Smeulders, A.; Ghebreab, S.

    2013-01-01

    The human visual system is thought to use features of intermediate complexity for scene representation. How the brain computationally represents intermediate features is unclear, however. To study this, we tested the Bag of Words (BoW) model in computer vision against human brain activity. This

  4. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  5. Hierarchical Representation Learning for Kinship Verification.

    Science.gov (United States)

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that facilitate kinship-cues. The visual stimuli presented to the participants determine their ability to recognize kin relationship using the whole face as well as specific facial regions. The effect of participant gender and age and kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is utilized to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed as filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and contractive regularization penalty. A compact representation of facial images of kin is extracted as an output from the learned model and a multi-layer neural network is utilized to verify the kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields the state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product of likelihood ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
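The discriminability index d' used in the human study above is computed from hit and false-alarm rates as the difference of their z-transforms. A stdlib sketch (the rates below are made-up numbers, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(H) - z(FA), the signal-detection discriminability index."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# illustrative rates: 84% hits, 16% false alarms
d = d_prime(0.84, 0.16)
```

A chance-level observer (H = FA) gives d' = 0; higher values mean the two stimulus classes are better separated.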

  6. Developing a Local Instruction Theory for Learning the Concept of Angle through Visual Field Activities and Spatial Representations

    NARCIS (Netherlands)

    Bustang, B.; Zulkardi, Z.; Darmawijoyo, D.; Dolk, M.L.A.M.; van Eerde, H.A.A.

    2013-01-01

    This paper reports a study on designing and testing an instructional sequence for the teaching and learning of the concept of angle in Indonesian primary schools. The study’s context is employing the current reform movement adopting Pendidikan Matematika Realistik Indonesia (an Indonesian version of

  7. It's Not a Math Lesson--We're Learning to Draw! Teachers' Use of Visual Representations in Instructing Word Problem Solving in Sixth Grade of Elementary School

    Science.gov (United States)

    Boonen, Anton J. H.; Reed, Helen C.; Schoonenboom, Judith; Jolles, Jelle

    2016-01-01

    Non-routine word problem solving is an essential feature of the mathematical development of elementary school students worldwide. Many students experience difficulties in solving these problems due to erroneous problem comprehension. These difficulties could be alleviated by instructing students how to use visual representations that clarify the…

  8. Learning Visual Basic NET

    CERN Document Server

    Liberty, Jesse

    2009-01-01

    Learning Visual Basic .NET is a complete introduction to VB.NET and object-oriented programming. By using hundreds of examples, this book demonstrates how to develop various kinds of applications--including those that work with databases--and web services. Learning Visual Basic .NET will help you build a solid foundation in .NET.

  9. The Nature of Experience Determines Object Representations in the Visual System

    Science.gov (United States)

    Wong, Yetta K.; Folstein, Jonathan R.; Gauthier, Isabel

    2012-01-01

    Visual perceptual learning (PL) and perceptual expertise (PE) traditionally lead to different training effects and recruit different brain areas, but reasons for these differences are largely unknown. Here, we tested how the learning history influences visual object representations. Two groups were trained with tasks typically used in PL or PE…

  10. Cross-cultural understanding through visual representation

    Directory of Open Access Journals (Sweden)

    Kristina Beckman

    2011-04-01

    Full Text Available This article analyzes international students’ drawings of their home countries’ essay assignments. These English as a Second Language (ESL students often have difficulty in meeting the local demands of our Writing Program, which centers on argumentative writing with thesis and support. Any part of an essay deemed irrelevant is censured as “off topic”; some students see this structure as too direct or even impolite. While not all students found visual representation easy, the drawings reveal some basic assumptions about writing embodied in their native cultures’ assignments. We discuss the drawings first for visual rhetorical content, then in the students’ own terms. Last, we consider how our own pedagogy has been shaped.

  11. Acoustic Tactile Representation of Visual Information

    Science.gov (United States)

    Silva, Pubudu Madhawa

    Our goal is to explore the use of hearing and touch to convey graphical and pictorial information to visually impaired people. Our focus is on dynamic, interactive display of visual information using existing, widely available devices, such as smart phones and tablets with touch sensitive screens. We propose a new approach for acoustic-tactile representation of visual signals that can be implemented on a touch screen and allows the user to actively explore a two-dimensional layout consisting of one or more objects with a finger or a stylus while listening to auditory feedback via stereo headphones. The proposed approach is acoustic-tactile because sound is used as the primary source of information for object localization and identification, while touch is used for pointing and kinesthetic feedback. A static overlay of raised-dot tactile patterns can also be added. A key distinguishing feature of the proposed approach is the use of spatial sound (directional and distance cues) to facilitate the active exploration of the layout. We consider a variety of configurations for acoustic-tactile rendering of object size, shape, identity, and location, as well as for the overall perception of simple layouts and scenes. While our primary goal is to explore the fundamental capabilities and limitations of representing visual information in acoustic-tactile form, we also consider a number of relatively simple configurations that can be tied to specific applications. In particular, we consider a simple scene layout consisting of objects in a linear arrangement, each with a distinct tapping sound, which we compare to a "virtual cane." We will also present a configuration that can convey a "Venn diagram." We present systematic subjective experiments to evaluate the effectiveness of the proposed display for shape perception, object identification and localization, and 2-D layout perception, as well as the applications. Our experiments were conducted with visually blocked

  12. Geometric Hypergraph Learning for Visual Tracking

    OpenAIRE

    Du, Dawei; Qi, Honggang; Wen, Longyin; Tian, Qi; Huang, Qingming; Lyu, Siwei

    2016-01-01

    Graph-based representation is widely used in the visual tracking field by finding correct correspondences between target parts in consecutive frames. However, most graph-based trackers consider pairwise geometric relations between local parts. They do not make full use of the target's intrinsic structure, thereby making the representation easily disturbed by errors in pairwise affinities when large deformation and occlusion occur. In this paper, we propose a geometric hypergraph learning based tr...

  13. How initial representations shape coupled learning processes

    DEFF Research Database (Denmark)

    Puranam, Phanish; Swamy, M.

    2016-01-01

    Coupled learning processes, in which specialists from different domains learn how to make interdependent choices among alternatives, are common in organizations. We explore the role played by initial representations held by the learners in coupled learning processes using a formal agent-based model. We find that initial representations have important consequences for the success of the coupled learning process, particularly when communication is constrained and individual rates of learning are high. Under these conditions, an initial representation that generates incorrect beliefs can outperform one that does not discriminate among alternatives, or even a mix of correct and incorrect representations among the learners. We draw implications for the design of coupled learning processes in organizations. © 2016 INFORMS.

  14. Separate visual representations for perception and for visually guided behavior

    Science.gov (United States)

    Bridgeman, Bruce

    1989-01-01

    Converging evidence from several sources indicates that two distinct representations of visual space mediate perception and visually guided behavior, respectively. The two maps of visual space follow different rules; spatial values in either one can be biased without affecting the other. Ordinarily the two maps give equivalent responses because both are veridically in register with the world; special techniques are required to pull them apart. One such technique is saccadic suppression: small target displacements during saccadic eye movements are not perceived, though the displacements can change eye movements or pointing to the target. A second way to separate cognitive and motor-oriented maps is with induced motion: a slowly moving frame will make a fixed target appear to drift in the opposite direction, while motor behavior toward the target is unchanged. The same result occurs with stroboscopic induced motion, where the frame jumps abruptly and the target seems to jump in the opposite direction. A third method of separating cognitive and motor maps, requiring no motion of target, background or eye, is the Roelofs effect: a target surrounded by an off-center rectangular frame will appear to be off-center in the direction opposite the frame. Again the effect influences perception, but in half of the subjects it does not influence pointing to the target. This experiment also reveals more characteristics of the maps and their interactions with one another: the motor map apparently has little or no memory, and must be fed from the biased cognitive map if an enforced delay occurs between stimulus presentation and motor response. In designing spatial displays, the results mean that what you see isn't necessarily what you get. Displays must be designed with either perception or visually guided behavior in mind.

  15. Spontaneously emerging cortical representations of visual attributes

    Science.gov (United States)

    Kenet, Tal; Bibitchkov, Dmitri; Tsodyks, Misha; Grinvald, Amiram; Arieli, Amos

    2003-10-01

    Spontaneous cortical activity (ongoing activity in the absence of intentional sensory input) has been studied extensively, using methods ranging from EEG (electroencephalography), through voltage-sensitive dye imaging, down to recordings from single neurons. Ongoing cortical activity has been shown to play a critical role in development, and must also be essential for processing sensory perception, because it modulates stimulus-evoked activity, and is correlated with behaviour. Yet its role in the processing of external information and its relationship to internal representations of sensory attributes remains unknown. Using voltage-sensitive dye imaging, we previously established a close link between ongoing activity in the visual cortex of anaesthetized cats and the spontaneous firing of a single neuron. Here we report that such activity encompasses a set of dynamically switching cortical states, many of which correspond closely to orientation maps. When such an orientation state emerged spontaneously, it spanned several hypercolumns and was often followed by a state corresponding to a proximal orientation. We suggest that dynamically switching cortical states could represent the brain's internal context, and therefore reflect or influence memory, perception and behaviour.

  16. Learning Science Through Visualization

    Science.gov (United States)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments: (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks (ii) decoding and deconstructing visualizations of an object falling under gravity (iii) the role of directed instruction to elicit alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project - quantitative analysis of student performance on written, graded assessments (tests and quizzes); qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  17. Fundamental Visual Representations of Social Cognition in ASD

    Science.gov (United States)

    2016-12-01

    AWARD NUMBER: W81XWH-14-1-0565. TITLE: Fundamental Visual Representations of Social Cognition in ASD. PRINCIPAL INVESTIGATOR: John Foxe, Ph.D.

  18. Numerical Magnitude Representations Influence Arithmetic Learning

    Science.gov (United States)

    Booth, Julie L.; Siegler, Robert S.

    2008-01-01

    This study examined whether the quality of first graders' (mean age = 7.2 years) numerical magnitude representations is correlated with, predictive of, and causally related to their arithmetic learning. The children's pretest numerical magnitude representations were found to be correlated with their pretest arithmetic knowledge and to be…

  19. Learning Multimodal Deep Representations for Crowd Anomaly Event Detection

    Directory of Open Access Journals (Sweden)

    Shaonian Huang

    2018-01-01

    Full Text Available Anomaly event detection in crowd scenes is extremely important; however, the majority of existing studies merely use hand-crafted features to detect anomalies. In this study, a novel unsupervised deep learning framework is proposed to detect anomaly events in crowded scenes. Specifically, low-level visual features, energy features, and motion map features are simultaneously extracted based on spatiotemporal energy measurements. Three convolutional restricted Boltzmann machines are trained to model the mid-level feature representation of normal patterns. Then a multimodal fusion scheme is utilized to learn the deep representation of crowd patterns. Based on the learned deep representation, a one-class support vector machine model is used to detect anomaly events. The proposed method is evaluated using two available public datasets and compared with state-of-the-art methods. The experimental results show its competitive performance for anomaly event detection in video surveillance.
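The final stage described above (fitting a one-class model to normal-pattern features and flagging outliers) can be sketched with a simple centroid-and-radius detector standing in for the one-class SVM; the "features" here are random stand-ins for the learned deep representation, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the learned deep representations of normal crowd patterns.
normal_feats = rng.normal(0.0, 1.0, size=(200, 2))

# "Train" on normal data only: centre on it, radius = largest training distance.
centre = normal_feats.mean(axis=0)
radius = np.max(np.linalg.norm(normal_feats - centre, axis=1))

def is_anomaly(x):
    """Flag a feature vector that falls outside the region of normal patterns."""
    return np.linalg.norm(x - centre) > radius

flag_far  = is_anomaly(np.array([6.0, 6.0]))   # far from the normal cluster
flag_near = is_anomaly(np.array([0.1, -0.2]))  # a typical point
```

Like the one-class SVM in the abstract, the detector is trained on normal patterns only and treats departures from that region as anomaly events; a real system would use the kernelized decision boundary rather than this crude radius.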

  20. Rich Representations with Exposed Semantics for Deep Visual Reasoning

    Science.gov (United States)

    2016-06-01

    TITLE AND SUBTITLE: Rich Representations with Exposed Semantics for Deep Visual Reasoning. …provides critical evidence of a relationship between visual recognition, associative processing, and episodic memory, and provides important clues into the neural mechanism…

  1. Expertise Reversal for Iconic Representations in Science Visualizations

    Science.gov (United States)

    Homer, Bruce D.; Plass, Jan L.

    2010-01-01

    The influence of prior knowledge and cognitive development on the effectiveness of iconic representations in science visualizations was examined. Middle and high school students (N = 186) were given narrated visualizations of two chemistry topics: Kinetic Molecular Theory (Day 1) and Ideal Gas Laws (Day 2). For half of the visualizations, iconic…

  2. Visual Representations of the Water Cycle in Science Textbooks

    Science.gov (United States)

    Vinisha, K.; Ramadas, J.

    2013-01-01

    Visual representations, including photographs, sketches and schematic diagrams, are a valuable yet often neglected aspect of textbooks. Visual means of communication are particularly helpful in introducing abstract concepts in science. For effective communication, visuals and text need to be appropriately integrated within the textbook. This study…

  3. Interactions between visual working memory representations.

    Science.gov (United States)

    Bae, Gi-Yeul; Luck, Steven J

    2017-11-01

    We investigated whether the representations of different objects are maintained independently in working memory or interact with each other. Observers were shown two sequentially presented orientations and required to reproduce each orientation after a delay. The sequential presentation minimized perceptual interactions so that we could isolate interactions between memory representations per se. We found that similar orientations were repelled from each other whereas dissimilar orientations were attracted to each other. In addition, when one of the items was given greater attentional priority by means of a cue, the representation of the high-priority item was not influenced very much by the orientation of the low-priority item, but the representation of the low-priority item was strongly influenced by the orientation of the high-priority item. This indicates that attention modulates the interactions between working memory representations. In addition, errors in the reported orientations of the two objects were positively correlated under some conditions, suggesting that representations of distinct objects may become grouped together in memory. Together, these results demonstrate that working-memory representations are not independent but instead interact with each other in a manner that depends on attentional priority.

  4. Educating "The Simpsons": Teaching Queer Representations in Contemporary Visual Media

    Science.gov (United States)

    Padva, Gilad

    2008-01-01

    This article analyzes queer representation in contemporary visual media and examines how the episode "Homer's Phobia" from Matt Groening's animation series "The Simpsons" can be used to deconstruct hetero- and homo-sexual codes of behavior, socialization, articulation, representation and visibility. The analysis is contextualized in the…

  5. Complex Visual Data Analysis, Uncertainty, and Representation

    National Research Council Canada - National Science Library

    Schunn, Christian D; Saner, Lelyn D; Kirschenbaum, Susan K; Trafton, J. G; Littleton, Eliza B

    2007-01-01

    ... (weather forecasting, submarine target motion analysis, and fMRI data analysis). Internal spatial representations are coded from spontaneous gestures made during cued-recall summaries of problem solving activities...

  6. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
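    A minimal sketch of the pRF idea underlying such measurements: a pRF is modeled as a 2-D Gaussian over visual space, and the predicted response to a stimulus is the pRF weight falling under the stimulus aperture. The grid extents, pRF position, and size below are illustrative values, not parameters from the study.

```python
import numpy as np

# Visual-space grid in degrees of visual angle (illustrative extents).
x, y = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))

def prf(x0, y0, size):
    # 2-D Gaussian population receptive field centered at (x0, y0).
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * size ** 2))

def predicted_response(prf_map, stimulus_mask):
    # Response is the summed pRF weight under the stimulus aperture.
    return (prf_map * stimulus_mask).sum()

bar_left = x < -2                         # bar covering the left visual field
bar_right = x > 2                         # bar covering the right visual field
g = prf(x0=-5.0, y0=0.0, size=2.0)        # pRF preferring the left field
print(predicted_response(g, bar_left) > predicted_response(g, bar_right))
```

Fitting such a model to fMRI time courses (and observing how the best-fitting `x0`, `y0`, `size` shift with stimulus motion) is the essence of the pRF approach described above.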

  7. Improving of Junior High School Visual Thinking Representation Ability in Mathematical Problem Solving by CTL

    Directory of Open Access Journals (Sweden)

    Edy Surya

    2013-01-01

    Full Text Available The students' difficulties identified were in understanding the problem, drawing diagrams, reading charts correctly, formal conceptual mathematical understanding, and mathematical problem solving. An appropriate problem representation is the basic way to understand the problem itself and to make a plan to solve it. This research used an experimental classroom design with a pretest-posttest control group in order to increase visual thinking representation ability in mathematical problem solving through a contextual learning approach. The research instruments were a test, observation, and interviews. The contextual approach increased mathematical representation ability more than conventional approaches for students in the high, medium, and low initial categories. Keywords: Visual Thinking Representation, Mathematical Problem Solving, Contextual Teaching Learning Approach DOI: http://dx.doi.org/10.22342/jme.4.1.568.113-126

  8. The epistemic representation: visual production and communication of scientific knowledge.

    Directory of Open Access Journals (Sweden)

    Francisco López Cantos

    2015-03-01

    Full Text Available Despite their great influence on the history of science, visual representations attracted only marginal interest until very recently and were often regarded as simple aids for illustration or scientific demonstration. However, it has been shown that visualization is an integral element of reasoning and a highly effective and common heuristic strategy in the scientific community, and that studying the conditions of visual production and communication is essential to understanding the development of scientific knowledge. In this paper we deal with the nature of the various forms of visual representation of knowledge that have appeared throughout the history of science, taking as a starting point the illustrated monumental works and three-dimensional models that began to develop within the scientific community around the fifteenth century. The main thesis of this paper is that all scientific visual representations have common elements that allow us to approach them through their epistemic nature and their heuristic and communicative dimensions.

  9. Building Program Vector Representations for Deep Learning

    OpenAIRE

    Mou, Lili; Li, Ge; Liu, Yuxuan; Peng, Hao; Jin, Zhi; Xu, Yan; Zhang, Lu

    2014-01-01

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Advantages of deep learning include the ability to capture highly complicated features, weak involvement of human engineering, etc. However, it is still virtually impossible to use deep learning to analyze programs since deep architectures cannot be trained effectively with pure back propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, whi...

  10. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  11. Errors of Students Learning With React Strategy in Solving the Problems of Mathematical Representation Ability

    Directory of Open Access Journals (Sweden)

    Delsika Pramata Sari

    2017-06-01

    Full Text Available The purpose of this study was to investigate the errors made by students learning with the REACT strategy and with traditional learning in solving problems of mathematical representation ability. The study used a quasi-experimental design with a static-group comparison. The subjects were 47 eighth-grade students of a junior high school in Bandung, divided into two samples. The instrument used was a test measuring students' mathematical representation ability, with a reliability coefficient of 0.56. For students in both the REACT-strategy and traditional-learning groups, the most prominent errors in mathematical representation ability were on the indicator of solving problems involving arithmetic symbols (symbolic representation). In addition, many students in traditional learning also made errors on the indicator of making an image of a real-world situation to clarify the problem and facilitate its completion (visual representation).

  12. Multimodal representations in collaborative history learning

    NARCIS (Netherlands)

    Prangsma, M.E.

    2007-01-01

    This dissertation focuses on the question: How does making and connecting different types of multimodal representations affect the collaborative learning process and the acquisition of a chronological frame of reference in 12 to 14-year olds in pre vocational education? A chronological frame of

  13. Visual perception and verbal descriptions as sources for generating mental representations: Evidence from representational neglect.

    Science.gov (United States)

    Denis, Michel; Beschin, Nicoletta; Logie, Robert H; Della Sala, Sergio

    2002-03-01

    In the majority of investigations of representational neglect, patients are asked to report information derived from long-term visual knowledge. In contrast, studies of perceptual neglect involve reporting the contents of relatively novel scenes in the immediate environment. The present study aimed to establish how representational neglect might affect (a) immediate recall of recently perceived, novel visual layouts, and (b) immediate recall of novel layouts presented only as auditory verbal descriptions. These conditions were contrasted with reports from visual perception and a test of immediate recall of verbal material. Data were obtained from 11 neglect patients (9 with representational neglect), 6 right hemisphere lesion control patients with no evidence of neglect, and 15 healthy controls. In the perception, memory following perception, and memory following layout description conditions, the neglect patients showed poorer report of items depicted or described on the left than on the right of each layout. The lateralised error pattern was not evident in the non-neglect patients or healthy controls, and there was no difference among the three groups on immediate verbal memory. One patient showed pure representational neglect, with ceiling performance in the perception condition, but with lateralised errors for memory following perception or following verbal description. Overall, the results indicate that representational neglect does not depend on the presence of perceptual neglect, that visual perception and visual mental representations are less closely linked than has been thought hitherto, and that visuospatial mental representations have similar functional characteristics whether they are derived from visual perception or from auditory linguistic descriptive inputs.

  14. Ambiguous science and the visual representation of the real

    Science.gov (United States)

    Newbold, Curtis Robert

    The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in visual representations of science inevitably creates ambiguous meanings. As a means to improve visual literacy in the sciences, I call awareness to the ubiquity of visual ambiguity and its importance and relevance in scientific discourse. To do this, I conduct a literature review that spans interdisciplinary research in communication, science, art, and rhetoric. Furthermore, I create a paradoxically ambiguous taxonomy, which functions to exploit the nuances of visual ambiguities and their role in scientific communication. I then extrapolate the taxonomy of visual ambiguity and from it develop an ambiguous, rhetorical heuristic, the Tetradic Model of Visual Ambiguity. The Tetradic Model is applied to a case example of a scientific image as a demonstration of how scientific communicators may increase their awareness of the epistemological effects of ambiguity in the visual representations of science. I conclude by demonstrating how scientific communicators may make productive use of visual ambiguity, even in communications of objective science, and I argue how doing so strengthens scientific communicators' visual literacy skills and their ability to communicate more ethically and effectively.

  15. Transformation-invariant visual representations in self-organizing spiking neural networks.

    Science.gov (United States)

    Evans, Benjamin D; Stringer, Simon M

    2012-01-01

    The ventral visual pathway achieves object and face recognition by building transformation-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transformation-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transformation-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP) where the change in synaptic strength is dependent on the relative times of the spikes emitted by the presynaptic and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using a STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.
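    A minimal sketch of the pair-based STDP rule this abstract refers to: the sign and magnitude of the weight change depend on the relative timing of pre- and postsynaptic spikes. The amplitudes and time constants below are illustrative placeholders, not the study's parameters.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post fires before (or with) pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)   # causal pairing strengthens
w += stdp_dw(t_pre=30.0, t_post=25.0)   # anti-causal pairing weakens
print(round(w, 4))  # 0.4984
```

In the full spiking model this update would be applied to conductance-based synapses over many spike pairs, but the timing dependence is the core of the rule.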

  16. Transform-invariant visual representations in self-organizing spiking neural networks

    Directory of Open Access Journals (Sweden)

    Benjamin eEvans

    2012-07-01

    Full Text Available The ventral visual pathway achieves object and face recognition by building transform-invariant representations from elementary visual features. In previous computer simulation studies with rate-coded neural networks, the development of transform-invariant representations has been demonstrated using either of two biologically plausible learning mechanisms, Trace learning and Continuous Transformation (CT) learning. However, it has not previously been investigated how transform-invariant representations may be learned in a more biologically accurate spiking neural network. A key issue is how the synaptic connection strengths in such a spiking network might self-organize through Spike-Time Dependent Plasticity (STDP), where the change in synaptic strength is dependent on the relative times of the spikes emitted by the pre- and postsynaptic neurons rather than simply correlated activity driving changes in synaptic efficacy. Here we present simulations with conductance-based integrate-and-fire (IF) neurons using an STDP learning rule to address these gaps in our understanding. It is demonstrated that with the appropriate selection of model parameters and training regime, the spiking network model can utilize either Trace-like or CT-like learning mechanisms to achieve transform-invariant representations.

  17. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    Science.gov (United States)

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  18. Stable statistical representations facilitate visual search.

    Science.gov (United States)

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  19. Are baboons learning "orthographic" representations? Probably not.

    Directory of Open Access Journals (Sweden)

    Maja Linke

    Full Text Available The ability of baboons (Papio papio) to distinguish between English words and nonwords has been modeled using a deep learning convolutional network model that simulates a ventral pathway in which lexical representations of different granularity develop. However, given that pigeons (Columba livia), whose brain morphology is drastically different, can also be trained to distinguish between English words and nonwords, it appears that a less species-specific learning algorithm may be required to explain this behavior. Accordingly, we examined whether the learning model of Rescorla and Wagner, which has proved amazingly fruitful in understanding animal and human learning, could account for these data. We show that a discrimination learning network using gradient orientation features as input units and word and nonword units as outputs succeeds in predicting baboon lexical decision behavior, including key lexical similarity effects and the ups and downs in accuracy as learning unfolds, with surprising precision. The model's performance, in which words are not explicitly represented, is remarkable because it is usually assumed that lexicality decisions, including the decisions made by baboons and pigeons, are mediated by explicit lexical representations. By contrast, our results suggest that in learning to perform lexical decision tasks, baboons and pigeons do not construct a hierarchy of lexical units. Rather, they make optimal use of low-level information obtained through the massively parallel processing of gradient orientation features. Accordingly, we suggest that reading in humans first involves learning a high-level system building on letter representations acquired from explicit instruction in literacy, which is then integrated into a conventionalized oral communication system, and that, like the latter, fluent reading involves the massively parallel processing of low-level features encoding semantic contrasts.
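    The Rescorla-Wagner rule at the heart of this account is simple enough to sketch: cue weights are nudged in proportion to the prediction error. Below, two hypothetical feature cues shared between a "word" and a "nonword" stimulus are trained on opposite outcomes; the feature names and learning rate are invented for illustration, not taken from the paper.

```python
# Rescorla-Wagner update: each active cue's weight moves toward the outcome
# in proportion to the prediction error (outcome minus summed prediction).
def rw_update(weights, cues, outcome, alpha=0.1):
    prediction = sum(weights.get(c, 0.0) for c in cues)
    error = outcome - prediction
    for c in cues:
        weights[c] = weights.get(c, 0.0) + alpha * error
    return weights

weights = {}
# Train: features of a "word" stimulus predict the word outcome (1.0),
# features of a "nonword" predict its absence (0.0); f2 is shared.
for _ in range(100):
    rw_update(weights, cues=("f1", "f2"), outcome=1.0)
    rw_update(weights, cues=("f2", "f3"), outcome=0.0)

word_score = weights["f1"] + weights["f2"]
nonword_score = weights["f2"] + weights["f3"]
print(word_score > nonword_score)  # True
```

Despite never representing "words" explicitly, the cue weights come to discriminate the two stimulus classes, which is the gist of the discrimination-learning account above.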

  20. Visual Representation Determines Search Difficulty: Explaining Visual Search Asymmetries

    Directory of Open Access Journals (Sweden)

    Neil eBruce

    2011-07-01

    Full Text Available In visual search experiments there exist a variety of experimental paradigms in which a symmetric set of experimental conditions yields asymmetric corresponding task performance. There are a variety of examples of this that currently lack a satisfactory explanation. In this paper, we demonstrate that distinct classes of asymmetries may be explained by virtue of a few simple conditions that are consistent with current thinking surrounding computational modeling of visual search and coding in the primate brain. This includes a detailed look at the role that stimulus familiarity plays in the determination of search performance. Overall, we demonstrate that all of these asymmetries have a common origin, namely, they are a consequence of the encoding that appears in the visual cortex. The analysis associated with these cases yields insight into the problem of visual search in general and predictions of novel search asymmetries.

  1. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond those of creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence, as well as to their role as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  2. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale, and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues, and improve carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% F-score and 93% balanced accuracy.
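    A minimal sketch of the BOF stage, assuming patch descriptors are already extracted (here random stand-ins for the learned T-RICA features): build a codebook with k-means, then represent an image by its normalized histogram of codeword assignments. The dimensions and cluster count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(data, k, iters=10):
    # Tiny Lloyd's-algorithm k-means to learn the visual codebook.
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def bof_histogram(patches, centers):
    # Assign each patch to its nearest codeword and normalize the counts.
    labels = np.argmin(((patches[:, None] - centers) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

descriptors = rng.normal(size=(500, 8))   # pooled patch descriptors (stand-in)
codebook = kmeans(descriptors, k=16)
image_patches = rng.normal(size=(60, 8))  # patches from one image
h = bof_histogram(image_patches, codebook)
print(h.shape, round(float(h.sum()), 6))  # (16,) 1.0
```

The resulting histogram is the image-level feature fed to a classifier; the paper's contribution is learning the patch descriptors themselves with UFL instead of using SIFT or DCT.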

  3. Reflexive Learning through Visual Methods

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2014-01-01

    What. This chapter concerns how visual methods and visual materials can support visually oriented, collaborative, and creative learning processes in education. The focus is on facilitation (guiding, teaching) with visual methods in learning processes that are designerly or involve design. Visual methods are exemplified through two university classroom cases about collaborative idea generation processes. The visual methods and materials in the cases are photo elicitation using photo cards, and modeling with LEGO Serious Play sets. Why. The goal is to encourage the reader, whether student or professional, to facilitate with visual methods in a critical, reflective, and experimental way. The chapter offers recommendations for facilitating with visual methods to support playful, emergent designerly processes. The chapter also has a critical, situated perspective. Where. This chapter offers case…

  4. COALA-System for Visual Representation of Cryptography Algorithms

    Science.gov (United States)

    Stanisavljevic, Zarko; Stanisavljevic, Jelena; Vuletic, Pavle; Jovanovic, Zoran

    2014-01-01

    Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper…

  5. The Peircean contribution to an indexical representation of visual images

    Directory of Open Access Journals (Sweden)

    Virginia Bentes Pinto

    2008-04-01

    Full Text Available Although visual images have gained great importance as sources of information throughout history, it cannot be denied that the newest information and communication technologies (ICT) have drawn to them the attention of experts from the most diverse fields of knowledge, such as art, biology, astronomy, archeology, history, health, fashion, decoration, public relations, publishing, engineering, and architecture, among others. This paper presents theoretical reflections concerning representation from Peirce's perspective, in the context of the new approaches used for the treatment of visual images, using as examples the paradigms of manual, semi-automatic, automatic, and mixed index representation. The results of the experiments show that the difficulties found in constructing an index representation of this type of document originate in the complexity inherent in the process of production and reception of the imagetic sign.

  6. Data Representations, Transformations, and Statistics for Visual Reasoning

    CERN Document Server

    Maciejewski, Ross

    2011-01-01

    Analytical reasoning techniques are methods by which users explore their data to obtain insight and knowledge that can directly support situational awareness and decision making. Recently, the analytical reasoning process has been augmented through the use of interactive visual representations and tools which utilize cognitive, design, and perceptual principles. These tools are commonly referred to as visual analytics tools, and the underlying methods and principles have roots in a variety of disciplines. This chapter provides an introduction for young researchers and an overview of common visual

  7. Deep neural networks rival the representation of primate IT cortex for core visual object recognition.

    Directory of Open Access Journals (Sweden)

    Charles F Cadieu

    2014-12-01

    Full Text Available The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
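    The comparison logic (score a representation by how well a simple decoder trained on it generalizes to held-out exemplars) can be sketched on synthetic data. This is not the paper's kernel analysis, just a nearest-centroid stand-in with invented class structure and dimensions: the "good" representation separates classes more than the "poor" one.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_representation(n_classes, n_per_class, dim, separation):
    # Synthetic "neural" features: class means plus unit-variance noise.
    means = rng.normal(0.0, separation, (n_classes, dim))
    X = means[:, None, :] + rng.normal(0.0, 1.0, (n_classes, n_per_class, dim))
    y = np.repeat(np.arange(n_classes), n_per_class)
    return X.reshape(-1, dim), y

def decoding_accuracy(X, y):
    # Nearest-class-centroid decoder with a train/test split over exemplars.
    train = np.arange(len(y)) % 2 == 0
    classes = np.unique(y)
    centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in classes])
    d = ((X[~train][:, None] - centroids) ** 2).sum(-1)
    return (classes[d.argmin(axis=1)] == y[~train]).mean()

X_good, y = make_representation(8, 40, 32, separation=2.0)
X_poor, _ = make_representation(8, 40, 32, separation=0.2)
print(decoding_accuracy(X_good, y) > decoding_accuracy(X_poor, y))  # True
```

The paper's actual metric additionally corrects for noise, recording sites, trials, and classifier complexity, but the underlying question (which representation supports better generalization?) is the same.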

  8. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

When objects undergo large pose changes, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail to track them. To address this issue, in this study, the authors propose an online algorithm that combines multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can serve as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then an MIL classifier learns from the sparse codes to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to reduce the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.
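The local sparse codes described above represent each image patch over an overcomplete dictionary. A minimal sketch of one common way to obtain such codes, greedy orthogonal matching pursuit (the abstract does not name the exact solver, so treat this as an illustrative stand-in):

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: approximate a patch x with at
    most k atoms from the (column-normalized) dictionary D."""
    residual, support = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the selected atoms jointly, then update the residual
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code
```

Each patch's sparse code would then serve as one instance inside the positive or negative bags fed to the MIL classifier.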

  9. Apparatus for producing a visual representation of a radiographic scan

    International Nuclear Information System (INIS)

    Hounsfield, G.N.

    1976-01-01

    An apparatus is disclosed for providing a visual representation of the absorption or transmission coefficients of the elements of a two dimensional matrix of elements notionally defined in a cross-sectional plane through a body. The representation is in the form of an analogue display comprising superimposed lines of information scanned on the surface of a suitable screen, the brightness of each line being indicative of the absorption suffered by penetrating radiation on traversing a respective path through said plane of the body. The orientation of each scanned line depends on the orientation of the respective path with respect to the body. 7 Claims, 4 Drawing Figures

  10. Negative emotion boosts quality of visual working memory representation.

    Science.gov (United States)

    Xie, Weizhen; Zhang, Weiwei

    2016-08-01

Negative emotion impacts a variety of cognitive processes, including working memory (WM). The present study investigated whether negative emotion modulated WM capacity (quantity) or resolution (quality), 2 independent limits on WM storage. In Experiment 1, observers tried to remember several colors over a 1-s delay and then recalled the color of a randomly picked memory item by clicking a best-matching color on a continuous color wheel. On each trial, before the visual WM task, 1 of 3 emotion conditions (negative, neutral, or positive) was induced by having observers rate the valence of an International Affective Picture System image. Visual WM under negative emotion showed enhanced resolution compared with the neutral and positive conditions, whereas the number of retained representations was comparable across the 3 emotion conditions. These effects generalized to closed-contour shapes in Experiment 2. To isolate the locus of these effects, Experiment 3 adopted an iconic memory version of the color recall task by eliminating the 1-s retention interval. No significant change in the quantity or quality of iconic memory was observed, suggesting that the resolution effects in the first 2 experiments were critically dependent on the need to retain memory representations over a short period of time. Taken together, these results suggest that negative emotion selectively boosts visual WM quality, supporting the dissociable nature of the quantitative and qualitative aspects of visual WM representation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
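Resolution in such continuous color-recall tasks is usually summarized by the dispersion of recall errors around the true color. A small sketch of one standard summary, the circular standard deviation (illustrative only; studies like this one typically fit a mixture model to separate quantity from quality):

```python
import numpy as np

def circular_sd_deg(errors_deg):
    """Circular standard deviation (in degrees) of recall errors on a
    360-degree color wheel; lower values mean higher-resolution memory."""
    rad = np.deg2rad(np.asarray(errors_deg, dtype=float))
    # mean resultant length R of the error distribution on the circle
    R = np.hypot(np.mean(np.sin(rad)), np.mean(np.cos(rad)))
    return float(np.rad2deg(np.sqrt(-2.0 * np.log(R))))
```

Tightly clustered errors give a small circular SD (high quality), while errors scattered around the wheel give a large one.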

  11. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., either member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and will hopefully inspire other visualization morphings and associated transition strategies.

  12. Computational Modelling of the Neural Representation of Object Shape in the Primate Ventral Visual System

    Directory of Open Access Journals (Sweden)

Akihiro Eguchi

    2015-08-01

Full Text Available Neurons in successive stages of the primate ventral visual pathway encode the spatial structure of visual objects. In this paper, we investigate through computer simulation how these cell firing properties may develop through unsupervised visually-guided learning. Individual neurons in the model are shown to exploit statistical regularity and temporal continuity of the visual inputs during training to learn firing properties that are similar to neurons in V4 and TEO. Neurons in V4 encode the conformation of boundary contour elements at a particular position within an object regardless of the location of the object on the retina, while neurons in TEO integrate information from multiple boundary contour elements. This representation goes beyond mere object recognition, in which neurons simply respond to the presence of a whole object, and provides an essential foundation from which the brain is subsequently able to recognise the whole object.
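Models in this family typically exploit temporal continuity through a trace learning rule: a decaying trace of recent postsynaptic activity enters the Hebbian update, so views that follow each other in time come to strengthen the same cells. A minimal single-layer sketch (winner-take-all competition and all parameter values are simplifying assumptions, not the paper's exact model):

```python
import numpy as np

def trace_learning_step(W, x, y_trace, lr=0.05, eta=0.8):
    """One update of a trace-rule competitive layer: the activity trace
    carries information across consecutive frames so that temporally
    adjacent views of an object strengthen the same output cells."""
    y = W @ x                                         # feedforward activation
    winner = np.argmax(y)
    y_now = np.zeros(len(W))
    y_now[winner] = 1.0                               # competition simplified to WTA
    y_trace = eta * y_trace + (1.0 - eta) * y_now     # decaying temporal trace
    W += lr * np.outer(y_trace, x)                    # Hebbian update with trace
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # weight normalization
    return W, y_trace
```

Presenting the transforms of one object in temporal succession, then switching objects, is what lets the trace bind different views of the same object onto shared cells.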

  13. Learning from Balance Sheet Visualization

    Science.gov (United States)

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  14. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

With the dramatic growth of image data on the web, there is an increasing demand for algorithms capable of understanding the visual information automatically. Deep learning, as one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,

  15. Associative learning changes cross-modal representations in the gustatory cortex.

    Science.gov (United States)

    Vincis, Roberto; Fontanini, Alfredo

    2016-08-30

A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood whether cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both the prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations.

  16. The body voyage as visual representation and art performance

    DEFF Research Database (Denmark)

    Olsén, Jan-Eric

    2011-01-01

This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal...... with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition...

  17. The body voyage as visual representation and art performance.

    Science.gov (United States)

    Olsén, Jan Eric

    2011-01-01

This paper looks at the notion of the body as an interior landscape that is made intelligible through visual representation. It discerns the key figure of the inner corporeal voyage, identifies its main elements and examines how contemporary artists working with performances and installations deal with it. A further aim of the paper is to discuss what kind of image of the body is conveyed through medical visual technologies, such as endoscopy, and relate it to contemporary discussions on embodiment, embodied vision and bodily presence. The paper concludes with a recent exhibition by the French artist Christian Boltanski, which gives a somewhat different meaning to the idea of the body voyage.

  18. Mathematical Representation Ability by Using Project Based Learning on the Topic of Statistics

    Science.gov (United States)

    Widakdo, W. A.

    2017-09-01

Given the importance of mathematics in everyday life, mastery of its subject areas is essential. Representation ability is one of the fundamental abilities used in mathematics to connect abstract ideas with logical thinking in order to understand mathematics. The researcher observed a lack of mathematical representation ability and sought an alternative solution through project-based learning. This research uses a literature study of books and journal articles to examine the importance of mathematical representation ability in mathematics learning and how project-based learning can increase this ability on the topic of statistics. The indicators for mathematical representation ability in this research are classified as visual representation (picture, diagram, graph, or table); symbolic representation (mathematical statement, mathematical notation, numerical/algebraic symbols); and verbal representation (written text). This article explains why project-based learning can influence students' mathematical representation by drawing on theories from cognitive psychology, and shows an example of project-based learning that can be used in teaching statistics, a mathematics topic that is very useful for analyzing data.

  19. A deep learning / neuroevolution hybrid for visual control

    DEFF Research Database (Denmark)

Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshøj

    2017-01-01

    This paper presents a deep learning / neuroevolution hybrid approach called DLNE, which allows FPS bots to learn to aim & shoot based only on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature...... representations, while the evolving network takes those features as inputs to infer actions. The results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks trained through evolution....

  20. The development of hand-centred visual representations in the primate brain: a computer modelling study using natural visual scenes.

    Directory of Open Access Journals (Sweden)

    Juan Manuel Galeazzi

    2015-12-01

Full Text Available Neurons that respond to visual targets in a hand-centred frame of reference have been found within various areas of the primate brain. We investigate how hand-centred visual representations may develop in a neural network model of the primate visual system called VisNet, when the model is trained on images of the hand seen against natural visual scenes. The simulations show how such neurons may develop through a biologically plausible process of unsupervised competitive learning and self-organisation. In an advance on our previous work, the visual scenes consisted of multiple targets presented simultaneously with respect to the hand. Three experiments are presented. First, VisNet was trained with computerized images consisting of a realistic image of a hand and a variety of natural objects, presented against different textured backgrounds during training. The network was then tested with just one textured object near the hand in order to verify whether the output cells were capable of building hand-centred representations with a single localised receptive field. We explain the underlying principles of the statistical decoupling that allows the output cells of the network to develop single localised receptive fields even when the network is trained with multiple objects. In a second simulation we examined how some of the cells with hand-centred receptive fields decreased their shape selectivity and started responding to a localised region of hand-centred space as the number of objects presented in overlapping locations during training increased. Lastly, we explored the same learning principles by training the network with natural visual scenes collected by volunteers. These results provide an important step in showing how single, localised, hand-centred receptive fields could emerge under more ecologically realistic visual training conditions.

  1. Change blindness and visual memory: visual representations get rich and act poor.

    Science.gov (United States)

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  2. Learning Document Semantic Representation with Hybrid Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Yan Yan

    2015-01-01

it is also an effective way to remove noise from the different document representation types; the DBN can extract deeper abstractions of the document, enabling the model to learn a sufficient semantic representation. At the same time, we explore different input strategies for semantic distributed representation. Experimental results show that our model using word embeddings instead of single words has better performance.
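One of the input strategies being compared, feeding word embeddings rather than single-word (one-hot) features, can be sketched as a simple average-of-embeddings document vector (the embedding table and dimensionality here are hypothetical, not the paper's setup):

```python
import numpy as np

def doc_vector(tokens, embeddings, dim):
    """Average the embeddings of the in-vocabulary tokens to form one
    distributed document representation; fall back to a zero vector
    when no token is covered by the embedding table."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
```

A dense vector like this, rather than a sparse bag-of-words, is the kind of input a hybrid deep belief network could consume in its first layer.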

  3. Recommendations for benefit-risk assessment methodologies and visual representations

    DEFF Research Database (Denmark)

    Hughes, Diana; Waddingham, Ed; Mt-Isa, Shahrul

    2016-01-01

PURPOSE: The purpose of this study is to draw on the practical experience from the PROTECT BR case studies and make recommendations regarding the application of a number of methodologies and visual representations for benefit-risk assessment. METHODS: Eight case studies based on the benefit-risk balance of real medicines were used to test various methodologies that had been identified from the literature as having potential applications in benefit-risk assessment. Recommendations were drawn up based on the results of the case studies. RESULTS: A general pathway through the case studies...

  4. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    Science.gov (United States)

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  5. Visual Representations of DNA Replication: Middle Grades Students' Perceptions and Interpretations

    Science.gov (United States)

    Patrick, Michelle D.; Carter, Glenda; Wiebe, Eric N.

    2005-01-01

    Visual representations play a critical role in the communication of science concepts for scientists and students alike. However, recent research suggests that novice students experience difficulty extracting relevant information from representations. This study examined students' interpretations of visual representations of DNA replication. Each…

  6. Representations in learning new faces: evidence from prosopagnosia.

    Science.gov (United States)

    Polster, M R; Rapcsak, S Z

    1996-05-01

We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given "shallow" encoding instructions to focus on facial features. By contrast, he performs relatively well with "deep" encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations.

  7. The role of visual representations within working memory for paired-associate and serial order of spoken words.

    Science.gov (United States)

    Ueno, Taiji; Saito, Satoru

    2013-09-01

    Caplan and colleagues have recently explained paired-associate learning and serial-order learning with a single-mechanism computational model by assuming differential degrees of isolation. Specifically, two items in a pair can be grouped together and associated to positional codes that are somewhat isolated from the rest of the items. In contrast, the degree of isolation among the studied items is lower in serial-order learning. One of the key predictions drawn from this theory is that any variables that help chunking of two adjacent items into a group should be beneficial to paired-associate learning, more than serial-order learning. To test this idea, the role of visual representations in memory for spoken verbal materials (i.e., imagery) was compared between two types of learning directly. Experiment 1 showed stronger effects of word concreteness and of concurrent presentation of irrelevant visual stimuli (dynamic visual noise: DVN) in paired-associate memory than in serial-order memory, consistent with the prediction. Experiment 2 revealed that the irrelevant visual stimuli effect was boosted when the participants had to actively maintain the information within working memory, rather than feed it to long-term memory for subsequent recall, due to cue overloading. This indicates that the sensory input from irrelevant visual stimuli can reach and affect visual representations of verbal items within working memory, and that this disruption can be attenuated when the information within working memory can be efficiently supported by long-term memory for subsequent recall.

  8. Supporting Fieldwork Learning by Visual Documentation and Reflection

    DEFF Research Database (Denmark)

    Saltofte, Margit

    2017-01-01

Photos can be used as supplements to written fieldnotes and as sources for mediating reflection during fieldwork and analysis. As part of a field diary, photos can support the recall of experiences and a reflective distance to the events. Photography, as visual representation, can also lead...... to reflection on learning and knowledge production in the process of learning how to conduct fieldwork. Pictures can open the way for abstractions and hidden knowledge, which might otherwise be difficult to formulate in words. However, writing and written field notes cannot be fully replaced by photos...... the role played by photos in their learning process. For students, photography is an everyday documentation form that can support their memory of field experience and serve as a vehicle for the analysis of data. The article discusses how photos and visual representations support fieldwork learning...

  9. Weighted Discriminative Dictionary Learning based on Low-rank Representation

    International Nuclear Information System (INIS)

    Chang, Heyou; Zheng, Hao

    2017-01-01

Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. The dictionary plays an important role in low-rank representation. With respect to a semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposes weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization associates the label information of both training samples and dictionary atoms, and encourages the generation of a discriminative representation with class-wise block-diagonal structure, which can further improve classification performance when both training and testing images are corrupted with large noise. Experimental results demonstrate the advantages of the proposed method over state-of-the-art methods. (paper)
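The weighted regularization term described above couples the labels of training samples and dictionary atoms. A toy sketch of the idea, assuming the weight matrix simply marks atom/sample pairs with mismatched labels so that penalizing those entries pushes the representation toward a class-wise block-diagonal structure (the paper's actual formulation may differ):

```python
import numpy as np

def block_diagonal_weight(train_labels, atom_labels):
    """Weight matrix for the representation-regularization term: 0 where
    a dictionary atom and a training sample share a class label (entries
    we want to keep), 1 elsewhere (off-block entries to suppress)."""
    A = np.asarray(atom_labels)[:, None]
    T = np.asarray(train_labels)[None, :]
    return (A != T).astype(float)

def offblock_penalty(Z, Wt):
    """Frobenius-style penalty on the off-block entries of the
    representation matrix Z; zero iff Z is class-wise block-diagonal."""
    return float(np.sum((Wt * Z) ** 2))
```

Adding this penalty to a low-rank representation objective is one way to encourage each sample to be coded mainly by atoms of its own class.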

  10. Visual attention to features by associative learning.

    Science.gov (United States)

    Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay

    2014-11-01

Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned associations on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can be changed by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting with the color associated with the alternative target shape. This bias seemed to be driven by learned association between shapes and colors, and not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning (likely mediated by mechanisms underlying visual object representation) can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

    Energy Technology Data Exchange (ETDEWEB)

    Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng

    2017-05-30

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  12. ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.

    Science.gov (United States)

    Hohman, Fred; Hodas, Nathan; Chau, Duen Horng

    2017-05-01

    Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.

  13. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    Science.gov (United States)

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

Statistical learning, the acquisition of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  14. Shape representations in the primate dorsal visual stream

    Directory of Open Access Journals (Sweden)

Tom Theys

    2015-04-01

Full Text Available The primate visual system extracts object shape information for object recognition in the ventral visual stream. Recent research has demonstrated that object shape is also processed in the dorsal visual stream, which is specialized for spatial vision and the planning of actions. A number of studies have investigated the coding of 2D shape in the anterior intraparietal area (AIP), one of the end-stage areas of the dorsal stream which has been implicated in the extraction of affordances for the purpose of grasping. These findings challenge the current understanding of area AIP as a critical stage in the dorsal stream for the extraction of object affordances. The representation of three-dimensional (3D) shape has been studied in two interconnected areas known to be critical for object grasping: area AIP and area F5a in the ventral premotor cortex (PMv), to which AIP projects. In both areas neurons respond selectively to 3D shape defined by binocular disparity, but the latency of the neural selectivity is approximately 10 ms longer in F5a compared to AIP, consistent with its higher position in the hierarchy of cortical areas. Furthermore, F5a neurons were more sensitive to small amplitudes of 3D curvature and could detect subtle differences in 3D structure more reliably than AIP neurons. In both areas, 3D-shape selective neurons were co-localized with neurons showing motor-related activity during object grasping in the dark, indicating a close convergence of visual and motor information on the same clusters of neurons.

  15. Poincaré Embeddings for Learning Hierarchical Representations

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Abstract: Representation learning has become an invaluable approach for learning from symbolic data such as text and graphs. However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically do not account for this property. In this talk, I will discuss a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space, or more precisely into an n-dimensional Poincaré ball. Due to the underlying hyperbolic geometry, this allows us to learn parsimonious representations of symbolic data by simultaneously capturing hierarchy and similarity. We introduce an efficient algorithm to learn the embeddings based on Riemannian optimization and show experimentally that Poincaré embeddings outperform Euclidean embeddings significantly on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
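As a rough illustration of the geometry this record describes (not code from the talk or the underlying paper; `poincare_distance` and `riemannian_update` are hypothetical names), the Poincaré-ball distance and a single Riemannian SGD step can be sketched as:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between two points inside the unit Poincare ball."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    arg = 1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv) + eps)
    return np.arccosh(arg)

def riemannian_update(x, euclidean_grad, lr=0.01, eps=1e-5):
    """One Riemannian SGD step: rescale the Euclidean gradient by the
    inverse metric of the Poincare ball, then retract back inside the ball."""
    scale = ((1.0 - np.dot(x, x)) ** 2) / 4.0   # inverse-metric factor
    x_new = x - lr * scale * euclidean_grad
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:                              # project onto the open ball
        x_new = x_new / norm * (1.0 - eps)
    return x_new
```

Because distances blow up near the boundary of the ball, children in a hierarchy can be pushed outward while staying close to their parent, which is what lets a single embedding capture hierarchy and similarity simultaneously.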

  16. Reading visual representations of 'Ndabeni' in the public realms

    Directory of Open Access Journals (Sweden)

    Sipokazi Sambumbu

    2010-11-01

    Full Text Available This essay outlines and analyses contemporary image representations of Ndabeni (also called kwa-Ndabeni), a location near Cape Town where a group of people became confined between 1901 and 1936 following an outbreak of the bubonic plague in the city. This location was to shape Cape Town's landscape for a little less than thirty-five years, accommodating people who were forcibly removed from the Cape Town docklands and from District Six. Images representing this place have been produced, archived, recovered, modified, reproduced and circulated in different ways and contexts. Ndabeni has become public knowledge through public visual representations that have been produced across a range of sites in post-apartheid Cape Town. I focus on three sites: the Victoria and Alfred Waterfront, the District Six Museum, and the Eziko Restaurant and Catering School. In each case I analyse the processes through which the Ndabeni images in question have been used and reused over time in changing contexts. I analyse the 'modalities' in which these images have been composed, interpreted and employed and in which knowledge has been mediated. I explore the contents and contexts of the storyboards and exhibition panels that purport to represent Ndabeni. Finally, I discuss potential meanings that could be constructed if the images could be read independent of the texts.

  17. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    Science.gov (United States)

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of…

  18. Three-dimensional visual feature representation in the primary visual cortex.

    Science.gov (United States)

    Tanaka, Shigeru; Moon, Chan-Hong; Fukuda, Mitsuhiro; Kim, Seong-Gi

    2011-12-01

    In the cat primary visual cortex, it is accepted that neurons optimally responding to similar stimulus orientations are clustered in a column extending from the superficial to deep layers. The cerebral cortex is, however, folded inside a skull, which makes gyri and fundi. The primary visual area of cats, area 17, is located on the fold of the cortex called the lateral gyrus. These facts raise the question of how to reconcile the tangential arrangement of the orientation columns with the curvature of the gyrus. In the present study, we show a possible configuration of feature representation in the visual cortex using a three-dimensional (3D) self-organization model. We took into account preferred orientation, preferred direction, ocular dominance and retinotopy, assuming isotropic interaction. We performed computer simulation only in the middle layer at the beginning and expanded the range of simulation gradually to other layers, which was found to be a unique method in the present model for obtaining orientation columns spanning all the layers in the flat cortex. Vertical columns of preferred orientations were found in the flat parts of the model cortex. On the other hand, in the curved parts, preferred orientations were represented in wedge-like columns rather than straight columns, and preferred directions were frequently reversed in the deeper layers. Singularities associated with orientation representation appeared as warped lines in the 3D model cortex. Direction reversal appeared on the sheets that were delimited by orientation-singularity lines. These structures emerged from the balance between periodic arrangements of preferred orientations and vertical alignment of the same orientations. Our theoretical predictions about orientation representation were confirmed by multi-slice, high-resolution functional MRI in the cat visual cortex. We obtained a close agreement between theoretical predictions and experimental observations. The present study throws a…

  19. Supporting Multimedia Learning with Visual Signalling and Animated Pedagogical Agent: Moderating Effects of Prior Knowledge

    Science.gov (United States)

    Johnson, A. M.; Ozogul, G.; Reisslein, M.

    2015-01-01

    An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…

  20. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    Science.gov (United States)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections group in retaining knowledge of difficult instances of sectional anatomy after the retention interval. 
The benefit…

  1. Is This Real Life? Is This Just Fantasy?: Realism and Representations in Learning with Technology

    Science.gov (United States)

    Sauter, Megan Patrice

    Students often engage in hands-on activities during science learning; however, financial and practical constraints often limit the availability of these activities. Recent advances in technology have led to increases in the use of simulations and remote labs, which attempt to recreate hands-on science learning via computer. Remote labs and simulations are interesting from a cognitive perspective because they allow for different relations between representations and their referents. Remote labs are unique in that they provide a yoked representation, meaning that the representation of the lab on the computer screen is actually linked to that which it represents: a real scientific device. Simulations merely represent the lab and are not connected to any real scientific devices. However, the type of visual representations used in the lab may modify the effects of the lab technology. The purpose of this dissertation is to examine the relation between representation and technology and its effects on students' psychological experiences using online science labs. Undergraduates participated in two studies that investigated the relation between technology and representation. In the first study, participants performed either a remote lab or a simulation incorporating one of two visual representations, either a static image or a video of the equipment. Although participants in both lab conditions learned, participants in the remote lab condition had more authentic experiences. However, effects were moderated by the realism of the visual representation. Participants who saw a video were more invested and felt the experience was more authentic. In a second study, participants performed a remote lab and either saw the same video as in the first study, an animation, or the video and an animation. Most participants had an authentic experience because both representations evoked strong feelings of presence. However, participants who saw the video were more likely to believe the…

  2. Visual Representations on High School Biology, Chemistry, Earth Science, and Physics Assessments

    Science.gov (United States)

    LaDue, Nicole D.; Libarkin, Julie C.; Thomas, Stephen R.

    2015-01-01

    The pervasive use of visual representations in textbooks, curricula, and assessments underscores their importance in K-12 science education. For example, visual representations figure prominently in the recent publication of the Next Generation Science Standards (NGSS Lead States in Next generation science standards: for states, by states.…

  3. Discriminative object tracking via sparse representation and online dictionary learning.

    Science.gov (United States)

    Xie, Yuan; Zhang, Wensheng; Li, Cuihua; Lin, Shuyang; Qu, Yanyun; Zhang, Yinghua

    2014-04-01

    We propose a robust tracking algorithm based on local sparse coding with discriminative dictionary learning and a new keypoint matching scheme. This algorithm consists of two parts: the local sparse coding with online updated discriminative dictionary for tracking (SOD part), and the keypoint matching refinement for enhancing the tracking performance (KP part). In the SOD part, the local image patches of the target object and background are represented by their sparse codes using an over-complete discriminative dictionary. Such a discriminative dictionary, which encodes the information of both the foreground and the background, may provide more discriminative power. Furthermore, in order to adapt the dictionary to the variation of the foreground and background during the tracking, an online learning method is employed to update the dictionary. The KP part utilizes a refined keypoint matching scheme to improve the performance of the SOD. With the help of sparse representation and the online updated discriminative dictionary, the KP part is more robust than the traditional method at rejecting incorrect matches and eliminating outliers. The proposed method is embedded into a Bayesian inference framework for visual tracking. Experimental results on several challenging video sequences demonstrate the effectiveness and robustness of our approach.
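A minimal numpy sketch of the idea underlying the SOD part, assuming foreground/background sub-dictionaries and using ISTA for the lasso subproblem (all names hypothetical; this is not the authors' implementation, which also updates the dictionary online):

```python
import numpy as np

def sparse_code(patch, D, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||patch - D a||^2 + lam*||a||_1 via ISTA.
    D: (d, k) dictionary with unit-norm columns."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - patch)
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

def discriminative_score(patch, D_fg, D_bg, lam=0.1):
    """Positive when the patch is reconstructed better by the foreground
    sub-dictionary than by the background one."""
    e_fg = np.linalg.norm(patch - D_fg @ sparse_code(patch, D_fg, lam))
    e_bg = np.linalg.norm(patch - D_bg @ sparse_code(patch, D_bg, lam))
    return e_bg - e_fg
```

Here the discriminative cue is simply the difference of reconstruction errors under the two sub-dictionaries; the tracker described above embeds such scores in a Bayesian inference framework.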

  4. A comparative evaluation of supervised and unsupervised representation learning approaches for anaplastic medulloblastoma differentiation

    Science.gov (United States)

    Cruz-Roa, Angel; Arevalo, John; Basavanhally, Ajay; Madabhushi, Anant; González, Fabio

    2015-01-01

    Learning data representations directly from the data itself is an approach that has shown great success in different pattern recognition problems, outperforming state-of-the-art feature extraction schemes for different tasks in computer vision, speech recognition and natural language processing. Representation learning applies unsupervised and supervised machine learning methods to large amounts of data to find building-blocks that better represent the information in it. Digitized histopathology images represent a very good testbed for representation learning since they involve large amounts of highly complex visual data. This paper presents a comparative evaluation of different supervised and unsupervised representation learning architectures to specifically address open questions on which type of architecture (deep or shallow) and which type of learning (unsupervised or supervised) are optimal. In this paper we limit ourselves to addressing these questions in the context of distinguishing between anaplastic and non-anaplastic medulloblastomas from routine haematoxylin and eosin stained images. The unsupervised approaches evaluated were sparse autoencoders and topographic reconstruct independent component analysis, and the supervised approach was convolutional neural networks. Experimental results show that shallow architectures with more neurons are better than deeper architectures without taking into account local space invariances and that topographic constraints provide useful invariant features in scale and rotations for efficient tumor differentiation.
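For orientation, the unsupervised baseline in such comparisons, a sparse autoencoder, minimizes reconstruction error plus a KL sparsity penalty on the mean hidden activations. A schematic numpy version of that objective (illustrative names and shapes, not the authors' code):

```python
import numpy as np

def sparse_ae_loss(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """Reconstruction + KL-sparsity loss of a one-hidden-layer sparse
    autoencoder. rho: target mean activation; beta: sparsity weight."""
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))      # sigmoid hidden activations
    Xhat = H @ W2 + b2                            # linear reconstruction
    recon = 0.5 * np.mean(np.sum((Xhat - X) ** 2, axis=1))
    rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + beta * kl
```

Minimizing this objective (e.g. with L-BFGS or SGD on W1, b1, W2, b2) yields the hidden-layer features that are then fed to a classifier in the unsupervised arm of the comparison.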

  5. Analysis of visual representation techniques for product configuration systems in industrial companies

    DEFF Research Database (Denmark)

    Shafiee, Sara; Kristjansdottir, Katrin; Hvam, Lars

    2016-01-01

    with knowledge representations and communications with domain experts. The results presented in the paper are therefore aimed at providing insight into the impact of using visual knowledge representation techniques in PCS projects. The findings indicate that the use of visual knowledge representation techniques...... in PCS projects will result in improved quality of maintenance and development support for the knowledge base and improved quality of the communication with domain experts....

  6. Differentiating qualitative representations into learning spaces

    NARCIS (Netherlands)

    Liem, J.; Beek, W.; Bredeweg, B.; de Kleer, J.; Forbus, K.D.

    2010-01-01

    The DynaLearn interactive learning environment allows learners to construct their conceptual ideas and investigate the logical consequences of those ideas. By building and simulating causal models, students develop an understanding of how systems work. The DynaLearn interactive learning environment…

  7. Learned reward association improves visual working memory.

    Science.gov (United States)

    Gong, Mengyuan; Li, Sheng

    2014-04-01

    Statistical regularities in the natural environment play a central role in adaptive behavior. Among other regularities, reward association is potentially the most prominent factor that influences our daily life. Recent studies have suggested that pre-established reward association yields strong influence on the spatial allocation of attention. Here we show that reward association can also improve visual working memory (VWM) performance when the reward-associated feature is task-irrelevant. We established the reward association during a visual search training session, and investigated the representation of reward-associated features in VWM by the application of a change detection task before and after the training. The results showed that the improvement in VWM was significantly greater for items in the color associated with high reward than for those in low reward-associated or nonrewarded colors. In particular, the results from control experiments demonstrate that the observed reward effect in VWM could not be sufficiently accounted for by attentional capture toward the high reward-associated item. This was further confirmed when the effect of attentional capture was minimized by presenting the items in the sample and test displays of the change detection task with the same color. The results showed significantly larger improvement in VWM performance when the items in a display were in the high reward-associated color than those in the low reward-associated or nonrewarded colors. Our findings suggest that, apart from inducing space-based attentional capture, the learned reward association could also facilitate the perceptual representation of high reward-associated items through feature-based attentional modulation.
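Change-detection performance in VWM studies like this one is conventionally summarized with Cowan's K and signal-detection d′. The standard estimators (textbook formulas, not code from this study) are one-liners:

```python
from statistics import NormalDist

def cowans_k(hit_rate, fa_rate, set_size):
    """Cowan's K: estimated number of items held in visual working memory
    in a single-probe change-detection task."""
    return set_size * (hit_rate - fa_rate)

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity for the same hit/false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a hit rate of 0.9 and a false-alarm rate of 0.1 at set size 4 gives K ≈ 3.2 items.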

  8. Visual Representations of Sexual Violence in Online News Outlets

    Directory of Open Access Journals (Sweden)

    Sandra Schwark

    2017-05-01

    Full Text Available To study visual representations of sexual violence, photographs accompanying German Internet news articles that appeared between January 2013 and March 2015 (N = 42 were subjected to thematic analysis. Two main themes, consisting of several sub-themes, emerged from the data. The first theme was “rape myths,” illustrating a stereotypical view of sexual violence. It consisted of three sub-themes: “beauty standards,” referring to the fact that all women in our sample fit western beauty standards, “physical violence,” as most images implied some form of physical violence, and finally “location,” suggesting that rape only happens in secluded outdoor areas. These findings suggest that the images from our sample perpetuate certain rape myths. The second theme was “portrayal of victimhood,” referring to the way victims of sexual violence were portrayed in photographs. The analysis of the sub-theme “passivity” showed that these portrayals fit a certain stereotype: the women were shown to be weak and helpless rather than individuals with agency and able to leave their status as a victim. Further sub-themes were “background,” “organization of space,” “camera perspective,” and “lighting.” We discuss these findings in relation to possibly reinforcing rape myths in society and as an issue in creating a biased perception of women who have experienced sexual violence.

  9. Women in Sanaa: Public Appearance and Visual Representation

    Directory of Open Access Journals (Sweden)

    Irina Linke

    2009-03-01

    Full Text Available An exponential increase in media usage in the Yemeni capital, Sanaa (foreign satellite channels, Yemeni TV, photography and video) changes not only the (media) public (Öffentlichkeit), but social spaces in a local setting within a particular global-local framework. In this article I discuss women in the Yemeni capital who use television and other pictorial representations strategically, and, in reworking the frontiers between visibility and invisibility, change the gendered social spaces of their life world (Lebenswelt). Pictures, as parts of the life world, open up views into new spaces ([Blick-]Räume) and make new relationships ([Blick-]Kontakte) possible. Looks and gazes determine social space and play a part in the social construction of bodies and spaces. This is negotiated on the performative as well as on the discursive level. The case study I present is part of a larger research project based on one year of fieldwork, field notes and 45 hours of audio-visual material. Analysis of the discourses of young women about their own image practices reveals how they perceive the endangerment of a social order, how they articulate their interest in change, and their strategies for becoming "visible." Thus, this article refers to culturally different readings of what can be seen. URN: urn:nbn:de:0114-fqs0902150

  10. [Representation of letter position in visual word recognition process].

    Science.gov (United States)

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) High false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  11. Efficacy of Simulation-Based Learning of Electronics Using Visualization and Manipulation

    Science.gov (United States)

    Chen, Yu-Lung; Hong, Yu-Ru; Sung, Yao-Ting; Chang, Kuo-En

    2011-01-01

    Software for simulation-based learning of electronics was implemented to help learners understand complex and abstract concepts through observing external representations and exploring concept models. The software comprises modules for visualization and simulative manipulation. Differences in learning performance of using the learning software…

  12. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  13. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  14. Spectral Approaches to Learning Predictive Representations

    Science.gov (United States)

    2012-09-01

    …representation and a value function. In practice, we would like to be able to find a good set of features, without prior knowledge of the system model…

  15. Zero-Shot Learning by Generating Pseudo Feature Representations

    OpenAIRE

    Lu, Jiang; Li, Jin; Yan, Ziang; Zhang, Changshui

    2017-01-01

    Zero-shot learning (ZSL) is a challenging task aiming at recognizing novel classes without any training instances. In this paper we present a simple but high-performance ZSL approach by generating pseudo feature representations (GPFR). Given the dataset of seen classes and side information of unseen classes (e.g. attributes), we synthesize feature-level pseudo representations for novel concepts, which allows us access to the formulation of an unseen-class predictor. Firstly we design a Joint Att...
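A drastically simplified sketch of the general idea, synthesizing unseen-class prototypes from attribute similarity and then classifying by nearest prototype (this is not the paper's GPFR formulation; all names are hypothetical):

```python
import numpy as np

def pseudo_prototypes(seen_protos, seen_attrs, unseen_attrs):
    """Synthesize feature-space prototypes for unseen classes as
    attribute-similarity-weighted blends of seen-class prototypes.
    Assumes each unseen class shares at least one attribute with some
    seen class (otherwise the weights below are undefined)."""
    sim = np.maximum(unseen_attrs @ seen_attrs.T, 0.0)
    w = sim / sim.sum(axis=1, keepdims=True)
    return w @ seen_protos

def zsl_predict(x, protos):
    """Nearest-prototype prediction for one feature vector."""
    return int(np.argmin(np.linalg.norm(protos - x, axis=1)))
```

The synthesized prototypes stand in for the training instances the unseen classes lack, which is what makes a predictor over novel classes possible at all.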

  16. Learning word vector representations based on acoustic counts

    OpenAIRE

    Ribeiro, Sam; Watts, Oliver; Yamagishi, Junichi

    2017-01-01

    This paper presents a simple count-based approach to learning word vector representations by leveraging statistics of cooccurrences between text and speech. This type of representation requires two discrete sequences of units defined across modalities. Two possible methods for the discretization of an acoustic signal are presented, which are then applied to fundamental frequency and energy contours of a transcribed corpus of speech, yielding a sequence of textual objects (e.g. words, syllable...
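A toy illustration of the count-based idea (the "acoustic units" below are hypothetical labels; the paper actually discretizes F0 and energy contours): build a co-occurrence count matrix between words and acoustic units, then factorize it to get dense word vectors.

```python
import numpy as np

# Toy corpus: (word, discrete acoustic unit) pairs, mimicking the
# text/speech co-occurrence statistics described above.
pairs = [("cat", "hi"), ("cat", "hi"), ("cat", "lo"),
         ("dog", "lo"), ("dog", "lo"), ("dog", "hi"),
         ("bird", "hi"), ("bird", "hi"), ("bird", "hi")]

words = sorted({w for w, _ in pairs})
units = sorted({u for _, u in pairs})
C = np.zeros((len(words), len(units)))
for w, u in pairs:
    C[words.index(w), units.index(u)] += 1

# Low-rank factorization of the count matrix yields one dense vector per word.
U, S, Vt = np.linalg.svd(C, full_matrices=False)
word_vectors = U * S
```

Words with similar acoustic co-occurrence profiles end up close in the factorized space, which is the property the paper exploits.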

  17. A visual representation system for the scheduling and management of projects

    NARCIS (Netherlands)

    Pollalis, S.N.

    1992-01-01

    A VISUAL SCHEDULING AND MANAGEMENT SYSTEM (VSMS). This work proposes a new system for the visual representation of projects that displays the quantities of work, resources and cost. This new system, called Visual Scheduling and Management System, has a built-in hierarchical system to provide…

  18. Using Technology to Support Visual Learning Strategies

    Science.gov (United States)

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  19. Weakly supervised visual dictionary learning by harnessing image attributes.

    Science.gov (United States)

    Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang

    2014-12-01

    Bag-of-features (BoFs) representation has been extensively applied to deal with various computer vision applications. To extract discriminative and descriptive BoF, one important step is to learn a good dictionary to minimize the quantization loss between local features and codewords. While most existing visual dictionary learning approaches are engaged with unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping of the observed states, where the supervised information is stemmed from the hidden states of the HMRF. In such a way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms the state-of-the-art unsupervised dictionary learning approaches.
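The quantization step this record refers to (mapping local descriptors to codewords before any supervision enters) is, at its simplest, k-means plus histogramming. A plain-numpy sketch under that assumption (not the paper's HMRF model; names are illustrative):

```python
import numpy as np

def kmeans_codebook(features, k, n_iter=20, seed=0):
    """Learn a visual dictionary (codebook) with plain k-means.
    features: (n, d) local descriptors pooled from training images."""
    rng = np.random.default_rng(seed)
    # farthest-point initialization: deterministic and well separated
    centers = [features[rng.integers(len(features))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(features - c, axis=1) for c in centers],
                   axis=0)
        centers.append(features[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):                       # Lloyd iterations
        d = np.linalg.norm(features[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = features[assign == j].mean(axis=0)
    return centers

def bof_histogram(features, centers):
    """Quantize one image's descriptors to codewords; return a normalized BoF."""
    d = np.linalg.norm(features[:, None] - centers[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The quantization loss the abstract mentions is exactly the within-cluster distance that k-means minimizes; the paper's contribution is to bias this grouping with attribute-derived hidden states instead of leaving it fully unsupervised.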

  20. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
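Predictive encoding models of this kind are typically fit as regularized linear maps from network-layer features to measured cortical responses. A bare-bones ridge-regression sketch of that step (hypothetical names; the study's actual pipeline is considerably more involved):

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from deep-network features (T x F, one row per
    movie frame/time point) to cortical responses (T x V, one column per
    voxel). Returns the (F x V) weight matrix."""
    F = features.shape[1]
    W = np.linalg.solve(features.T @ features + alpha * np.eye(F),
                        features.T @ responses)
    return W
```

Once fit, `features_new @ W` predicts responses to held-out stimuli, and prediction accuracy per voxel is what lets such studies map object representations across the cortex.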

  1. Experience-driven formation of parts-based representations in a model of layered visual memory

    Directory of Open Access Journals (Sweden)

    Jenia Jitsev

    2009-09-01

    Full Text Available Growing neuropsychological and neurophysiological evidence suggests that the visual cortex uses parts-based representations to encode, store and retrieve relevant objects. In such a scheme, objects are represented as a set of spatially distributed local features, or parts, arranged in stereotypical fashion. To encode the local appearance and to represent the relations between the constituent parts, there has to be an appropriate memory structure formed by previous experience with visual objects. Here, we propose a model of how a hierarchical memory structure supporting efficient storage and rapid recall of parts-based representations can be established by an experience-driven process of self-organization. The process is based on the collaboration of slow bidirectional synaptic plasticity and homeostatic unit activity regulation, both running on top of fast activity dynamics with winner-take-all character modulated by an oscillatory rhythm. These neural mechanisms lay down the basis for cooperation and competition between the distributed units and their synaptic connections. Choosing human face recognition as a test task, we show that, under the condition of open-ended, unsupervised incremental learning, the system is able to form memory traces for individual faces in a parts-based fashion. On a lower memory layer the synaptic structure is developed to represent local facial features and their interrelations, while the identities of different persons are captured explicitly on a higher layer. An additional property of the resulting representations is the sparseness of both the activity during the recall and the synaptic patterns comprising the memory traces.

  2. V4 activity predicts the strength of visual short-term memory representations.

    Science.gov (United States)

    Sligte, Ilja G; Scholte, H Steven; Lamme, Victor A F

    2009-06-10

    Recent studies have shown the existence of a form of visual memory that lies intermediate between iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate visual store, we believe that it reflects a weak form of VSTM with high capacity that exists alongside a strong but capacity-limited form of VSTM. In the present study, we isolated brain activity related to weak and strong VSTM representations using functional magnetic resonance imaging. We found that activity in visual cortical area V4 predicted the strength of VSTM representations; activity was low when there was no VSTM, medium when there was a weak VSTM representation regardless of whether this weak representation was available for report or not, and high when there was a strong VSTM representation. Altogether, this study suggests that the high capacity yet weak VSTM store is represented in visual parts of the brain. Presumably, only some of these VSTM traces are amplified by parietal and frontal regions and as a consequence reside in traditional or strong VSTM. The additional weak VSTM representations remain available for conscious access and report when attention is redirected to them yet are overwritten as soon as new visual stimuli hit the eyes.

  3. Crowding in Visual Working Memory Reveals Its Spatial Resolution and the Nature of Its Representations.

    Science.gov (United States)

    Tamber-Rosenau, Benjamin J; Fintzi, Anat R; Marois, René

    2015-09-01

    Spatial resolution fundamentally limits any image representation. Although this limit has been extensively investigated for perceptual representations by assessing how neighboring flankers degrade the perception of a peripheral target with visual crowding, the corresponding limit for representations held in visual working memory (VWM) is unknown. In the present study, we evoked crowding in VWM and directly compared resolution in VWM and perception. Remarkably, the spatial resolution of VWM proved to be no worse than that of perception. However, mixture modeling of errors caused by crowding revealed the qualitatively distinct nature of these representations. Perceptual crowding errors arose from both increased imprecision in target representations and substitution of flankers for targets. By contrast, VWM crowding errors arose exclusively from substitutions, which suggests that VWM transforms analog perceptual representations into discrete items. Thus, although perception and VWM share a common resolution limit, exceeding this limit reveals distinct mechanisms for perceiving images and holding them in mind. © The Author(s) 2015.
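
The imprecision-versus-substitution logic behind the mixture modeling can be sketched with simulated report data; this is a simplified hard-assignment stand-in for the probabilistic mixture model the authors used, with invented target/flanker orientations, substitution rate, and noise level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented orientations (degrees) for a peripheral target and one flanker.
target, flanker = 0.0, 60.0
n = 1000

# Simulate crowded reports: 30% flanker substitutions, the rest noisy
# reports of the target (imprecision sd of 8 degrees).
is_sub = rng.random(n) < 0.3
resp = np.where(is_sub, flanker, target) + rng.normal(0.0, 8.0, n)

# Hard-assignment stand-in for mixture modelling: attribute each report
# to the nearer item, then read off substitution rate and imprecision.
assigned_sub = np.abs(resp - flanker) < np.abs(resp - target)
sub_rate = assigned_sub.mean()
imprecision = resp[~assigned_sub].std()
```

On this account, a perception-like pattern would show both an elevated `imprecision` and a nonzero `sub_rate`, whereas the VWM crowding result corresponds to errors loading only on the substitution component.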

  4. Realistic versus Schematic Interactive Visualizations for Learning Surveying Practices: A Comparative Study

    Science.gov (United States)

    Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen

    2014-01-01

    Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams, or more realistic ones, such as virtual reality. The study reported in the paper…

  5. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed successfully for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by problematic factors (such as illumination and expression variations) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
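
As background for the filter-learning objective, a minimal LBP feature extractor can be sketched in plain NumPy. The learned filtering step itself is omitted, and the 8-neighbour sampling order below is one conventional choice, not necessarily the paper's:

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior pixels.
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    return code

# A 256-bin LBP code histogram is the kind of per-image feature vector
# over which the representation residuals would then be computed.
rng = np.random.default_rng(0)
face = rng.random((32, 32))
feature = np.bincount(lbp(face).ravel(), minlength=256)
```

In the paper's scheme, such histograms would be computed on *filtered* images, with the filter trained to shrink within-class and stretch between-class residuals.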

  6. Visual recognition and inference using dynamic overcomplete sparse learning.

    Science.gov (United States)

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
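
A minimal version of the sparse-coding preprocessing stage can be sketched with ISTA (iterative soft thresholding). The dictionary here is random rather than learned, so this illustrates only the coding step, under invented sizes and sparsity:

```python
import numpy as np

def ista(x, D, lam=0.05, steps=200):
    """Sparse-code x over dictionary D by iterative soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a -= D.T @ (D @ a - x) / L           # gradient step on 0.5*||x - D a||^2
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64))                # 2x overcomplete random dictionary
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
a_true = np.zeros(64)
a_true[[3, 40]] = [1.0, -0.8]
x = D @ a_true                               # signal built from two atoms
a = ista(x, D)                               # recovered sparse code
```

In the paper's pipeline, a learned (rather than random) overcomplete dictionary plays the role of `D`, and the resulting sparse codes feed the multilayer inference network.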

  7. Visual Representation in GENESIS as a tool for Physical Modeling, Sound Synthesis and Musical Composition

    OpenAIRE

    Villeneuve, Jérôme; Cadoz, Claude; Castagné, Nicolas

    2015-01-01

    The motivation of this paper is to highlight the importance of visual representations for artists when modeling and simulating mass-interaction physical networks in the context of sound synthesis and musical composition. GENESIS is a musician-oriented software environment for sound synthesis and musical composition. However, despite this orientation, a substantial amount of effort has been put into building a rich variety of tools based on static or dynamic visual representations of models an...

  8. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  9. Effects of prior knowledge on learning from different compositions of representations in a mobile learning environment

    NARCIS (Netherlands)

    T.-C. Liu (Tzu-Chien); Y.-C. Lin (Yi-Chun); G.W.C. Paas (Fred)

    2014-01-01

    Two experiments examined the effects of prior knowledge on learning from different compositions of multiple representations in a mobile learning environment on plant leaf morphology for primary school students. Experiment 1 compared the learning effects of a mobile learning environment

  10. Representation of Coordination Mechanisms in IMS Learning Design to Support Group-based Learning

    NARCIS (Netherlands)

    Miao, Yongwu; Burgos, Daniel; Griffiths, David; Koper, Rob

    2007-01-01

    Miao, Y., Burgos, D., Griffiths, D., & Koper, R. (2008). Representation of Coordination Mechanisms in IMS Learning Design to Support Group-based Learning. In L. Lockyer, S. Bennet, S. Agostinho & B. Harper (Eds.), Handbook of Research on Learning Design and Learning Objects: Issues, Applications and

  11. Children's Learning from Touch Screens: A Dual Representation Perspective.

    Science.gov (United States)

    Sheehan, Kelly J; Uttal, David H

    2016-01-01

    Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium - that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children's learning from touch screens, we offer the perspective that children's ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children's developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children's ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children's cognitive abilities when assessing whether young children can learn from it as a symbolic medium.

  12. Motor sequence learning occurs despite disrupted visual and proprioceptive feedback

    Directory of Open Access Journals (Sweden)

    Boyd Lara A

    2008-07-01

    Background: Recent work has demonstrated the importance of proprioception for the development of internal representations of the forces encountered during a task. Evidence also exists for a significant role for proprioception in the execution of sequential movements. However, little work has explored the role of proprioceptive sensation during the learning of continuous movement sequences. Here, we report that the repeated segment of a continuous tracking task can be learned despite peripherally altered arm proprioception and severely restricted visual feedback regarding motor output. Methods: Healthy adults practiced a continuous tracking task over 2 days. Half of the participants experienced vibration that altered proprioception of shoulder flexion/extension of the active tracking arm (experimental condition) and half experienced vibration of the passive resting arm (control condition). Visual feedback was restricted for all participants. Retention testing was conducted on a separate day to assess motor learning. Results: Regardless of vibration condition, participants learned the repeated segment, as demonstrated by significant improvements in accuracy for tracking repeated as compared to random continuous movement sequences. Conclusion: These results suggest that, with practice, participants were able to use residual afferent information to overcome initial interference of tracking ability related to altered proprioception and restricted visual feedback, and so learn a continuous motor sequence. Motor learning occurred despite an initial interference of tracking noted during acquisition practice.

  13. Learning a New Selection Rule in Visual and Frontal Cortex.

    Science.gov (United States)

    van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R

    2016-08-01

    How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target connected to the relevant icon with a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time and they reveal the contribution of visual and frontal cortex to the learning process. © The Author 2016. Published by Oxford University Press.

  14. Implicit visual learning and the expression of learning.

    Science.gov (United States)

    Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael

    2013-03-01

    Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous. Some researchers have observed perceptual learning whereas other authors have not. The review of the literature provides different reasons to explain this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty to express this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT-benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. The Representation of Object Viewpoint in Human Visual Cortex

    OpenAIRE

    Andresen, David R.; Vinberg, Joakim; Grill-Spector, Kalanit

    2008-01-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0°–180°), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradi...

  16. Visual Descriptor Learning for Predicting Grasping Affordances

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang

    2016-01-01

    by the task of grasping unknown objects given visual sensor information. The contributions from this thesis stem from three works that all relate to the task of grasping unknown objects, but with particular focus on the visual representation part of the problem. First, an investigation of a visual feature space consisting of surface features was performed. Dimensions in the visual space were varied and the effects were evaluated with the task of grasping unknown objects. The evaluation was performed using a novel probabilistic grasp prediction approach based on neighbourhood analysis. The resulting success rates for predicting grasps were between 75% and 90% depending on the object class. The investigations also provided insights into the importance of selecting a proper visual feature space when utilising it for predicting affordances. As a consequence of the gained insights, a semi-local surface feature, the Sliced…

  17. Visual Representations in Mathematics Teaching: An Experiment with Students

    Science.gov (United States)

    Debrenti, Edith

    2015-01-01

    General problem-solving skills are of central importance in school mathematics achievement. Word problems play an important role not just in mathematical education, but in general education as well. Meaningful learning and understanding are basic aspects of all kinds of learning and it is even more important in the case of learning mathematics. In…

  18. Evidence for optimal integration of visual feature representations across saccades

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  19. Unsupervised learning of a steerable basis for invariant image representations

    Science.gov (United States)

    Bethge, Matthias; Gerwinn, Sebastian; Macke, Jakob H.

    2007-02-01

    There are two aspects to unsupervised learning of invariant representations of images: First, we can reduce the dimensionality of the representation by finding an optimal trade-off between temporal stability and informativeness. We show that the answer to this optimization problem is generally not unique so that there is still considerable freedom in choosing a suitable basis. Which of the many optimal representations should be selected? Here, we focus on this second aspect, and seek to find representations that are invariant under geometrical transformations occurring in sequences of natural images. We utilize ideas of 'steerability' and Lie groups, which have been developed in the context of filter design. In particular, we show how an anti-symmetric version of canonical correlation analysis can be used to learn a full-rank image basis which is steerable with respect to rotations. We provide a geometric interpretation of this algorithm by showing that it finds the two-dimensional eigensubspaces of the average bivector. For data which exhibits a variety of transformations, we develop a bivector clustering algorithm, which we use to learn a basis of generalized quadrature pairs (i.e. 'complex cells') from sequences of natural images.
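
The "two-dimensional eigensubspaces of the average bivector" can be illustrated on a toy antisymmetric matrix (an invented stand-in for the average bivector estimated from image sequences): the real and imaginary parts of a complex eigenvector span a plane in which the matrix generates a pure planar rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the "average bivector": a random antisymmetric matrix.
M = rng.normal(size=(6, 6))
A = M - M.T

# Eigenvalues of a real antisymmetric matrix come in conjugate pairs
# +/- i*omega; for eigenvector z = u + i*v with eigenvalue i*omega,
# A u = -omega*v and A v = omega*u, so span{u, v} is a rotation plane.
w, V = np.linalg.eig(A)
k = int(np.argmax(w.imag))
omega = w[k].imag
u, v = V[:, k].real, V[:, k].imag
```

Collecting these planes across all eigenvalue pairs yields a basis that is steerable with respect to the corresponding rotations, which is the structure the clustering algorithm above exploits.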

  20. Feature and Region Selection for Visual Learning.

    Science.gov (United States)

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
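
The idea of jointly optimizing latent feature weights with a classifier can be sketched in a much-simplified linear form; plain gradient descent and a simplex constraint stand in for the paper's reduced-gradient method and kernel machinery, and the data (with a single invented informative feature) is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])                 # labels depend on feature 0 only

s = np.ones(d) / d                   # latent feature weights ("gates")
w = np.zeros(d)                      # linear classifier weights
lr = 0.5
for _ in range(300):
    Xs = X * s                       # gated features
    g = -y / (1.0 + np.exp(y * (Xs @ w)))    # logistic-loss gradient factor
    w -= lr * (g[:, None] * Xs).mean(0)      # update classifier
    s -= lr * (g[:, None] * X * w).mean(0)   # update gates jointly
    s = np.maximum(s, 0.0)
    s /= s.sum()                     # keep gates non-negative, summing to 1
```

After training, the gate vector `s` concentrates on the discriminative feature, which is the kind of intermediate visualization of feature importance the paper advocates.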

  1. Learning QlikView data visualization

    CERN Document Server

    Pover, Karl

    2013-01-01

    A practical and fast-paced guide that gives you all the information you need to start developing charts from your data. Learning QlikView Data Visualization is for anybody interested in performing powerful data analysis and crafting insightful data visualizations, independent of any previous knowledge of QlikView. Experience with spreadsheet software will help you understand QlikView functions.

  2. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  3. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. 
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations
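
The recruitment rule described above (new representational resources whenever the mismatch with existing category nodes is large enough) resembles vigilance-based categorization. A toy sketch, with invented stimuli and a cosine-similarity vigilance threshold standing in for the model's feedback-mismatch computation:

```python
import numpy as np

def categorize(stimuli, vigilance=0.9, lr=0.3):
    """Recruit a new prototype whenever no stored prototype matches the
    normalized input above the vigilance threshold; otherwise refine
    the best-matching prototype."""
    protos, labels = [], []
    for x in stimuli:
        x = x / np.linalg.norm(x)
        sims = [float(p @ x) for p in protos]
        if sims and max(sims) >= vigilance:
            k = int(np.argmax(sims))
            protos[k] = protos[k] + lr * (x - protos[k])  # refine the winner
            protos[k] /= np.linalg.norm(protos[k])
        else:
            protos.append(x)                  # recruit a new category node
            k = len(protos) - 1
        labels.append(k)
    return protos, labels

rng = np.random.default_rng(0)
a = np.array([1.0, 0.0, 0.0, 0.0])            # invented category prototypes
b = np.array([0.0, 1.0, 0.0, 0.0])
stimuli = [a + 0.05 * rng.normal(size=4) for _ in range(5)] + \
          [b + 0.05 * rng.normal(size=4) for _ in range(5)]
protos, labels = categorize(stimuli)
```

Raising the vigilance threshold makes the system split categories into finer sub-categories, which is one way to read the category-refinement behaviour of the full model.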

  4. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg eLayher

    2014-12-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory

  5. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    Science.gov (United States)

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  6. Visual representation of gender in flood coverage of Pakistani print media

    Directory of Open Access Journals (Sweden)

    Zarqa S. Ali

    2014-08-01

    This paper studies gender representation in the visual coverage of the 2010 floods in Pakistan. The data were collected from flood visuals published in the most circulated mainstream English newspapers in Pakistan, Dawn and The News. This study analyses how gender has been framed in the flood visuals. It is argued that visual representation of gender reinforces the gender stereotypes and cultural norms of Pakistani society. The gender-oriented flood coverage in both newspapers frequently took a reductionist approach, confining the representation of women to gender and gender-specific roles. Though the gender-sensitive coverage typically showed women as helpless victims of the flood, it aroused sentiments of sympathy among readers and donors, inspiring them to give immediate moral and material help to the affected people. This media agenda may have exploited the politics of sympathy, but it also had the effect of endorsing gender stereotypes.

  7. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
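    The hierarchy of Self-Organising Maps described above can be illustrated with a single-layer update step. This is a minimal sketch assuming a 1-D map with Euclidean matching; the function name, learning rate, and neighbourhood width are illustrative choices, not VisNet's actual implementation:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One self-organising-map update: find the best-matching unit (BMU),
    then pull every unit's weights toward the input, scaled by a Gaussian
    neighbourhood centred on the BMU."""
    dists = np.linalg.norm(weights - x, axis=1)        # distance of each unit to input
    bmu = int(np.argmin(dists))                        # index of best-matching unit
    grid = np.arange(weights.shape[0])                 # 1-D map coordinates
    h = np.exp(-((grid - bmu) ** 2) / (2 * sigma**2))  # neighbourhood function
    return weights + lr * h[:, None] * (x - weights), bmu

rng = np.random.default_rng(0)
W = rng.random((10, 4))                                # 10 map units, 4-D inputs
for _ in range(200):
    W, _ = som_step(W, rng.random(4))                  # self-organise on random inputs
```

    In VisNet-style architectures, the outputs of one such layer feed the next, so nearby units come to respond to statistically related inputs at increasing levels of abstraction.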

  8. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  9. Perceptual learning modifies the functional specializations of visual cortical areas.

    Science.gov (United States)

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  10. The representation of object viewpoint in human visual cortex.

    Science.gov (United States)

    Andresen, David R; Vinberg, Joakim; Grill-Spector, Kalanit

    2009-04-01

    Understanding the nature of object representations in the human brain is critical for understanding the neural basis of invariant object recognition. However, the degree to which object representations are sensitive to object viewpoint is unknown. Using fMRI we employed a parametric approach to examine the sensitivity to object view as a function of rotation (0-180 degrees), category (animal/vehicle) and fMRI-adaptation paradigm (short or long-lagged). For both categories and fMRI-adaptation paradigms, object-selective regions recovered from adaptation when a rotated view of an object was shown after adaptation to a specific view of that object, suggesting that representations are sensitive to object rotation. However, we found evidence for differential representations across categories and ventral stream regions. Rotation cross-adaptation was larger for animals than vehicles, suggesting higher sensitivity to vehicle than animal rotation, and was largest in the left fusiform/occipito-temporal sulcus (pFUS/OTS), suggesting that this region has low sensitivity to rotation. Moreover, right pFUS/OTS and FFA responded more strongly to front than back views of animals (without adaptation) and rotation cross-adaptation depended both on the level of rotation and the adapting view. This result suggests a prevalence of neurons that prefer frontal views of animals in fusiform regions. Using a computational model of view-tuned neurons, we demonstrate that differential neural view tuning widths and relative distributions of neural-tuned populations in fMRI voxels can explain the fMRI results. Overall, our findings underscore the utility of parametric approaches for studying the neural basis of object invariance and suggest that there is no complete invariance to object view in the human ventral stream.
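    The view-tuned-neuron account can be sketched with Gaussian tuning curves: broader tuning widths produce more overlap between the population responses to the adapted and rotated views, and hence more cross-adaptation. The preferred views and tuning widths below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def population_response(view_deg, preferred_deg, sigma_deg):
    """Responses of Gaussian view-tuned neurons, using circular distance
    so that views 350 deg and 10 deg are treated as 20 deg apart."""
    d = np.abs(view_deg - preferred_deg)
    d = np.minimum(d, 360 - d)
    return np.exp(-d**2 / (2 * sigma_deg**2))

def cross_adaptation(rotation_deg, sigma_deg, prefs=np.arange(0, 360, 30)):
    """Cosine overlap between population responses to the adapted view
    (0 deg) and a rotated view: higher overlap means more cross-adaptation."""
    a = population_response(0, prefs, sigma_deg)
    b = population_response(rotation_deg, prefs, sigma_deg)
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

narrow = cross_adaptation(60, sigma_deg=20)   # narrowly tuned population
broad = cross_adaptation(60, sigma_deg=90)    # broadly tuned population
```

    Under this sketch a broadly tuned population shows more cross-adaptation at the same rotation, which is the kind of tuning-width difference the modelling in the abstract invokes to explain category differences.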

  11. Forms of Memory for Representation of Visual Objects

    Science.gov (United States)

    1991-04-15

    neuropsychological syndromes that involve disruption of perceptual representation systems should pay rich dividends for implicit memory research (Schacter et al.).

  12. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules that guarantee their respective non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively.
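    The multiplicative update mechanism that preserves non-negativity can be illustrated with the classic Lee-Seung NMF updates. OMRNDL's actual objective (particle filtering, M-estimation loss, online updates) is more elaborate, so this is only a sketch of why multiplying by ratios of non-negative terms never produces negative entries:

```python
import numpy as np

def nmf_multiplicative(V, k, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~= W @ H with W, H >= 0.
    Each factor is multiplied element-wise by a ratio of non-negative
    terms, so non-negativity is preserved without any projection step."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update dictionary atoms
    return W, H

V = np.random.default_rng(1).random((20, 30))  # stand-in for image features
W, H = nmf_multiplicative(V, k=5)
```

    In the tracking setting described above, the columns of W play the role of the per-modality template dictionary and the columns of H the per-particle representation coefficients.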

  13. EMR-based medical knowledge representation and inference via Markov random fields and distributed representation learning.

    Science.gov (United States)

    Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin

    2018-05-01

    Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks-test recommendation, initial diagnosis, and treatment plan recommendation-given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of the bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods. The distributed representation of medical entities does reflect similarity relationships with respect to knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference. Different tasks, however, require individually designed energy functions. Copyright © 2018 Elsevier B.V. All rights reserved.
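    The bipartite co-occurrence graph and a graph-based energy can be illustrated on toy records. The energy below (negative co-occurrence count) is a deliberately simplified stand-in for the paper's energy functions, and the records and entity names are fabricated for illustration:

```python
from collections import Counter

# Toy EMR records: (observed symptoms, diagnosed disease) -- fabricated data
records = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough", "wheeze"}, "asthma"),
    ({"fever", "cough", "fatigue"}, "flu"),
]

# Bipartite co-occurrence graph: edge weight = number of shared records
edges = Counter()
for symptoms, disease in records:
    for s in symptoms:
        edges[(s, disease)] += 1

def energy(symptoms, disease):
    """Toy graph-based energy: lower (more negative) when the candidate
    disease co-occurs frequently with the observed symptoms."""
    return -sum(edges[(s, disease)] for s in symptoms)

# Inference = picking the minimum-energy assignment of the unknown part
diseases = {d for _, d in records}
best = min(diseases, key=lambda d: energy({"fever", "cough"}, d))
```

    The same bigraph-plus-energy pattern applies to the other two tasks (test recommendation, treatment planning) by swapping which side of the bigraph is observed and which is inferred.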

  14. Conceptual Understanding and Representation Quality through Multi-representation Learning on Newton Law Content

    Directory of Open Access Journals (Sweden)

    Suci Furwati

    2017-08-01

    Full Text Available Abstract: Students who have good conceptual acquisition will be able to represent a concept using multiple representations. This study aims to determine the improvement in junior high school students' conceptual understanding of Newton's Law material, and the quality of the representations they use in solving problems on that material. The results showed that students' concept acquisition increased from an average of 35.32 to 78.97, with an effect size of 2.66 (strong) and an N-gain of 0.68 (medium). The quality of each type of student representation also increased, from levels 1 and 2 up to level 3. Key Words: concept acquisition, representation quality, multi-representation learning, Newton's Law
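    The reported N-gain follows from the pre- and post-test means via Hake's normalized gain (the effect size additionally requires standard deviations, which the abstract does not give):

```python
pre_mean, post_mean = 35.32, 78.97   # means reported in the abstract

# Hake's normalized gain: fraction of the possible improvement achieved,
# assuming a maximum score of 100
n_gain = (post_mean - pre_mean) / (100 - pre_mean)
# n_gain comes out around 0.67-0.68, the "medium" gain reported
```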

  15. Deep generative learning of location-invariant visual word recognition.

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words (which was the model's learning objective) is largely based on letter-level information.

  16. Deep generative learning of location-invariant visual word recognition

    Directory of Open Access Journals (Sweden)

    Maria Grazia eDi Bono

    2013-09-01

    Full Text Available It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centred (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Conversely, there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words – which was the model's learning objective – is largely based on letter-level information.

  17. Deep generative learning of location-invariant visual word recognition

    Science.gov (United States)

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words (which was the model's learning objective) is largely based on letter-level information.
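    The linear-decoding analysis used in this record can be sketched as a least-squares readout from hidden-layer activations. The activations below are synthetic stand-ins, and the one-hot regression decoder is a simplifying assumption rather than the authors' exact classifier:

```python
import numpy as np

def linear_decode_accuracy(activations, labels):
    """Fit a least-squares linear readout from layer activations to
    one-hot class labels, then score argmax predictions (here on the
    same data, for brevity)."""
    n_classes = labels.max() + 1
    Y = np.eye(n_classes)[labels]                            # one-hot targets
    X = np.hstack([activations, np.ones((len(labels), 1))])  # append bias column
    Wt, *_ = np.linalg.lstsq(X, Y, rcond=None)
    pred = (X @ Wt).argmax(axis=1)
    return (pred == labels).mean()

# Synthetic "deep layer" activations: well-separated class clusters
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(5), 20)                 # 5 "words", 20 samples each
acts = np.eye(5)[labels] * 3 + rng.normal(0, 0.3, (100, 5))
acc = linear_decode_accuracy(acts, labels)
```

    Running the same readout on a shallower layer's activations (noisier, less word-selective) would yield lower accuracy, which is the depth effect the abstract reports.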

  18. Crystal structure representations for machine learning models of formation energies

    Energy Technology Data Exchange (ETDEWEB)

    Faber, Felix [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Lindmaa, Alexander [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden; von Lilienfeld, O. Anatole [Department of Chemistry, Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, University of Basel Switzerland; Argonne Leadership Computing Facility, Argonne National Laboratory, 9700 S. Cass Avenue Lemont Illinois 60439; Armiento, Rickard [Department of Physics, Chemistry and Biology, Linköping University, SE-581 83 Linköping Sweden

    2015-04-20

    We introduce and evaluate a set of feature vector representations of crystal structures for machine learning (ML) models of formation energies of solids. ML models of atomization energies of organic molecules have been successful using a Coulomb matrix representation of the molecule. We consider three ways to generalize such representations to periodic systems: (i) a matrix where each element is related to the Ewald sum of the electrostatic interaction between two different atoms in the unit cell repeated over the lattice; (ii) an extended Coulomb-like matrix that takes into account a number of neighboring unit cells; and (iii) an ansatz that mimics the periodicity and the basic features of the elements in the Ewald sum matrix using a sine function of the crystal coordinates of the atoms. The representations are compared for a Laplacian kernel with Manhattan norm, trained to reproduce formation energies using a dataset of 3938 crystal structures obtained from the Materials Project. For training sets consisting of 3000 crystals, the generalization error in predicting formation energies of new structures corresponds to (i) 0.49, (ii) 0.64, and (iii) 0.37 eV/atom for the respective representations.
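    The Laplacian kernel with Manhattan norm plugs into standard kernel ridge regression. In this sketch the feature vectors are random stand-ins for the crystal representations, and `sigma` and the regularizer `lam` are illustrative values, not the paper's fitted hyperparameters:

```python
import numpy as np

def laplacian_kernel(A, B, sigma):
    """K(x, x') = exp(-||x - x'||_1 / sigma): Laplacian kernel with
    the Manhattan (L1) norm."""
    d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=-1)
    return np.exp(-d1 / sigma)

def krr_fit_predict(X, y, Xq, sigma=1.0, lam=1e-8):
    """Kernel ridge regression: solve (K + lam*I) alpha = y on the
    training set, then predict with the query-to-training kernel."""
    K = laplacian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return laplacian_kernel(Xq, X, sigma) @ alpha

# Random feature vectors standing in for crystal-structure representations
rng = np.random.default_rng(0)
X = rng.random((50, 8))
y = X.sum(axis=1)                     # toy "formation energy" target
pred = krr_fit_predict(X, y, X[:5])   # near-interpolation at training points
```

    The three crystal representations in the abstract would correspond to three different ways of building the rows of `X`; the kernel and regression machinery stay the same.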

  19. The comparison of visual working memory representations with perceptual inputs.

    Science.gov (United States)

    Hyun, Joo-seok; Woodman, Geoffrey F; Vogel, Edward K; Hollingworth, Andrew; Luck, Steven J

    2009-08-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments in which manual reaction times, saccadic reaction times, and event-related potential latencies were examined. However, these experiments also showed that a slow, limited-capacity process must occur before the observer can make a manual change detection response.

  20. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  1. Representation of visual gravitational motion in the human vestibular cortex.

    Science.gov (United States)

    Indovina, Iole; Maffei, Vincenzo; Bosco, Gianfranco; Zago, Myrka; Macaluso, Emiliano; Lacquaniti, Francesco

    2005-04-15

    How do we perceive the visual motion of objects that are accelerated by gravity? We propose that, because vision is poorly sensitive to accelerations, an internal model that calculates the effects of gravity is derived from graviceptive information, is stored in the vestibular cortex, and is activated by visual motion that appears to be coherent with natural gravity. The acceleration of visual targets was manipulated while brain activity was measured using functional magnetic resonance imaging. In agreement with the internal model hypothesis, we found that the vestibular network was selectively engaged when acceleration was consistent with natural gravity. These findings demonstrate that predictive mechanisms of physical laws of motion are represented in the human brain.

  2. Children's Learning from Touch Screens: A Dual Representation perspective

    Directory of Open Access Journals (Sweden)

    Kelly Jean Sheehan

    2016-08-01

    Full Text Available Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium – that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., a picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children's learning from touch screens, we offer the perspective that children's ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children's developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children's ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children's cognitive abilities when assessing whether young children can learn from it as a symbolic medium.

  3. A visual representation of Chiriguano in Torino missionary exposition, 1898

    Directory of Open Access Journals (Sweden)

    Pilar García Jordán

    2016-12-01

    Full Text Available The Catholic Church used missionary exhibitions in the nineteenth and twentieth centuries to promote its contributions to thought, art and culture, and to spread the usefulness of the institution in building a modern and civilized society. This is a study of the representation of a native group settled in the present departments of Chuquisaca, Tarija and Santa Cruz (Bolivia), developed from the collection of photos of the missions among the Chiriguano sent by Fr. Doroteo Giannecchini to the Esposizione d'Arte Sacra e delle Missioni e delle Opere Cattoliche (Torino, 1898), and from the article dedicated to that collection by Amalia Capello in Arte Sacra (1898).

  4. Representations of Mathematics, their teaching and learning: an exploratory study

    Directory of Open Access Journals (Sweden)

    Maria Margarida Graça

    2004-03-01

    Full Text Available This work describes an exploratory study, the first of the four phases of a more inclusive research project, which aims at understanding how to promote, in a group of Mathematics teachers, a representational evolution leading to a practice that allows meaningful learning of Mathematics. The methodology of this study is qualitative. Data gathering was based on questioning; all the subjects of the sample (n=48) carried out a projective task (a hierarchical evocation test) and answered a written individual questionnaire. Data analysis was based on a set of categories previously defined. The main purpose of this research was to identify, characterize and describe the representations of Mathematics and of its teaching and learning in a group of 48 subjects from different social groups, in order to obtain indicators for the construction of the instruments to be used in the next phases of the research. The main results of this study are the following: (1) we were able to identify and characterize different representations of the teaching and learning of Mathematics, with respect to their epistemological, pedagogical, emotional and sociocultural dimensions; (2) we were also able to identify limitations, difficulties and items to be included or rephrased in the instruments used.

  5. Neuronal representations of stimulus associations develop in the temporal lobe during learning.

    Science.gov (United States)

    Messinger, A; Squire, L R; Zola, S M; Albright, T D

    2001-10-09

    Visual stimuli that are frequently seen together become associated in long-term memory, such that the sight of one stimulus readily brings to mind the thought or image of the other. It has been hypothesized that acquisition of such long-term associative memories proceeds via the strengthening of connections between neurons representing the associated stimuli, such that a neuron initially responding only to one stimulus of an associated pair eventually comes to respond to both. Consistent with this hypothesis, studies have demonstrated that individual neurons in the primate inferior temporal cortex tend to exhibit similar responses to pairs of visual stimuli that have become behaviorally associated. In the present study, we investigated the role of these areas in the formation of conditional visual associations by monitoring the responses of individual neurons during the learning of new stimulus pairs. We found that many neurons in both area TE and perirhinal cortex came to elicit more similar neuronal responses to paired stimuli as learning proceeded. Moreover, these neuronal response changes were learning-dependent and proceeded with an average time course that paralleled learning. This experience-dependent plasticity of sensory representations in the cerebral cortex may underlie the learning of associations between objects.
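    The hypothesized strengthening mechanism can be sketched as a normalized Hebbian update, under which a unit initially driven only by stimulus A comes to respond to its paired stimulus B. The stimulus patterns, learning rate, and normalization step are all illustrative assumptions, not the paper's model:

```python
import numpy as np

# Two paired stimuli as one-hot patterns; the unit starts out
# responding only to stimulus A.
A = np.array([1.0, 0.0])
B = np.array([0.0, 1.0])
w = np.array([1.0, 0.0])              # initial weights: A-selective

lr = 0.05
for _ in range(50):                   # repeated paired presentations
    x = A + B                         # the two stimuli co-occur
    y = w @ x                         # postsynaptic response
    w += lr * y * x                   # Hebbian strengthening
    w /= np.linalg.norm(w)            # normalization keeps weights bounded

resp_B = w @ B                        # response to B alone after pairing
```

    After pairing, the unit responds substantially to B presented alone, mirroring the neurons in TE and perirhinal cortex whose responses to paired stimuli converged as learning proceeded.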

  6. Ambiguous Science and the Visual Representation of the Real

    Science.gov (United States)

    Newbold, Curtis Robert

    2012-01-01

    The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in…

  7. Visual Representations of Academic Misconduct: Enhancing Information Literacy Skills

    Science.gov (United States)

    Ivancic, Sonia R.; Hosek, Angela M.

    2017-01-01

    Courses: This unit activity is suited for courses with research and source citation components, such as the Basic Communication, Interpersonal, and Organizational Communication courses. Objectives: Students will (a) visually interpret and analyze instances of plagiarism; (b) revise their work to use proper citations and reduce instances of…

  8. Visual Metaphors in the Representation of Communication Technology.

    Science.gov (United States)

    Kaplan, Stuart Jay

    1990-01-01

    Examines the role of metaphors (particularly visual metaphors) in communicating social values associated with new communication technology by analyzing magazine advertisements for computing and advanced telecommunications products and services. Finds that the "lever" and the "synthesis of old and new values" metaphors are dominant in both general…

  9. A review of visual memory capacity: Beyond individual items and towards structured representations

    OpenAIRE

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2011-01-01

    Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and struct...

  10. The Effect of Visual-Chunking-Representation Accommodation on Geometry Testing for Students with Math Disabilities

    Science.gov (United States)

    Zhang, Dake; Ding, Yi; Stegall, Joanna; Mo, Lei

    2012-01-01

    Students who struggle with learning mathematics often have difficulties with geometry problem solving, which requires strong visual imagery skills. These difficulties have been correlated with deficiencies in visual working memory. Cognitive psychology has shown that chunking of visual items accommodates students' working memory deficits. This…

  11. Digital representations of the real world how to capture, model, and render visual reality

    CERN Document Server

    Magnor, Marcus A; Sorkine-Hornung, Olga; Theobalt, Christian

    2015-01-01

    Create Genuine Visual Realism in Computer Graphics. Digital Representations of the Real World: How to Capture, Model, and Render Visual Reality explains how to portray visual worlds with a high degree of realism using the latest video acquisition technology, computer graphics methods, and computer vision algorithms. It explores the integration of new capture modalities, reconstruction approaches, and visual perception into the computer graphics pipeline. Understand the Entire Pipeline from Acquisition, Reconstruction, and Modeling to Realistic Rendering and Applications. The book covers sensors fo

  12. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    Directory of Open Access Journals (Sweden)

    Fabian Draht

    2017-06-01

    Full Text Available Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  13. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  14. Emerging category representation in the visual forebrain hierarchy of pigeons (Columba livia).

    Science.gov (United States)

    Azizi, Amir Hossein; Pusch, Roland; Koenen, Charlotte; Klatt, Sebastian; Bröcker, Franziska; Thiele, Samuel; Kellermann, Janosch; Güntürkün, Onur; Cheng, Sen

    2018-06-06

    Recognizing and categorizing visual stimuli are cognitive functions vital for survival and an important feature of visual systems in primates as well as in birds. Visual stimuli are processed along the ventral visual pathway. At every stage in the hierarchy, neurons respond selectively to more complex features, transforming the population representation of the stimuli. It is therefore easier to read out category information in higher visual areas. While explicit category representations have been observed in the primate brain, less is known about equivalent processes in the avian brain. Even though their brain anatomies are radically different, it has been hypothesized that visual object representations are comparable across mammals and birds. In the present study, we investigated category representations in the pigeon visual forebrain using recordings from single cells responding to photographs of real-world objects. Using a linear classifier, we found that the population activity in the visual associative area mesopallium ventrolaterale (MVL) distinguishes between animate and inanimate objects, although this distinction is not required by the task. By contrast, a population of cells in the entopallium, a region that is lower in the hierarchy of visual areas and that is related to the primate extrastriate cortex, lacked this information. A model that pools responses of simple cells, which function as edge detectors, can account for the animate vs. inanimate categorization in the MVL, but performance in the model is based on different features than in MVL. Therefore, processing in MVL cells is very likely more abstract than simple computations on the output of edge detectors. Copyright © 2018. Published by Elsevier B.V.
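    A linear read-out of category from population activity, as used in this record, can be sketched with a simple perceptron trained on synthetic population response vectors (the data and dimensions are hypothetical, not the recorded MVL responses):

```python
# Perceptron read-out of an animate (+1) vs. inanimate (-1) category from
# synthetic population response vectors (one entry per recorded unit).

def train_perceptron(data, labels, epochs=20, lr=0.1):
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):         # y is +1 or -1
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:                     # misclassified: update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Synthetic population responses: animate stimuli drive units 0-1,
# inanimate stimuli drive units 2-3.
responses = [[5.0, 4.0, 1.0, 0.5], [6.0, 3.5, 0.8, 1.2],
             [0.9, 1.1, 5.5, 4.2], [1.2, 0.7, 4.8, 5.1]]
labels = [1, 1, -1, -1]

w, b = train_perceptron(responses, labels)
preds = [predict(w, b, x) for x in responses]
```

    If the classifier separates the categories above chance, the population carries explicit category information, which is the logic applied to MVL versus entopallium.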

  15. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    Science.gov (United States)

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. Rapid learning in visual cortical networks.

    Science.gov (United States)

    Wang, Ye; Dragoi, Valentin

    2015-08-26

    Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain mysterious. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization is correlated with higher learning performance, while high-frequency synchronization is unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, or in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
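    Spike-LFP synchronization of the kind reported here is commonly quantified with a phase-locking value (PLV): the magnitude of the mean unit vector of the LFP phases at which spikes occur. A minimal sketch with synthetic phases (the phase lists are illustrative, not recorded data):

```python
# Phase-locking value between spike times and an oscillation:
# PLV = |mean of e^{i*phase}|; 1 = perfect locking, ~0 = no locking.

import cmath
import math

def phase_locking_value(phases):
    n = len(phases)
    return abs(sum(cmath.exp(1j * p) for p in phases) / n)

# Synthetic LFP theta phases (radians) at which spikes occurred.
locked = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2]         # spikes cluster near 0
uniform = [2 * math.pi * k / 8 for k in range(8)]   # spikes spread evenly

plv_locked = phase_locking_value(locked)
plv_uniform = phase_locking_value(uniform)
```

    A neuron whose spikes cluster at a preferred theta phase yields a PLV near 1; spikes spread evenly over the cycle yield a PLV near 0.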

  17. Language in the Mind's Eye: Visual Representations and Language Processing

    NARCIS (Netherlands)

    L. Vandeberg (Lisa)

    2012-01-01

    textabstractMy favorite children’s book was (and still is) Matilda, by Roald Dahl. The story is about Matilda Wormwood, an extraordinarily clever and sweet five year old girl who loves to learn and read. Unfortunately, her unpleasant parents are contemptuous of her inquisitiveness and talent, as is

  18. Problem solving based learning model with multiple representations to improve student's mental modelling ability on physics

    Science.gov (United States)

    Haili, Hasnawati; Maknun, Johar; Siahaan, Parsaoran

    2017-08-01

    Physics is a subject related to students' daily experience. Therefore, before studying it formally in class, students already have a visualization and prior knowledge about natural phenomena and can extend it themselves. The learning process in class should aim to detect, process, construct, and use students' mental models, so that students' mental models agree with and are built on the right concepts. A previous study held in MAN 1 Muna reports that in the learning process the teacher did not pay attention to students' mental models. As a consequence, the learning process had not tried to build students' mental modelling ability (MMA). The purpose of this study is to describe the improvement of students' MMA as an effect of a problem-solving based learning model with a multiple representations approach. This study uses a pre-experimental design with a one-group pretest-posttest. It was conducted in class XI IPA of MAN 1 Muna in 2016/2017. Data collection used a problem-solving test on the concept of the kinetic theory of gases and interviews to assess students' MMA. The result of this study is a classification of students' MMA into three categories: High Mental Modelling Ability (H-MMA) for 7 < x ≤ 10 scores, Medium Mental Modelling Ability (M-MMA) for 3 < x ≤ 7 scores, and Low Mental Modelling Ability (L-MMA) for 0 ≤ x ≤ 3 scores. The results show that a problem-solving based learning model with a multiple representations approach can be an alternative to be applied in improving students' MMA.

  19. Learning a Mid-Level Representation for Multiview Action Recognition

    Directory of Open Access Journals (Sweden)

    Cuiwei Liu

    2018-01-01

    Full Text Available Recognizing human actions in videos is an active topic with broad commercial potential. Most existing action recognition methods assume the same camera view during both training and testing, and thus the performance of these single-view approaches may be severely affected by camera movement and variation of viewpoints. In this paper, we address this problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests to exploit a discriminative mid-level representation for videos from multiple cameras. In the first step, subvolumes of continuous human-centered figures are extracted from the original videos. In the next step, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. Then a set of multitask random forests is built upon multiview cuboids sampled at adjacent positions, constructing an integrated mid-level representation for the multiview subvolumes of one action. Finally, a random forest classifier is employed to predict the action category in terms of the learned representation. Experiments conducted on the multiview IXMAS action dataset illustrate that the proposed method can effectively recognize human actions depicted in multiview videos.
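    The core idea of a tree-based mid-level representation is to re-encode each low-level descriptor by the leaves it reaches in an ensemble of trees. A minimal sketch with hand-picked decision stumps standing in for the paper's multitask random forests (the features and thresholds are hypothetical):

```python
# Re-encoding a low-level descriptor as the tuple of leaf indices it
# reaches in an ensemble of trees; single-split decision stumps stand in
# here for trained random forests (stump parameters are hypothetical).

def stump_leaf(x, feature, threshold):
    return 0 if x[feature] <= threshold else 1

# Hypothetical stumps (feature index, threshold) learned elsewhere.
stumps = [(0, 0.5), (1, 0.2), (2, 0.8)]

def mid_level(x):
    """Mid-level code: one leaf index per tree in the ensemble."""
    return tuple(stump_leaf(x, f, t) for f, t in stumps)

code_a = mid_level([0.9, 0.1, 0.3])   # cuboid descriptor 1
code_b = mid_level([0.2, 0.7, 0.9])   # cuboid descriptor 2
```

    Descriptors from different camera views that land in the same leaves receive the same mid-level code, which is what makes the representation useful across viewpoints.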

  20. Hierarchical representation of shapes in visual cortex - from localized features to figural shape segregation

    Directory of Open Access Journals (Sweden)

    Stephan Tschechne

    2014-08-01

    Full Text Available Visual structures in the environment are effortlessly segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. At this stage, highly articulated changes in shape boundary as well as very subtle curvature changes contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes a hierarchical distributed representation of shape features to encode boundary features over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback from representations generated at higher stages. In so doing, global configurational as well as local information is available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. This combines separate findings about the generation of cortical shape representations using hierarchical representations with figure-ground segregation mechanisms. Our model is probed with a selection of artificial and real-world images to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  1. Heterogeneous iris image hallucination using sparse representation on a learned heterogeneous patch dictionary

    Science.gov (United States)

    Li, Yung-Hui; Zheng, Bo-Ren; Ji, Dai-Yan; Tien, Chung-Hao; Liu, Po-Tsun

    2014-09-01

    Cross-sensor iris matching may seriously degrade recognition performance because of the sensor mismatch between iris images acquired at the enrollment and test stages. In this paper, we propose two novel patch-based heterogeneous dictionary learning methods to attack this problem. The first method applies the latest sparse representation theory, while the second method tries to learn the correspondence relationship through PCA in a heterogeneous patch space. Both methods learn the basic atoms in iris textures across different image sensors and build connections between them. After such connections are built, it becomes possible at the test stage to hallucinate (synthesize) iris images across different sensors. By matching training images with hallucinated images, the recognition rate can be successfully enhanced. The experimental results were satisfactory both visually and in terms of recognition rate. Experimenting with an iris database consisting of 3015 images, we show that the proposed method decreases the EER by 39.4% in relative terms.
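    The equal error rate (EER) quoted above is the operating point where the false accept rate equals the false reject rate. A minimal sketch that scans candidate thresholds over genuine and impostor score lists (the scores below are synthetic, not the paper's data):

```python
# Equal error rate from genuine (match) and impostor (non-match) similarity
# scores. Higher score = more similar; accept when score >= threshold.

def far_frr(genuine, impostor, threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    return far, frr

def equal_error_rate(genuine, impostor):
    best = None
    for t in sorted(set(genuine + impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)   # EER ~ rate where FAR == FRR
    return best[1]

genuine = [0.9, 0.8, 0.85, 0.7, 0.95, 0.6]
impostor = [0.3, 0.4, 0.2, 0.55, 0.35, 0.65]

eer = equal_error_rate(genuine, impostor)
```

    A relative EER reduction of 39.4% then means the new EER is 0.606 times the baseline EER.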

  2. Digital media Experiences for Visual Learning

    DEFF Research Database (Denmark)

    Buhl, Mie

    2013-01-01

    Visual learning is a topic for didactic studies at all levels of education, brought about by an increasing use of digital media. Digital media give rise to discussions of how learning experiences come about from various media resources that generate new learning situations. New situations call for new tools and new theoretical approaches with which to understand them. The article argues that the current phase of social practices and technological development makes it difficult to distinguish between experience with digital media and mediated experiences, because of the use of renegotiation of… …about by the nature of diverse digital artefacts, 3. the learning potentials in using mobile devices for integrating the body in visual perception processes.

  3. Time representation in reinforcement learning models of the basal ganglia

    Directory of Open Access Journals (Sweden)

    Samuel Joseph Gershman

    2014-01-01

    Full Text Available Reinforcement learning models have been influential in understanding many aspects of basal ganglia function, from reward prediction to action selection. Time plays an important role in these models, but there is still no theoretical consensus about what kind of time representation is used by the basal ganglia. We review several theoretical accounts and their supporting evidence. We then discuss the relationship between reinforcement learning models and the timing mechanisms that have been attributed to the basal ganglia. We hypothesize that a single computational system may underlie both reinforcement learning and interval timing—the perception of duration in the range of seconds to hours. This hypothesis, which extends earlier models by incorporating a time-sensitive action selection mechanism, may have important implications for understanding disorders like Parkinson's disease in which both decision making and timing are impaired.
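    The simplest time representation considered in such models is the "complete serial compound" (tapped delay line): each post-cue time step gets its own feature, so TD learning can assign a distinct value to every delay. A minimal sketch, illustrative rather than any specific model from the review:

```python
# TD(0) with a tapped-delay-line ("complete serial compound") time code:
# after a cue at t=0, each time step has its own indicator feature, so the
# value function can learn exactly when a reward (here at t=3) arrives.

T = 5                 # time steps per trial
reward_time = 3
alpha, gamma = 0.3, 0.9
V = [0.0] * T         # one value weight per post-cue time step

for _ in range(200):  # trials
    for t in range(T):
        r = 1.0 if t == reward_time else 0.0
        v_next = V[t + 1] if t + 1 < T else 0.0
        delta = r + gamma * v_next - V[t]   # TD prediction error
        V[t] += alpha * delta

# V now ramps toward the reward: V[0] ~ gamma**3, ..., V[3] ~ 1, V[4] ~ 0.
```

    Because every delay has its own feature, the learned values form a discounted ramp up to the reward time, which is the behaviour theories of basal ganglia timing must reproduce or replace.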

  4. Mirror representations innate versus determined by experience: a viewpoint from learning theory.

    Science.gov (United States)

    Giese, Martin A

    2014-04-01

    From the viewpoint of pattern recognition and computational learning, mirror neurons form an interesting multimodal representation that links action perception and planning. While it seems unlikely that all details of such representations are specified by the genetic code, robust learning of such complex representations likely requires an appropriate interplay between plasticity, generalization, and anatomical constraints of the underlying neural architecture.

  5. The effects of technology on making conjectures: linking multiple representations in learning iterations

    OpenAIRE

    San Diego, Jonathan; Aczel, James; Hodgson, Barbara

    2004-01-01

    Numerous studies have suggested that different technologies have different effects on students' learning of mathematics, particularly in facilitating students' graphing skills and preferences for representations. For example, there are claims that students who prefer algebraic representations can experience discomfort in learning mathematics concepts using computers (Weigand and Weller, 2001; Villarreal, 2000) whilst students using calculators preferred graphical representation (Keller and Hi...

  6. Embodied experiences. Visual representations of woman and maternity

    Directory of Open Access Journals (Sweden)

    Serena BRIGIDI

    2016-04-01

    Full Text Available The paper presents a reflection on the embodied experience of mothers and representations of motherhood in Western culture, within advertising, television series, documentaries and movies. Typically, motherhood is imagined as the product (having a baby, becoming a parent) and not as the arduous process over the life of a person. It is presented with a universal character, and it is used in movies as a strategy to evoke emotions: an a-historical mother gives birth, looks at her son, takes him in her arms, loves him, sacrifices herself and stays next to him forever. In other words, it is all worth it if the prize is to become a mother. In the collective imagination, these ideas have helped to create the ideal type of mother: how she should act and what value motherhood would have in our society. With this premise, I analyze the omissions behind the images: we are left with just a simple model of mother, one we either destroy or idealize, even though the models of women, mothers, couples and families are far more numerous today.

  7. A Visual Detection Learning Model

    Science.gov (United States)

    Beard, Bettina L.; Ahumada, Albert J., Jr.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Our learning model has memory templates representing the target-plus-noise and noise-alone stimulus sets. The best correlating template determines the response. The correlations and the feedback participate in the additive template updating rule. The model can predict the relative thresholds for detection in random, fixed and twin noise.
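    The model described above can be sketched directly: two stored templates are compared with each stimulus, the best-correlating template determines the response, and feedback drives an additive update of the appropriate template. A minimal simulation (all parameters, noise levels, and the signal profile are hypothetical choices):

```python
# Template-matching detection with additive template updating: the
# best-correlating template determines the response, and feedback drives
# an additive update of the correct class's template.

import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
n = 16
target = [1.0 if i in (5, 6, 7) else 0.0 for i in range(n)]  # signal profile

templates = {"signal": [0.0] * n, "noise": [0.0] * n}
lr = 0.05   # additive learning rate

for _ in range(500):
    present = random.random() < 0.5
    stim = [(target[i] if present else 0.0) + random.gauss(0.0, 0.3)
            for i in range(n)]
    # The best-matching template determines the observer's response.
    response = max(templates, key=lambda k: dot(templates[k], stim))
    # Feedback names the correct class; its template moves toward the stimulus.
    correct = "signal" if present else "noise"
    templates[correct] = [w + lr * s
                          for w, s in zip(templates[correct], stim)]

# After learning, the signal template peaks at the target locations,
# and a noiseless target is classified as "signal".
peak = max(range(n), key=lambda i: templates["signal"][i])
final = max(templates, key=lambda k: dot(templates[k], target))
```

    With training, the signal template converges on the target profile, so detection thresholds fall; correlated ("twin") noise would be absorbed into both templates, which is how the model predicts different thresholds across noise conditions.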

  8. Learning spaces as representational scaffolds for learning conceptual knowledge of system behaviour

    NARCIS (Netherlands)

    Bredeweg, B.; Liem, J.; Beek, W.; Salles, P.; Linnebank, F.; Wolpers, M.; Kirschner, P.A.; Scheffel, M.; Lindstaedt, S.; Dimitrova, V.

    2010-01-01

    Scaffolding is a well-known approach to bridge the gap between novice and expert capabilities in a discovery-oriented learning environment. This paper discusses a set of knowledge representations referred to as Learning Spaces (LSs) that can be used to support learners in acquiring conceptual knowledge of system behaviour.

  9. Learning semantic histopathological representation for basal cell carcinoma classification

    Science.gov (United States)

    Gutiérrez, Ricardo; Rueda, Andrea; Romero, Eduardo

    2013-03-01

    Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning are still very challenging tasks. In this paper we introduce a new semantic representation that allows histopathological concepts suitable for classification to be described. The approach identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrence between atoms, while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. Those images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The obtained classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.
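    The spatial-relation step can be sketched as a distance-penalized co-occurrence count over a grid of atom labels, one label per patch. In this sketch the exponential penalty and the labels are assumptions, not the paper's exact weighting:

```python
# Distance-penalized co-occurrence of dictionary-atom labels on a patch
# grid: cooc[a][b] accumulates exp(-d) for every pair of labelled patches,
# so nearby pairs count more than distant ones.

import math

grid = [[0, 0, 1],
        [0, 1, 1],
        [2, 2, 1]]   # hypothetical atom label per image patch

n_atoms = 3
cooc = [[0.0] * n_atoms for _ in range(n_atoms)]
cells = [(r, c) for r in range(3) for c in range(3)]

for i, (r1, c1) in enumerate(cells):
    for r2, c2 in cells[i + 1:]:
        d = math.hypot(r1 - r2, c1 - c2)
        w = math.exp(-d)                 # closer pairs weigh more
        a, b = grid[r1][c1], grid[r2][c2]
        cooc[a][b] += w
        if a != b:
            cooc[b][a] += w              # keep the matrix symmetric

# Flatten into the descriptor a per-class SVM would consume.
descriptor = [cooc[a][b] for a in range(n_atoms) for b in range(n_atoms)]
```

    Flattening the matrix yields a fixed-length semantic descriptor per field of view, which can then feed the per-class Support Vector Machines.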

  10. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when they ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Scaffolding vector representations for student learning inside a physics game

    Science.gov (United States)

    D'Angelo, Cynthia

    Vectors and vector addition are difficult concepts for many introductory physics students, and traditional instruction does not usually sufficiently address these difficulties. Vectors play a major role in most topics in introductory physics, and without a complete understanding of them many students are unable to make sense of the physics topics covered in their classes. Video games present a unique opportunity to help students develop an intuitive understanding of motion, forces, and vectors while immersed in an enjoyable and interactive environment. This study examines two dimensions of design decisions intended to help students learn while playing a physics-based game. The representational complexity dimension compared two ways of presenting dynamic information about the velocity of the game object on the screen. The scaffolding context dimension compared two different contexts for presenting vector addition problems that were related to the game. While all students made significant learning gains from the pre-test to the post-test, there were virtually no differences between students along the representational complexity dimension and small differences along the scaffolding context dimension. A context that directly connects to students' game-playing experience was in most cases more productive for learning than an abstract context.
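    The underlying mathematics the game scaffolds is component-wise vector addition, e.g. of velocity vectors. A minimal sketch (the example numbers are hypothetical):

```python
# Component-wise addition of 2-D vectors: the operation the game scaffolds.

import math

def add_vectors(v, w):
    return (v[0] + w[0], v[1] + w[1])

def magnitude(v):
    return math.hypot(v[0], v[1])

# A game object moving right at 3 units/s receives an upward velocity
# component of 4 units/s.
velocity = add_vectors((3.0, 0.0), (0.0, 4.0))
speed = magnitude(velocity)   # 3-4-5 right triangle
```

    The resultant points diagonally up-right with a speed of 5 units/s, which is exactly the kind of qualitative-plus-quantitative reasoning the on-screen velocity representations aim to build.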

  12. Developing Explanations and Developing Understanding: Students Explain the Phases of the Moon Using Visual Representations

    Science.gov (United States)

    Parnafes, Orit

    2012-01-01

    This article presents a theoretical model of the process by which students construct and elaborate explanations of scientific phenomena using visual representations. The model describes progress in the underlying conceptual processes in students' explanations as a reorganization of fine-grained knowledge elements based on the Knowledge in Pieces…

  13. Priming Contour-Deleted Images: Evidence for Intermediate Representations in Visual Object Recognition.

    Science.gov (United States)

    Biederman, Irving; Cooper, Eric E.

    1991-01-01

    Speed and accuracy of identification of pictures of objects are facilitated by prior viewing. Contributions of image features, convex or concave components, and object models in a repetition priming task were explored in 2 studies involving 96 college students. Results provide evidence of intermediate representations in visual object recognition.…

  14. Shape representation modulating the effect of motion on visual search performance.

    Science.gov (United States)

    Yang, Lindong; Yu, Ruifeng; Lin, Xuelian; Liu, Na

    2017-11-02

    The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance in tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants performed two search tasks that differed in target-distractor shape representation under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which is consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from the distractors in that a gap with a linear contour was added to the target while the corresponding part of the distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to this performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display undergoes uniform linear motion.

  15. A review of visual memory capacity: Beyond individual items and towards structured representations

    Science.gov (United States)

    Brady, Timothy F.; Konkle, Talia; Alvarez, George A.

    2012-01-01

    Traditional memory research has focused on identifying separate memory systems and exploring different stages of memory processing. This approach has been valuable for establishing a taxonomy of memory systems and characterizing their function, but has been less informative about the nature of stored memory representations. Recent research on visual memory has shifted towards a representation-based emphasis, focusing on the contents of memory, and attempting to determine the format and structure of remembered information. The main thesis of this review will be that one cannot fully understand memory systems or memory processes without also determining the nature of memory representations. Nowhere is this connection more obvious than in research that attempts to measure the capacity of visual memory. We will review research on the capacity of visual working memory and visual long-term memory, highlighting recent work that emphasizes the contents of memory. This focus impacts not only how we estimate the capacity of the system - going beyond quantifying how many items can be remembered, and moving towards structured representations - but also how we model memory systems and memory processes. PMID:21617025

  16. The Uses of Literacy in Studying Computer Games: Comparing Students' Oral and Visual Representations of Games

    Science.gov (United States)

    Pelletier, Caroline

    2005-01-01

    This paper compares the oral and visual representations which 12 to 13-year-old students produced in studying computer games as part of an English and Media course. It presents the arguments for studying multimodal texts as part of a literacy curriculum and then provides an overview of the games course devised by teachers and researchers. The…

  17. Analysis and Visualization of Relations in eLearning

    Science.gov (United States)

    Dráždilová, Pavla; Obadi, Gamila; Slaninová, Kateřina; Martinovič, Jan; Snášel, Václav

    The popularity of eLearning systems is growing rapidly; this growth is enabled by consecutive developments in Internet and multimedia technologies. Web-based education has become widespread in the past few years, and various types of learning management systems facilitate the development of Web-based courses. Users of these courses form social networks through the different activities they perform. This chapter focuses on searching for the latent social networks in eLearning systems data. These data consist of students' activity records in which latent ties among actors are embedded. The social network studied in this chapter is represented by groups of students who have similar contacts and interact in similar social circles. Different methods of data clustering analysis can be applied to these groups, and the findings show the existence of latent ties among the group members. The second part of this chapter focuses on social network visualization. A graphical representation of a social network can describe its structure very efficiently: it enables social network analysts to determine the network's degree of connectivity, to identify individuals with few or many relationships, and to count the independent groups in a given network. When applied to the field of eLearning, data visualization simplifies the process of monitoring the study activities of individuals or groups, as well as the planning of the educational curriculum, the evaluation of study processes, etc.
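    The latent-group idea described in this record can be sketched as clustering students by the overlap of their activity records. The activity logs, the Jaccard measure, and the 0.5 threshold below are illustrative assumptions, not the chapter's actual data or method:

    ```python
    # Sketch: find latent student groups by clustering on activity overlap.
    # The logs and the 0.5 similarity threshold are invented for illustration.

    def jaccard(a, b):
        """Similarity of two activity sets: |intersection| / |union|."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def cluster_students(logs, threshold=0.5):
        """Greedy single-link grouping: a student joins the first group
        containing someone whose activities overlap above the threshold."""
        groups = []
        for student, acts in logs.items():
            for group in groups:
                if any(jaccard(acts, logs[other]) >= threshold for other in group):
                    group.append(student)
                    break
            else:
                groups.append([student])
        return groups

    logs = {
        "ana":  {"quiz1", "forum", "lecture1"},
        "ben":  {"quiz1", "forum", "lecture2"},
        "carl": {"wiki", "chat"},
    }
    print(cluster_students(logs))  # → [['ana', 'ben'], ['carl']]
    ```

    Students "ana" and "ben" share two of four distinct activities (Jaccard 0.5), so they land in one latent group; "carl" interacts in a disjoint circle and forms his own.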

  18. Role of working memory in transformation of visual and motor representations for use in mental simulation.

    Science.gov (United States)

    Gabbard, Carl; Lee, Jihye; Caçola, Priscila

    2013-01-01

    This study examined the role of visual working memory when transforming visual representations to motor representations in the context of motor imagery. Participants viewed randomized number sequences of three, four, and five digits, and then reproduced the sequence by finger tapping using motor imagery or actually executing the movements; movement duration was recorded. One group viewed the stimulus for three seconds and responded immediately, while the second group had a three-second view followed by a three-second blank screen delay before responding. As expected, delay group times were longer with each condition and digit load. Whereas correlations between imagined and executed actions (temporal congruency) were significant in a positive direction for both groups, interestingly, the delay group's values were significantly stronger. That outcome prompts speculation that delay influenced the congruency between motor representation and actual execution.

  19. Autonomous learning of robust visual object detection and identification on a humanoid

    NARCIS (Netherlands)

    Leitner, J.; Chandrashekhariah, P.; Harding, S.; Frank, M.; Spina, G.; Förster, A.; Triesch, J.; Schmidhuber, J.

    2012-01-01

    In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for

  20. Learning Sorting Algorithms through Visualization Construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed…

  1. Right Hemisphere Dominance in Visual Statistical Learning

    Science.gov (United States)

    Roser, Matthew E.; Fiser, Jozsef; Aslin, Richard N.; Gazzaniga, Michael S.

    2011-01-01

    Several studies report a right hemisphere advantage for visuospatial integration and a left hemisphere advantage for inferring conceptual knowledge from patterns of covariation. The present study examined hemispheric asymmetry in the implicit learning of new visual feature combinations. A split-brain patient and normal control participants viewed…

  2. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms allow to tackle the quality

  3. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    Science.gov (United States)

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
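    The distance hypothesis in this abstract can be illustrated numerically: for a linear decision boundary w·x + b = 0, an exemplar's distance is |w·x + b| / ‖w‖, and the hypothesis predicts that larger distances go with faster reaction times. The data below are synthetic, and this is not the paper's MEG analysis pipeline:

    ```python
    # Sketch of the distance hypothesis: exemplars farther from a linear
    # decision boundary should yield faster (smaller) reaction times.
    # All data are synthetic; the boundary and RT model are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    w, b = np.array([1.0, -1.0]), 0.0      # toy decision boundary w.x + b = 0

    X = rng.normal(size=(100, 2))          # 100 exemplar "activation patterns"
    dist = np.abs(X @ w + b) / np.linalg.norm(w)

    # Simulated RTs: faster for exemplars far from the boundary, plus noise.
    rt = 600 - 80 * dist + rng.normal(scale=10, size=100)

    r = np.corrcoef(dist, rt)[0, 1]
    print(f"correlation(distance, RT) = {r:.2f}")   # strongly negative
    ```

    With the simulated linear RT-distance relation the correlation comes out strongly negative, which is the qualitative signature the paper reports for the period of peak decodability.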

  4. Visual Pretraining for Deep Q-Learning

    OpenAIRE

    Sandven, Torstein

    2016-01-01

    Recent advances in reinforcement learning enable computers to learn human-level policies for Atari 2600 games. This is done by training a convolutional neural network to play based on screenshots and in-game rewards. The network is referred to as a deep Q-network (DQN). The main disadvantage to this approach is a long training time. A computer will typically learn for approximately one week. In this time it processes 38 days of game play. This thesis explores the possibility of using visual pr...
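    The training signal behind the DQN mentioned here is the standard Q-learning update. A minimal numeric sketch, with a toy Q-table standing in for the convolutional network and invented states, rewards, and hyperparameters:

    ```python
    # Minimal sketch of the Q-learning update that a DQN approximates with a
    # convolutional network. A toy table replaces the network; the states,
    # reward, and hyperparameters are illustrative.
    import numpy as np

    n_states, n_actions = 3, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.5, 0.9                 # learning rate, discount factor

    # One transition: in state 0, action 1 earned reward 1.0, leading to state 2.
    s, a, r, s_next = 0, 1, 1.0, 2

    # Q-learning target: immediate reward plus discounted best future value.
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

    print(Q[0, 1])   # 0.5 * (1.0 + 0.9*0 - 0) = 0.5
    ```

    A DQN replaces the table lookup `Q[s, a]` with a network forward pass over a screenshot and minimizes the squared difference between the prediction and this same target, which is why training needs so many frames of game play.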

  5. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    Science.gov (United States)

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  6. Emergence of realism: Enhanced visual artistry and high accuracy of visual numerosity representation after left prefrontal damage.

    Science.gov (United States)

    Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro

    2014-05-01

    Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic ones, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that enhanced tendency of realism was associated with accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke, where the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was rather promoted across the onset of brain damage as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex including the precuneus and intraparietal sulcus. Our data provide new insight into mechanisms underlying change in artistic style due to focal prefrontal lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    Science.gov (United States)

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Visual variability affects early verb learning.

    Science.gov (United States)

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  9. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    Science.gov (United States)

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  10. Women And Visual Representations Of Space In Two Chinese Film Adaptations Of Hamlet

    Directory of Open Access Journals (Sweden)

    CHEANG WAI FONG

    2014-12-01

    This paper studies two Chinese film adaptations of Shakespeare’s Hamlet, Xiaogang Feng’s The Banquet (2006) and Sherwood Hu’s Prince of the Himalayas (2006), by focusing on their visual representations of spaces allotted to women. Its thesis is that even though on the original Shakespearean stage details of various spaces might not be as vividly represented as in modern film productions, spaces are still crucial dramatic elements imbued with powerful significations. By analyzing the two Chinese film adaptations alongside the original Hamlet text, the paper attempts to reinterpret their different representations of spaces in relation to their different historical-cultural gender notions.

  11. Learning without knowing: subliminal visual feedback facilitates ballistic motor learning

    DEFF Research Database (Denmark)

    Lundbye-Jensen, Jesper; Leukel, Christian; Nielsen, Jens Bo

    It is a well-described phenomenon that we may respond to features of our surroundings without being aware of them. It is also a well-known principle that learning is reinforced by augmented feedback on motor performance. In the present experiment we hypothesized that motor learning may be facilitated by subconscious (subliminal) augmented visual feedback on motor performance. To test this, 45 subjects participated in the experiment, which involved learning of a ballistic task. The task was to execute simple ankle plantar flexion movements as quickly as possible within 200 ms and to continuously improve … by the learner, indeed facilitated ballistic motor learning. This effect likely relates to multiple (conscious versus unconscious) processing of visual feedback and to the specific neural circuitries involved in optimization of ballistic motor performance.

  12. Visual and Verbal Learning in a Genetic Metabolic Disorder

    Science.gov (United States)

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  13. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
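    The visual-tree step, grouping visually similar classes so they share a learning task, can be sketched with a greedy pass over a pairwise similarity matrix. The class names, similarity values, and 0.6 threshold below are invented for illustration; HD-MTL derives similarities from learned deep CNN features:

    ```python
    # Sketch of the visual-tree idea: assign visually similar classes to the
    # same group using a pairwise similarity matrix. Similarities and the
    # 0.6 threshold are invented, not the paper's learned quantities.
    import numpy as np

    classes = ["husky", "wolf", "tabby", "siamese"]
    sim = np.array([            # hypothetical pairwise visual similarity
        [1.0, 0.8, 0.2, 0.1],
        [0.8, 1.0, 0.2, 0.2],
        [0.2, 0.2, 1.0, 0.7],
        [0.1, 0.2, 0.7, 1.0],
    ])

    def build_groups(sim, names, threshold=0.6):
        """Greedy grouping: each class joins the first group whose members
        are all at least `threshold` similar to it."""
        groups = []
        for i, name in enumerate(names):
            for group in groups:
                if all(sim[i, j] >= threshold for j, _ in group):
                    group.append((i, name))
                    break
            else:
                groups.append([(i, name)])
        return [[n for _, n in g] for g in groups]

    print(build_groups(sim, classes))  # dog-like and cat-like classes separate
    ```

    Each resulting group then gets its own node classifier in the tree, so the hard discriminations (husky vs. wolf) are learned with group-specific representations rather than against all thousands of classes at once.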

  14. Learning sorting algorithms through visualization construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and
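    A visualization-construction exercise of the kind this study describes might, for example, have students record every intermediate state of a sort so each step can be drawn. A minimal sketch; the snapshot format is an illustrative choice, not the study's materials:

    ```python
    # Sketch of a visualization-construction exercise: insertion sort that
    # records a snapshot of the list after each insertion, so students can
    # draw one frame per step. The snapshot format is an invented choice.

    def insertion_sort_with_steps(values):
        data = list(values)
        steps = [list(data)]            # initial state
        for i in range(1, len(data)):
            key, j = data[i], i - 1
            while j >= 0 and data[j] > key:
                data[j + 1] = data[j]   # shift larger elements right
                j -= 1
            data[j + 1] = key
            steps.append(list(data))    # state after inserting data[i]
        return data, steps

    sorted_data, steps = insertion_sort_with_steps([5, 2, 4, 1])
    for s in steps:
        print(s)                        # one line per drawable frame
    ```

    For the four-element input this yields four frames, one initial plus one per insertion, which students can turn into bar charts or array diagrams to externalize how the algorithm moves elements.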

  15. Perisaccadic Updating of Visual Representations and Attentional States: Linking Behavior and Neurophysiology

    Science.gov (United States)

    Marino, Alexandria C.; Mazer, James A.

    2016-01-01

    During natural vision, saccadic eye movements lead to frequent retinal image changes that result in different neuronal subpopulations representing the same visual feature across fixations. Despite these potentially disruptive changes to the neural representation, our visual percept is remarkably stable. Visual receptive field remapping, characterized as an anticipatory shift in the position of a neuron’s spatial receptive field immediately before saccades, has been proposed as one possible neural substrate for visual stability. Many of the specific properties of remapping, e.g., the exact direction of remapping relative to the saccade vector and the precise mechanisms by which remapping could instantiate stability, remain a matter of debate. Recent studies have also shown that visual attention, like perception itself, can be sustained across saccades, suggesting that the attentional control system can also compensate for eye movements. Classical remapping could have an attentional component, or there could be a distinct attentional analog of visual remapping. At this time we do not yet fully understand how the stability of attentional representations relates to perisaccadic receptive field shifts. In this review, we develop a vocabulary for discussing perisaccadic shifts in receptive field location and perisaccadic shifts of attentional focus, review and synthesize behavioral and neurophysiological studies of perisaccadic perception and perisaccadic attention, and identify open questions that remain to be experimentally addressed. PMID:26903820

  16. Improving Mobility Performance in Low Vision With a Distance-Based Representation of the Visual Scene.

    Science.gov (United States)

    van Rheede, Joram J; Wilson, Iain R; Qian, Rose I; Downes, Susan M; Kennard, Christopher; Hicks, Stephen L

    2015-07-01

    Severe visual impairment can have a profound impact on personal independence through its effect on mobility. We investigated whether the mobility of people with vision low enough to be registered as blind could be improved by presenting the visual environment in a distance-based manner for easier detection of obstacles. We accomplished this by developing a pair of "residual vision glasses" (RVGs) that use a head-mounted depth camera and displays to present information about the distance of obstacles to the wearer as brightness, such that obstacles closer to the wearer are represented more brightly. We assessed the impact of the RVGs on the mobility performance of visually impaired participants during the completion of a set of obstacle courses. Participant position was monitored continuously, which enabled us to capture the temporal dynamics of mobility performance. This allowed us to find correlates of obstacle detection and hesitations in walking behavior, in addition to the more commonly used measures of trial completion time and number of collisions. All participants were able to use the smart glasses to navigate the course, and mobility performance improved for those visually impaired participants with the worst prior mobility performance. However, walking speed was slower and hesitations increased with the altered visual representation. A depth-based representation of the visual environment may offer low vision patients improvements in independent mobility. It is important for further work to explore whether practice can overcome the reductions in speed and increased hesitation that were observed in our trial.
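    The depth-to-brightness encoding described here can be sketched directly: nearer obstacles map to brighter pixels, with everything beyond a cutoff rendered dark. The 8-bit range, 4 m cutoff, and linear ramp are illustrative assumptions, not the RVGs' actual calibration:

    ```python
    # Sketch of the residual-vision-glasses encoding: depth (metres) in,
    # brightness (0-255) out, with nearer obstacles brighter. The 4 m
    # cutoff and linear ramp are invented, not the device's parameters.

    MAX_DEPTH_M = 4.0   # anything farther than this renders black

    def depth_to_brightness(depth_m):
        """Map a depth reading to an 8-bit brightness, closer = brighter."""
        clipped = min(max(depth_m, 0.0), MAX_DEPTH_M)
        return round(255 * (1.0 - clipped / MAX_DEPTH_M))

    for d in (0.5, 2.0, 5.0):
        print(d, depth_to_brightness(d))
    ```

    Applied per pixel of a head-mounted depth camera's frame, this kind of mapping turns the scene into a coarse luminance image in which the closest obstacles pop out even to severely reduced residual vision.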

  17. The Focus of Attention in Visual Working Memory: Protection of Focused Representations and Its Individual Variation.

    Science.gov (United States)

    Heuer, Anna; Schubö, Anna

    2016-01-01

    Visual working memory can be modulated according to changes in the cued task relevance of maintained items. Here, we investigated the mechanisms underlying this modulation. In particular, we studied the consequences of attentional selection for selected and unselected items, and the role of individual differences in the efficiency with which attention is deployed. To this end, performance in a visual working memory task as well as the CDA/SPCN and the N2pc, ERP components associated with visual working memory and attentional processes, were analysed. Selection during the maintenance stage was manipulated by means of two successively presented retrocues providing spatial information as to which items were most likely to be tested. Results show that attentional selection serves to robustly protect relevant representations in the focus of attention while unselected representations which may become relevant again still remain available. Individuals with larger retrocueing benefits showed higher efficiency of attentional selection, as indicated by the N2pc, and showed stronger maintenance-associated activity (CDA/SPCN). The findings add to converging evidence that focused representations are protected, and highlight the flexibility of visual working memory, in which information can be weighted according to its relevance.

  18. Contextual effects in visual working memory reveal hierarchically structured memory representations.

    Science.gov (United States)

    Brady, Timothy F; Alvarez, George A

    2015-01-01

    Influential slot and resource models of visual working memory make the assumption that items are stored in memory as independent units, and that there are no interactions between them. Consequently, these models predict that the number of items to be remembered (the set size) is the primary determinant of working memory performance, and therefore these models quantify memory capacity in terms of the number and quality of individual items that can be stored. Here we demonstrate that there is substantial variance in display difficulty within a single set size, suggesting that limits based on the number of individual items alone cannot explain working memory storage. We asked hundreds of participants to remember the same sets of displays, and discovered that participants were highly consistent in terms of which items and displays were hardest or easiest to remember. Although a simple grouping or chunking strategy could not explain this individual-display variability, a model with multiple, interacting levels of representation could explain some of the display-by-display differences. Specifically, a model that includes a hierarchical representation of items plus the mean and variance of sets of the colors on the display successfully accounts for some of the variability across displays. We conclude that working memory representations are composed only in part of individual, independent object representations, and that a major factor in how many items are remembered on a particular display is interitem representations such as perceptual grouping, ensemble, and texture representations.
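    The hierarchical account in this abstract, individual items plus display-level summary statistics such as the mean and variance of the colors, can be illustrated with a toy encoding. The numeric "hue" values and the 0.7/0.3 blend weight are invented for illustration:

    ```python
    # Toy version of a hierarchical memory representation: each remembered
    # colour is a blend of the item itself and the display's ensemble mean.
    # The hue values and the 0.7/0.3 blend weight are invented.
    from statistics import mean, pvariance

    display = [10.0, 20.0, 30.0, 100.0]    # colours on one display (toy hues)

    ensemble_mean = mean(display)           # display-level summary statistics
    ensemble_var = pvariance(display)

    W_ITEM = 0.7                            # weight on the individual item
    recalled = [W_ITEM * c + (1 - W_ITEM) * ensemble_mean for c in display]

    print(ensemble_mean, ensemble_var)      # 40.0 1250.0
    print(recalled)                         # each item pulled toward the mean
    ```

    In such a model the same set size can be easy or hard depending on the display: tightly clustered colors make the ensemble term highly informative, while a high-variance display like this one leaves each item to carry more of the load, matching the display-by-display variability the authors report.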

  19. Simulating my own or others action plans?--Motor representations, not visual representations are recalled in motor memory.

    Directory of Open Access Journals (Sweden)

    Christian Seegelke

    Full Text Available Action plans are not generated from scratch for each movement, but features of recently generated plans are recalled for subsequent movements. This study investigated whether the observation of an action is sufficient to trigger plan recall processes. Participant dyads performed an object manipulation task in which one participant transported a plunger from an outer platform to a center platform of different heights (first move. Subsequently, either the same (intra-individual task condition or the other participant (inter-individual task condition returned the plunger to the outer platform (return moves. Grasp heights were inversely related to center target height and similar irrespective of direction (first vs. return move and task condition (intra- vs. inter-individual. Moreover, participants' return move grasp heights were highly correlated with their own, but not with their partners' first move grasp heights. Our findings provide evidence that a simulated action plan resembles a plan of how the observer would execute that action (based on a motor representation rather than a plan of the actually observed action (based on a visual representation.

  20. Claroscura Representation: An Audio-visual and Theoretical Exploration of the Representation of the Past Through Documentary Filmmaking

    Directory of Open Access Journals (Sweden)

    Gerrit Stollbrock Trujillo

    2017-09-01

    Full Text Available At the nexus between audio-visual production and theoretical research, this article is based on the experience of producing a documentary on the history of a cement plant in Colombia: La Siberia. The tensions between the narratives constructed in the documentary and the immensity of the discarded archives from the plant drive a theoretical quest to respond to its own iconoclasm and to the post-structuralist critique of history. This brought us to the formulation of the concept of claroscura representation, defined as representation that is transparent about its own limitations. I put this concept to the test through the medium of documentary film, talking specifically about the making of La Siberia, and suggest its relevance in other projects that attempt to represent the past or history through film. I suggest that this theory drives us towards the formulation of a new artistic project. The research process, and the dialogue between theory and practice, is interpreted using the model of abduction proposed by Charles Sanders Peirce.

  1. Visuals Matter! Designing and using effective visual representations to support project and portfolio decisions

    DEFF Research Database (Denmark)

    Geraldi, Joana; Arlt, Mario

    This book is the result of a two-year research project, funded by Project Management Institute and University College London, on data visualization in the project and portfolio management contexts. Visuals are powerful and constitute an integral part of analyzing problems and making decisions. They can help managers to be sharper and quicker, especially if visuals are used in a mindful manner. The intent of this book is to increase the awareness of project, program and portfolio practitioners and scholars about the importance of visuals and to provide practical recommendations on how they can be used and designed mindfully. The research, which underpins this book, focuses on the impact of visuals on cognition of data in project portfolio decisions. The complexity of portfolio problems often exceeds human cognitive limitations as a result of a number of factors, such as the large number...

  2. Lateralization of visual learning in the honeybee.

    Science.gov (United States)

    Letzkus, Pinar; Boeddeker, Norbert; Wood, Jeff T; Zhang, Shao-Wu; Srinivasan, Mandyam V

    2008-02-23

    Lateralization is a well-described phenomenon in humans and other vertebrates and there are interesting parallels across a variety of different vertebrate species. However, there are only a few studies of lateralization in invertebrates. In a recent report, we showed lateralization of olfactory learning in the honeybee (Apis mellifera). Here, we investigate lateralization of another sensory modality, vision. By training honeybees on a modified version of a visual proboscis extension reflex task, we find that bees learn a colour stimulus better with their right eye.

  3. Lateralization of visual learning in the honeybee

    OpenAIRE

    Letzkus, Pinar; Boeddeker, Norbert; Wood, Jeff T; Zhang, Shao-Wu; Srinivasan, Mandyam V

    2007-01-01

    Lateralization is a well-described phenomenon in humans and other vertebrates and there are interesting parallels across a variety of different vertebrate species. However, there are only a few studies of lateralization in invertebrates. In a recent report, we showed lateralization of olfactory learning in the honeybee (Apis mellifera). Here, we investigate lateralization of another sensory modality, vision. By training honeybees on a modified version of a visual proboscis extension reflex task, we find that bees learn a colour stimulus better with their right eye.

  4. Three visual techniques to enhance interprofessional learning.

    Science.gov (United States)

    Parsell, G; Gibbs, T; Bligh, J

    1998-07-01

    Many changes in the delivery of healthcare in the UK have highlighted the need for healthcare professionals to learn to work together as teams for the benefit of patients. Whatever the profession or level, whether for postgraduate education and training, continuing professional development, or for undergraduates, learners should have an opportunity to learn about, and with, other healthcare practitioners in a stimulating and exciting way. Learning to understand how people think, feel, and react, and the parts they play at work, both as professionals and individuals, can only be achieved through sensitive discussion and exchange of views. Teaching and learning methods must provide opportunities for this to happen. This paper describes three small-group teaching techniques which encourage a high level of learner collaboration and team-working. Learning content is focused on real-life healthcare issues and strong visual images are used to stimulate lively discussion and debate. Each description includes the learning objectives of each exercise, basic equipment and resources, and learning outcomes.

  5. Independent Attention Mechanisms Control the Activation of Tactile and Visual Working Memory Representations.

    Science.gov (United States)

    Katus, Tobias; Eimer, Martin

    2018-05-01

    Working memory (WM) is limited in capacity, but it is controversial whether these capacity limitations are domain-general or are generated independently within separate modality-specific memory systems. These alternative accounts were tested in bimodal visual/tactile WM tasks. In Experiment 1, participants memorized the locations of simultaneously presented task-relevant visual and tactile stimuli. Visual and tactile WM load was manipulated independently (one, two, or three items per modality), and one modality was unpredictably tested after each trial. To track the activation of visual and tactile WM representations during the retention interval, the visual contralateral delay activity (CDA) and tactile CDA (tCDA) were measured over visual and somatosensory cortex, respectively. CDA and tCDA amplitudes were selectively affected by WM load in the corresponding (tactile or visual) modality. The CDA parametrically increased when visual load increased from one to two and to three items. The tCDA was enhanced when tactile load increased from one to two items and showed no further enhancement for three tactile items. Critically, these load effects were strictly modality-specific, as substantiated by Bayesian statistics. Increasing tactile load did not affect the visual CDA, and increasing visual load did not modulate the tCDA. Task performance at memory test was also unaffected by WM load in the other (untested) modality. This was confirmed in a second behavioral experiment where tactile and visual loads were either two or four items, unimodal baseline conditions were included, and participants performed a color change detection task in the visual modality. These results show that WM capacity is not limited by a domain-general mechanism that operates across sensory modalities. They suggest instead that WM storage is mediated by distributed modality-specific control mechanisms that are activated independently and in parallel during multisensory WM.

  6. Brain activity associated with translation from a visual to a symbolic representation in algebra and geometry.

    Science.gov (United States)

    Leikin, Mark; Waisman, Ilana; Shaul, Shelley; Leikin, Roza

    2014-03-01

    This paper presents a small part of a larger interdisciplinary study that investigates brain activity (using event related potential methodology) of male adolescents when solving mathematical problems of different types. The study design links mathematics education research with neurocognitive studies. In this paper we performed a comparative analysis of brain activity associated with the translation from visual to symbolic representations of mathematical objects in algebra and geometry. Algebraic tasks require translation from graphical to symbolic representation of a function, whereas tasks in geometry require translation from a drawing of a geometric figure to a symbolic representation of its property. The findings demonstrate that electrical activity associated with the performance of geometrical tasks is stronger than that associated with solving algebraic tasks. Additionally, we found different scalp topography of the brain activity associated with algebraic and geometric tasks. Based on these results, we argue that problem solving in algebra and geometry is associated with different patterns of brain activity.

  7. The Interplay Among Children's Negative Family Representations, Visual Processing of Negative Emotions, and Externalizing Symptoms.

    Science.gov (United States)

    Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika

    2018-03-01

    This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (mean age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, negative family representations were associated with children's attention to negative faces in an eye-tracking task and with their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, was associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  8. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    Science.gov (United States)

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic: both vision and echolocation; visual: vision only; echoic: echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of sample and alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  9. Cognitive Strategies for Learning from Static and Dynamic Visuals.

    Science.gov (United States)

    Lewalter, D.

    2003-01-01

    Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)

  10. Constructed vs. received graphical representations for learning about scientific controversy: Implications for learning and coaching

    Science.gov (United States)

    Cavalli-Sforza, Violetta Laura Maria

    Students in science classes hardly ever study scientific controversy, especially in terms of the different types of arguments used to support and criticize theories and hypotheses. Yet, learning the reasons for scientific debate and scientific change is an important part of appreciating the nature of the scientific enterprise and communicating it to the non-scientific world. This dissertation explores the usefulness of graphical representations in teaching students about scientific arguments. Subjects participating in an extended experiment studied instructional materials and used the Belvedere graphical interface to analyze texts drawn from an actual scientific debate. In one experimental condition, subjects used a box-and-arrow representation whose primitive graphical elements had preassigned meanings tailored to the domain of instruction. In the other experimental condition, subjects could use the graphical elements as they wished, thereby creating their own representation. The development of a representation, by forcing a deeper analysis, can potentially yield a greater understanding of the domain under study. The results of the research suggest two conclusions. From the perspective of learning target concepts, asking subjects to develop their own representation may not hurt those subjects who gain a sufficient understanding of the possibilities of abstract representation. The risks are much greater for less able subjects because, if they develop a representation that is inadequate for expressing the target concepts, they will use those concepts less or not at all. From the perspective of coaching subjects as they diagram their analysis of texts, a predefined representation has significant advantages. If it is appropriately expressive for the task, it provides a common language and clearer shared meaning between the subject and the coach. It also enables the coach to understand subjects' analysis more easily, and to evaluate it more effectively against the

  11. Gravity influences the visual representation of object tilt in parietal cortex.

    Science.gov (United States)

    Rosenberg, Ari; Angelaki, Dora E

    2014-10-22

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction. Copyright © 2014 the authors 0270-6474/14/3414170-11$15.00/0.
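    The reference-frame comparison in this record has a simple geometric core: an object's tilt in gravity-centered coordinates is its retinal (egocentric) tilt corrected for the observer's roll about the line of sight. A minimal sketch, with an assumed sign convention chosen for illustration:

```python
def gravity_centered_tilt(retinal_tilt_deg, body_roll_deg):
    """Convert an egocentric (retinal) tilt estimate to a gravity-centered
    one by compensating for the observer's roll about the line of sight.
    Angles in degrees; the sign convention here is illustrative."""
    return (retinal_tilt_deg + body_roll_deg) % 360.0

# A surface that lands at 10 deg on the retina while the observer is rolled
# 30 deg ear-down is tilted 40 deg relative to gravity.
print(gravity_centered_tilt(10.0, 30.0))   # 40.0
# For an upright observer, the two reference frames coincide.
print(gravity_centered_tilt(10.0, 0.0))    # 10.0
```

    A neuron whose tilt tuning stays fixed under this correction when the animal rolls would count as gravity-centered; one whose tuning follows the retinal tilt would count as egocentric.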

  12. Optimal spatiotemporal representation of multichannel EEG for recognition of brain states associated with distinct visual stimulus

    Science.gov (United States)

    Hramov, Alexander; Musatov, Vyacheslav Yu.; Runnova, Anastasija E.; Efremova, Tatiana Yu.; Koronovskii, Alexey A.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we propose an approach based on artificial neural networks for recognition of different human brain states associated with distinct visual stimuli. Based on the developed numerical technique and the analysis of obtained experimental multichannel EEG data, we optimize the spatiotemporal representation of multichannel EEG to provide close to 97% accuracy in recognition of the EEG brain states during visual perception. Different interpretations of an ambiguous image produce different oscillatory patterns in the human EEG, with similar features for every interpretation. Since these features are inherent to all subjects, a single artificial neural network can classify the associated brain states of other subjects with high accuracy.
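    The classification setup described in this record, a network mapping EEG feature vectors to brain-state labels, can be sketched with a minimal single-layer perceptron on synthetic two-class "EEG features". This is an illustrative toy, not the study's network or data:

```python
import random

# Minimal sketch of brain-state classification from feature vectors:
# a single-layer perceptron trained on synthetic two-class "EEG features".
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):       # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

random.seed(0)
# Two synthetic "brain states": oscillatory power concentrated in
# different feature channels.
state_a = [[1 + random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
state_b = [[random.gauss(0, 0.1), 1 + random.gauss(0, 0.1)] for _ in range(20)]
X, y = state_a + state_b, [1] * 20 + [-1] * 20
w, b = train_perceptron(X, y)
print(all(predict(w, b, x) == t for x, t in zip(X, y)))  # True on this toy set
```

    Real EEG features are far less separable than this toy example; the study's multilayer network and its spatiotemporal input optimization address exactly that gap.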

  13. Representation Learning from Time Labelled Heterogeneous Data for Mobile Crowdsensing

    Directory of Open Access Journals (Sweden)

    Chunmei Ma

    2016-01-01

    Full Text Available Mobile crowdsensing is a new paradigm that can utilize pervasive smartphones to collect and analyze data to benefit users. However, sensory data gathered by smartphones usually involve different data types because of differing granularity and multiple sensor sources. Besides, the data are also time labelled. The heterogeneous and time-sequential data raise new challenges for data analysis. Some existing solutions try to learn each type of data one by one and analyze them separately, without considering time information. In addition, the traditional methods also have to determine phone orientation, because some sensors equipped in smartphones are orientation related. In this paper, we argue that a combination of multiple sensors can represent an invariant feature for a crowdsensing context. Therefore, we propose a new representation learning method for heterogeneous data with time labels to extract typical features using deep learning. We show that our proposed method adapts effectively to data generated at different orientations. Furthermore, we test the performance of the proposed method by recognizing two pairs of mobile activities, walking/cycling and driving/bus, with smartphone sensors. It achieves precisions of 98.6% and 93.7% in distinguishing cycling from walking and bus from driving, respectively.
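    The claim that combining multiple sensor channels yields an orientation-invariant feature has a classic concrete instance: the magnitude of the accelerometer vector is unchanged when the phone is rotated. A minimal sketch of that idea (not the paper's deep model):

```python
import math

def accel_magnitude(x, y, z):
    """Orientation-invariant accelerometer feature: the Euclidean norm of
    the (x, y, z) reading is unchanged when the phone is rotated."""
    return math.sqrt(x * x + y * y + z * z)

# The same physical acceleration read in two phone orientations:
upright = (0.0, 0.0, 9.81)        # gravity along the z axis
on_side = (9.81, 0.0, 0.0)        # phone rotated 90 degrees
print(accel_magnitude(*upright))   # 9.81
print(accel_magnitude(*on_side))   # 9.81
```

    A learned representation can generalize this hand-crafted invariance, combining many channels so that features depend on the activity rather than on how the phone happens to be carried.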

  14. Neuro-symbolic representation learning on biological knowledge graphs.

    Science.gov (United States)

    Alshahrani, Mona; Khan, Mohammad Asif; Maddouri, Omar; Kinjo, Akira R; Queralt-Rosinach, Núria; Hoehndorf, Robert

    2017-09-01

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. https://github.com/bio-ontology-research-group/walking-rdf-and-owl. robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
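    The repository name (walking-rdf-and-owl) suggests a random-walk-plus-embedding pipeline: walks over the deductively closed knowledge graph become token sequences for a word2vec-style model. The toy graph and walk generator below are illustrative assumptions, not the authors' code:

```python
import random

# Sketch of the walk-generation step in graph feature learning: random
# walks over a knowledge graph become "sentences" whose tokens (nodes and
# edge labels) are later fed to an embedding model. Toy graph only.
graph = {
    "DrugX": [("targets", "GeneA")],
    "GeneA": [("has_function", "Apoptosis")],
    "Apoptosis": [("part_of", "CellDeath")],
    "CellDeath": [],
}

def random_walk(graph, start, length, rng):
    walk = [start]
    node = start
    for _ in range(length):
        edges = graph.get(node, [])
        if not edges:
            break
        relation, node = rng.choice(edges)
        walk.extend([relation, node])   # edge labels are tokens too
    return walk

rng = random.Random(42)
print(random_walk(graph, "DrugX", 3, rng))
# ['DrugX', 'targets', 'GeneA', 'has_function', 'Apoptosis', 'part_of', 'CellDeath']
```

    Edge prediction then reduces to asking, in the embedding space, which node-relation-node combinations look like the walks the model has seen.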

  15. Neuro-symbolic representation learning on biological knowledge graphs

    KAUST Repository

    Alshahrani, Mona

    2017-04-21

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. https://github.com/bio-ontology-research-group/walking-rdf-and-owl. robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online.

  16. Designing Grounded Feedback: Criteria for Using Linked Representations to Support Learning of Abstract Symbols

    Science.gov (United States)

    Wiese, Eliane S.; Koedinger, Kenneth R.

    2017-01-01

    This paper proposes "grounded feedback" as a way to provide implicit verification when students are working with a novel representation. In grounded feedback, students' responses are in the target, to-be-learned representation, and those responses are reflected in a more-accessible linked representation that is intrinsic to the domain.…

  17. A VISUAL AND VERBAL ANALYSIS OF CHILDREN REPRESENTATION IN TELEVISION ADVERTISEMENT

    Directory of Open Access Journals (Sweden)

    Budi Hermawan

    2014-12-01

    Full Text Available The study investigates the representation of children in a television advertisement for the 3 Indie+ cellular phone operator. The study is descriptive qualitative and employs Kress & van Leeuwen's Reading Images (2006) to analyze the visual data, and Halliday's Transitivity System (1994, 2004), as simplified by Gerot and Wignell (1995), for analyzing the verbal data. The aim of the study is to examine the representation of children visually and verbally in the 3 Indie+ cellular phone operator advertisement. Based on the data analysis, the study finds that visually, through the use of setting, layout composition, and perspective (shot, gaze), children are represented as naive persons "pretending to know" adult life when in fact they are still children. Children are verbally represented, through the use of mental and material processes, as telling about their hopes, obsessions, and aspirations for the future, and their naive imaginations of what an adult life is like. In relation to the product advertised, the representation signifies that, unlike other providers, using 3 Indie+ is very easy; it is not as hard as living as an adult.

  18. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    Science.gov (United States)

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
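    The two reference frames contrasted in this record differ only by a unit conversion: retina-based spatial frequency (cycles/degree) is image-based spatial frequency (cycles/image) divided by the visual angle the image subtends, which shrinks with viewing distance. A sketch of the standard psychophysical conversion, with hypothetical sizes and distances:

```python
import math

def cycles_per_degree(cycles_per_image, image_size_cm, viewing_distance_cm):
    """Retina-based SF from image-based SF: divide by the visual angle
    (in degrees) that the image subtends at the given distance."""
    degrees = 2 * math.degrees(math.atan(image_size_cm / (2 * viewing_distance_cm)))
    return cycles_per_image / degrees

# The same 8 cycles/image face seen from 57 cm and from 114 cm: the
# image-based SF is constant, but the retina-based SF roughly doubles.
near = cycles_per_degree(8, 10, 57)
far = cycles_per_degree(8, 10, 114)
print(round(far / near, 2))   # 2.0
```

    This is why image-based tuning supports distance-invariant face recognition while retina-based tuning carries information about how far away the other individual is.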

  19. DLNE: A hybridization of deep learning and neuroevolution for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshøj

    2017-01-01

    This paper investigates the potential of combining deep learning and neuroevolution to create a bot for a simple first person shooter (FPS) game capable of aiming and shooting based on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. Two types of feature representations are evaluated in terms of (1) how precisely they allow the deep network to recognize the position of the enemy, (2) their effect on evolution, and (3) how well they allow the deep network and evolved network to interface with each other. Overall, the results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks...

  20. Sinogram denoising via simultaneous sparse representation in learned dictionaries

    International Nuclear Information System (INIS)

    Karimi, Davood; Ward, Rabab K

    2016-01-01

    Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster. (paper)
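    The central assumption of this record's algorithm is that similar patches share a sparse representation in a learned dictionary: a noisy patch is approximated by a few dictionary atoms, and the unrepresented residual is discarded as noise. The toy below shows the simplest case, a single matching-pursuit step over a hand-made two-atom dictionary (not the paper's learned dictionary or clustering):

```python
# Toy sketch of sparse coding in a dictionary: one matching-pursuit step
# picks the atom most correlated with the noisy signal and keeps only
# that atom's contribution, discarding the residual (the "noise").
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def denoise_1sparse(signal, dictionary):
    """Approximate `signal` with its best single (unit-norm) atom."""
    best = max(dictionary, key=lambda atom: abs(dot(signal, atom)))
    coeff = dot(signal, best)
    return [coeff * a for a in best]

# Unit-norm atoms: a flat patch and an "edge" patch.
atoms = [
    [0.5, 0.5, 0.5, 0.5],        # constant
    [0.5, 0.5, -0.5, -0.5],      # step edge
]
noisy = [1.1, 0.9, -1.05, -0.95]  # edge patch plus small noise
print(denoise_1sparse(noisy, atoms))   # approximately [1.0, 1.0, -1.0, -1.0]
```

    The paper's joint-sparse variant goes further: all blocks in a cluster are constrained to use the same few atoms, which is what makes processing them together both effective and fast.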

  1. Supervised Learning for Visual Pattern Classification

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of the topics and major ideas of supervised learning for visual pattern classification. Two prevalent algorithms, i.e., the support vector machine (SVM) and the boosting algorithm, are briefly introduced. SVMs and boosting algorithms are two hot topics of recent research in supervised learning. SVMs improve the generalization of the learning machine by implementing the rule of structural risk minimization (SRM). They exhibit good generalization even when little training data are available for machine training. The boosting algorithm can boost a weak classifier to a strong classifier by means of so-called classifier combination. This algorithm provides a general way of producing a classifier with high generalization capability from a great number of weak classifiers.
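    The classifier-combination idea mentioned above can be made concrete with a minimal AdaBoost sketch (a standard boosting variant, assumed here for illustration rather than taken from the chapter): after each round, misclassified examples are upweighted so the next weak learner focuses on them.

```python
import math

# Minimal AdaBoost sketch: reweight examples after each weak learner so
# the next one focuses on the mistakes. Weak learners are 1-D "stumps".
def stump(threshold, sign):
    return lambda x: sign if x > threshold else -sign

def adaboost(points, labels, stumps, rounds=3):
    n = len(points)
    weights = [1.0 / n] * n
    ensemble = []                      # (alpha, stump) pairs
    for _ in range(rounds):
        # Pick the stump with the lowest weighted error.
        def werr(h):
            return sum(w for w, x, y in zip(weights, points, labels) if h(x) != y)
        h = min(stumps, key=werr)
        err = max(werr(h), 1e-10)      # clamp to avoid log(inf)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Upweight mistakes, downweight correct examples, renormalize.
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, x, y in zip(weights, points, labels)]
        total = sum(weights)
        weights = [w / total for w in weights]
    def strong(x):
        return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1
    return strong

points = [0.0, 1.0, 2.0, 3.0]
labels = [-1, -1, 1, 1]
H = adaboost(points, labels, [stump(t, 1) for t in (0.5, 1.5, 2.5)])
print([H(x) for x in points])   # [-1, -1, 1, 1]
```

    Each weak learner only needs to beat chance; the weighted vote of many such learners is what yields the high generalization capability the chapter describes.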

  2. Representations for Semantic Learning Webs: Semantic Web Technology in Learning Support

    Science.gov (United States)

    Dzbor, M.; Stutt, A.; Motta, E.; Collins, T.

    2007-01-01

    Recent work on applying semantic technologies to learning has concentrated on providing novel means of accessing and making use of learning objects. However, this is unnecessarily limiting: semantic technologies will make it possible to develop a range of educational Semantic Web services, such as interpretation, structure-visualization, support…

  3. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires the effective utilization of large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we here use the geometric intuition of the manifold assumption to extend the established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets, including face, video, and object datasets. Copyright © 2015 Elsevier Ltd. All rights reserved.
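    The graph Laplacian regularization this record builds on penalizes label functions that vary across strongly connected nodes: the penalty is f'Lf with L = D - W, where W is the graph weight matrix and D its degree matrix. A stdlib-only sketch with illustrative weights (not the paper's L1-LLR weights):

```python
# Graph Laplacian smoothness penalty: f'Lf with L = D - W, equivalently
# 0.5 * sum_ij W_ij (f_i - f_j)^2. Toy 3-node path graph.
W = [
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
]

def laplacian(W):
    n = len(W)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j] for j in range(n)]
            for i in range(n)]

def smoothness(f, W):
    """Compute f' L f for a label function f over the graph."""
    L = laplacian(W)
    n = len(f)
    return sum(f[i] * L[i][j] * f[j] for i in range(n) for j in range(n))

print(smoothness([1.0, 1.0, 1.0], W))   # 0.0  (constant functions cost nothing)
print(smoothness([1.0, 0.0, 1.0], W))   # 2.0  (f changes across both edges)
```

    The paper's contribution is upstream of this penalty: constructing W itself robustly, by replacing LLE's L2 reconstruction of each point from its neighbors with an L1 one.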

  4. Evaluation of Deep Learning Representations of Spatial Storm Data

    Science.gov (United States)

    Gagne, D. J., II; Haupt, S. E.; Nychka, D. W.

    2017-12-01

    The spatial structure of a severe thunderstorm and its surrounding environment provide useful information about the potential for severe weather hazards, including tornadoes, hail, and high winds. Statistics computed over the area of a storm or from the pre-storm environment can provide descriptive information but fail to capture structural information. Because the storm environment is a complex, high-dimensional space, identifying methods to encode important spatial storm information in a low-dimensional form should aid analysis and prediction of storms by statistical and machine learning models. Principal component analysis (PCA), a more traditional approach, transforms high-dimensional data into a set of linearly uncorrelated, orthogonal components ordered by the amount of variance explained by each component. The burgeoning field of deep learning offers two potential approaches to this problem. Convolutional Neural Networks are a supervised learning method for transforming spatial data into a hierarchical set of feature maps that correspond with relevant combinations of spatial structures in the data. Generative Adversarial Networks (GANs) are an unsupervised deep learning model that uses two neural networks trained against each other to produce encoded representations of spatial data. These different spatial encoding methods were evaluated on the prediction of severe hail for a large set of storm patches extracted from the NCAR convection-allowing ensemble. Each storm patch contains information about storm structure and the near-storm environment. Logistic regression and random forest models were trained using the PCA and GAN encodings of the storm data and were compared against the predictions from a convolutional neural network. All methods showed skill over climatology at predicting the probability of severe hail. However, the verification scores among the methods were very similar and the predictions were highly correlated. Further evaluations are being
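The PCA baseline described above, encoding storm patches with principal components and training a conventional classifier on the encodings, can be sketched as follows (synthetic stand-in data; the study used NCAR convection-allowing ensemble patches and also compared GAN encodings and CNNs):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in data: 500 flattened 16x16 "storm patches" with binary
# severe-hail labels generated by an arbitrary synthetic rule.
X = rng.normal(size=(500, 16 * 16))
y = (X[:, :32].mean(axis=1) > 0).astype(int)

# Encode each patch by its leading principal components (fit on training
# patches only), giving a low-dimensional spatial representation...
pca = PCA(n_components=20).fit(X[:400])
Z_train, Z_test = pca.transform(X[:400]), pca.transform(X[400:])

# ...then train a standard probabilistic classifier on the encodings.
clf = LogisticRegression(max_iter=1000).fit(Z_train, y[:400])
print("held-out accuracy:", clf.score(Z_test, y[400:]))
```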

  5. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories.

    Science.gov (United States)

    Seemüller, Anna; Fiehler, Katja; Rösler, Frank

    2011-01-01

The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The timing of the secondary task also interacted with the encoding modality: after visual encoding, memory was more impaired when the secondary task had to be performed at the beginning of the retention interval, whereas memory after kinesthetic encoding was more affected when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Representation

    National Research Council Canada - National Science Library

    Little, Daniel

    2006-01-01

...). The reason is the hierarchies that we take for granted. By hierarchies I mean that there is a layer of representation of us as individuals, as military professionals, as members of a military unit and as citizens of an entire nation...

  7. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad

    2016-12-09

Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.
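The abstract does not give architectural details, but the extreme learning machine building block it relies on is standard: a fixed random hidden projection plus a closed-form ridge solution for the output weights. A minimal single-layer sketch (function names are illustrative; the paper stacks such layers into a deep model):

```python
import numpy as np

def elm_fit(X, Y, n_hidden=200, reg=1e-2, seed=0):
    """Single-layer extreme learning machine: the hidden weights are
    random and never trained; only the output weights are solved for."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random projection
    H = np.tanh(X @ W)                            # hidden activations
    # Ridge-regularized least squares: beta = (H'H + reg*I)^-1 H'Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta

# Toy usage: one-hot targets for a two-class problem.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(int)
W, beta = elm_fit(X, np.eye(2)[y])
pred = elm_predict(X, W, beta).argmax(axis=1)
```

Because only a linear system is solved, training is fast and behaves well with few samples, which is the efficiency argument the abstract makes.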

  8. Representation learning with deep extreme learning machines for efficient image set classification

    KAUST Repository

    Uzair, Muhammad; Shafait, Faisal; Ghanem, Bernard; Mian, Ajmal

    2016-01-01

Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure, or perform heavy computations to learn structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep extreme learning machines that are very efficient and generalize well even on a limited number of training samples. Extensive experiments on a broad range of public datasets for image set classification show that the proposed algorithm consistently outperforms state-of-the-art image set classification methods both in terms of speed and accuracy.

  9. Forever young: Visual representations of gender and age in online dating sites for older adults.

    Science.gov (United States)

    Gewirtz-Meydan, Ateret; Ayalon, Liat

    2017-06-13

    Online dating has become increasingly popular among older adults following broader social media adoption patterns. The current study examined the visual representations of people on 39 dating sites intended for the older population, with a particular focus on the visualization of the intersection between age and gender. All 39 dating sites for older adults were located through the Google search engine. Visual thematic analysis was performed with reference to general, non-age-related signs (e.g., facial expression, skin color), signs of aging (e.g., perceived age, wrinkles), relational features (e.g., proximity between individuals), and additional features such as number of people presented. The visual analysis in the present study revealed a clear intersection between ageism and sexism in the presentation of older adults. The majority of men and women were smiling and had a fair complexion, with light eye color and perceived age of younger than 60. Older women were presented as younger and wore more cosmetics as compared with older men. The present study stresses the social regulation of sexuality, as only heterosexual couples were presented. The narrow representation of older adults and the anti-aging messages portrayed in the pictures convey that love, intimacy, and sexual activity are for older adults who are "forever young."

  10. Learning style, judgements of learning, and learning of verbal and visual information.

    Science.gov (United States)

    Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger

    2017-08-01

    The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.

  11. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This

  12. The influence of visual representations of “the Other” in the system of modern sociocultural communications

    Directory of Open Access Journals (Sweden)

    Kolodii Nataliya

    2016-01-01

The paper deals with how modern humanities scholarship understands the problem of visual representation of “the Other”. The authors’ tasks were to comprehend the nature and dynamics of visualization and to give a distinct working definition of visual competence. The paper further aims to state the components of visual competence, its criteria and estimation methods, and in this context to interpret the image of “the Other” as decoded in philosophical and cultural scholarship and in daily cultural practices. The final task was to reduce the visual message to a verbal one. The doctrine that an image may simply be read is a common prejudice that prevents the formation of a new approach to visuality. The first step toward solving the problem is to describe the techniques that help in potential understanding of visual structure. Understanding the diversity of images and their possible text analogues should help in establishing the specific requirements that can and must be applied to visual representation of “the Other”. Representations in visual culture (photography, cinematography, media, painting, advertisement) shape the social image and affect daily social practices and communications. Visual representations are of interest to social theorists as cultural texts, since they give an idea of the context of cultural production, social interaction and individual experience.

  13. Representation

    Science.gov (United States)

    2006-09-01

Slide-deck extract: the MIT Beer Game supply-chain simulation (Professor Sterman, Sloan School of Management, MIT). Sources: http://beergame.mit.edu/ and http://web.mit.edu/jsterman/www/SDG/beergame.html (permission granted, MIT Supply Chain Forum 2005). Slide titles include "Rules of Engagement: The MIT Beer Game Simulation" and "What is the Significance of Representation".

  14. Pupils' Visual Representations in Standard and Problematic Problem Solving in Mathematics: Their Role in the Breach of the Didactical Contract

    Science.gov (United States)

    Deliyianni, Eleni; Monoyiou, Annita; Elia, Iliada; Georgiou, Chryso; Zannettou, Eleni

    2009-01-01

    This study investigated the modes of representations generated by kindergarteners and first graders while solving standard and problematic problems in mathematics. Furthermore, it examined the influence of pupils' visual representations on the breach of the didactical contract rules in problem solving. The sample of the study consisted of 38…

  15. Neurons with two sites of synaptic integration learn invariant representations.

    Science.gov (United States)

    Körding, K P; König, P

    2001-12-01

    Neurons in mammalian cerebral cortex combine specific responses with respect to some stimulus features with invariant responses to other stimulus features. For example, in primary visual cortex, complex cells code for orientation of a contour but ignore its position to a certain degree. In higher areas, such as the inferotemporal cortex, translation-invariant, rotation-invariant, and even view point-invariant responses can be observed. Such properties are of obvious interest to artificial systems performing tasks like pattern recognition. It remains to be resolved how such response properties develop in biological systems. Here we present an unsupervised learning rule that addresses this problem. It is based on a neuron model with two sites of synaptic integration, allowing qualitatively different effects of input to basal and apical dendritic trees, respectively. Without supervision, the system learns to extract invariance properties using temporal or spatial continuity of stimuli. Furthermore, top-down information can be smoothly integrated in the same framework. Thus, this model lends a physiological implementation to approaches of unsupervised learning of invariant-response properties.
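The abstract's two-compartment learning rule is not spelled out, but the temporal-continuity principle it exploits can be illustrated with a classic trace rule in the spirit of Földiák (1991); this is a stand-in sketch, not the authors' two-site neuron model:

```python
import numpy as np

def trace_learning(stimuli, n_units=4, eta=0.05, decay=0.8, seed=0):
    """Hebbian learning on a temporally smoothed activity trace: a unit
    that just fired keeps learning for a few steps, so it comes to respond
    to stimuli that follow each other in time (invariance via continuity)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_units, stimuli.shape[1]))
    trace = np.zeros(n_units)
    for x in stimuli:
        y = np.zeros(n_units)
        y[np.argmax(W @ x)] = 1.0            # winner-take-all response
        trace = decay * trace + (1 - decay) * y
        W += eta * np.outer(trace, x)         # Hebbian update on the trace
        W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W
```

Presenting transformed views of the same object in temporal succession drives a single unit to respond to all of them, which is the invariance mechanism the paper builds on.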

  16. Where to attend next: guiding refreshing of visual, spatial, and verbal representations in working memory.

    Science.gov (United States)

    Souza, Alessandra S; Vergauwe, Evie; Oberauer, Klaus

    2018-04-23

    One of the functions that attention may serve in working memory (WM) is boosting information accessibility, a mechanism known as attentional refreshing. Refreshing is assumed to be a domain-general process operating on visual, spatial, and verbal representations alike. So far, few studies have directly manipulated refreshing of individual WM representations to measure the WM benefits of refreshing. Recently, a guided-refreshing method was developed, which consists of presenting cues during the retention interval of a WM task to instruct people to refresh (i.e., attend to) the cued items. Using a continuous-color reconstruction task, previous studies demonstrated that the error in reporting a color varies linearly with the frequency with which it was refreshed. Here, we extend this approach to assess the WM benefits of refreshing different representation types, from colors to spatial locations and words. Across six experiments, we show that refreshing frequency modulates performance in all stimulus domains in accordance with the tenet that refreshing is a domain-general process in WM. The benefits of refreshing were, however, larger for visual-spatial than verbal materials. © 2018 New York Academy of Sciences.

  17. Collaborative Random Faces-Guided Encoders for Pose-Invariant Face Representation Learning.

    Science.gov (United States)

    Shao, Ming; Zhang, Yizhe; Fu, Yun

    2018-04-01

Learning discriminant face representations for pose-invariant face recognition has been identified as a critical issue in visual learning systems. The challenge lies in the drastic changes in facial appearance between the test face and the registered face. To that end, we propose a high-level feature learning framework called "collaborative random faces (RFs)-guided encoders". The contributions of this paper are threefold. First, we propose a novel supervised autoencoder that is able to capture high-level identity features despite pose variations. Second, we enrich the identity features by replacing the target values of conventional autoencoders with random signals (RFs in this paper), which are unique for each subject under different poses. Third, we further improve the performance of the framework by incorporating deep convolutional neural network facial descriptors and linking discriminative identity features from different RFs to form augmented identity features. Finally, we conduct face identification experiments on the Multi-PIE database, and face verification experiments on the Labeled Faces in the Wild and YouTube Faces databases, reporting face recognition rates and verification accuracy with receiver operating characteristic curves. In addition, discussions of model parameters and connections with existing methods are provided. These experiments demonstrate that our learning system handles pose variations fairly well.

  18. On line and on paper: Visual representations, visual culture, and computer graphics in design engineering

    Energy Technology Data Exchange (ETDEWEB)

    Henderson, K.

    1991-01-01

The research presented examines the visual communication practices of engineers and the impact of the implementation of computer graphics on their visual culture. The study is based on participant observation of day-to-day practices in two contemporary industrial settings among engineers engaged in the actual process of designing new pieces of technology. In addition, over thirty interviews were conducted at other industrial sites to confirm that the findings were not an isolated phenomenon. The data show that there is no 'one best way' to use a computer graphics system; rather, use is site specific, and firms and individuals engage in mixed paper and electronic practices as well as differential use of electronic options to get the job done. This research illustrates that rigid models which assume a linear theory of innovation, projecting a straightforward process from idea, to drawing, to prototype, to production, are seriously misguided.

  19. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    Science.gov (United States)

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.

  20. Studying Visual Displays: How to Instructionally Support Learning

    Science.gov (United States)

    Renkl, Alexander; Scheiter, Katharina

    2017-01-01

    Visual displays are very frequently used in learning materials. Although visual displays have great potential to foster learning, they also pose substantial demands on learners so that the actual learning outcomes are often disappointing. In this article, we pursue three main goals. First, we identify the main difficulties that learners have when…

  1. Visual working memory capacity for color is independent of representation resolution.

    Science.gov (United States)

    Ye, Chaoxiong; Zhang, Lingcong; Liu, Taosheng; Li, Hong; Liu, Qiang

    2014-01-01

The relationship between visual working memory (VWM) capacity and the resolution of its representations has been extensively investigated. Several recent ERP studies using orientation (or arrow) stimuli suggest an inverse relationship between VWM capacity and representation resolution. However, different results have been obtained in studies using color stimuli, possibly owing to important differences in the experimental paradigms used in previous studies. We examined whether the same relationship between capacity and resolution holds for color information. Participants performed a color change detection task while their electroencephalogram was recorded. We manipulated representation resolution by asking participants to detect either a salient change (low resolution) or a subtle change (high resolution) in color. We used an ERP component known as the contralateral delay activity (CDA) to index the amount of information maintained in VWM. The results demonstrated the same pattern for both low- and high-resolution conditions, with no difference between them. This suggests that VWM always represents a fixed number of approximately 3-4 colors regardless of the resolution of representation.

  2. 3D surface parameterization using manifold learning for medial shape representation

    Science.gov (United States)

    Ward, Aaron D.; Hamarneh, Ghassan

    2007-03-01

    The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation, visualization, deformation, and shape statistics are performed. Medial axis-based shape representations have attracted considerable attention due to their inherent ability to encode information about the natural geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization. For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points. We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning. Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
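The parameterization pipeline above, embedding the sheet points in 2D, laying a planar mesh over the embedding, and mapping its vertices back to 3D, can be sketched with off-the-shelf tools (Isomap and linear interpolation stand in for whatever manifold learner and mapping the authors used; the bent sheet is synthetic):

```python
import numpy as np
from sklearn.manifold import Isomap
from scipy.interpolate import LinearNDInterpolator

# Synthetic "medial sheet": a bent 2D sheet embedded in 3D.
u, v = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
sheet = np.column_stack([u.ravel(), v.ravel(), np.sin(2 * u.ravel())])

# Step 1: recover the sheet's intrinsic 2D coordinate system.
coords2d = Isomap(n_neighbors=8, n_components=2).fit_transform(sheet)

# Step 2: center a regular planar mesh over the 2D coordinates...
gx, gy = np.meshgrid(
    np.linspace(coords2d[:, 0].min(), coords2d[:, 0].max(), 10),
    np.linspace(coords2d[:, 1].min(), coords2d[:, 1].max(), 10))

# Step 3: ...and map mesh vertices back to 3D by interpolating the
# learned 2D -> 3D correspondence (NaN for vertices outside the hull).
interp = LinearNDInterpolator(coords2d, sheet)
mesh3d = interp(np.column_stack([gx.ravel(), gy.ravel()])).reshape(10, 10, 3)
```

Running the same procedure on the upper and lower surface points yields corresponding meshes from which thickness vectors off the medial sheet can be computed, as the abstract describes.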

  3. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Directory of Open Access Journals (Sweden)

    Antje Kohnle

    2017-11-01

Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them (“representational competence”) is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students’ spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  4. Characterizing representational learning: A combined simulation and tutorial on perturbation theory

    Science.gov (United States)

    Kohnle, Antje; Passante, Gina

    2017-12-01

    Analyzing, constructing, and translating between graphical, pictorial, and mathematical representations of physics ideas and reasoning flexibly through them ("representational competence") is a key characteristic of expertise in physics but is a challenge for learners to develop. Interactive computer simulations and University of Washington style tutorials both have affordances to support representational learning. This article describes work to characterize students' spontaneous use of representations before and after working with a combined simulation and tutorial on first-order energy corrections in the context of quantum-mechanical time-independent perturbation theory. Data were collected from two institutions using pre-, mid-, and post-tests to assess short- and long-term gains. A representational competence level framework was adapted to devise level descriptors for the assessment items. The results indicate an increase in the number of representations used by students and the consistency between them following the combined simulation tutorial. The distributions of representational competence levels suggest a shift from perceptual to semantic use of representations based on their underlying meaning. In terms of activity design, this study illustrates the need to support students in making sense of the representations shown in a simulation and in learning to choose the most appropriate representation for a given task. In terms of characterizing representational abilities, this study illustrates the usefulness of a framework focusing on perceptual, syntactic, and semantic use of representations.

  5. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, Bianca; Boonstra, F. Nienke; Cox, Ralf F. A.; van Rens, Ger; Cillessen, Antonius H. N.

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  6. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; van Rens, G.H.M.B.; Cillessen, A.H.N.

    2013-01-01

    Purpose. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Methods. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  7. Perceptual learning in children with visual impairment improves near visual acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.; Rens, G. van; Cillessen, A.H.

    2013-01-01

    PURPOSE: This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS: Participants were 45 children with visual impairment and 29 children with normal vision. Children

  8. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; Rens, G.H.M.B. van; Cillessen, A.H.N.

    2013-01-01

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four-to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  9. Visual Literacy in Bloom: Using Bloom's Taxonomy to Support Visual Learning Skills

    Science.gov (United States)

    Arneson, Jessie B.; Offerdahl, Erika G.

    2018-01-01

    "Vision and Change" identifies science communication as one of the core competencies in undergraduate biology. Visual representations are an integral part of science communication, allowing ideas to be shared among and between scientists and the public. As such, development of scientific visual literacy should be a desired outcome of…

  10. Visual working memory gives up attentional control early in learning: ruling out interhemispheric cancellation.

    Science.gov (United States)

    Reinhart, Robert M G; Carlisle, Nancy B; Woodman, Geoffrey F

    2014-08-01

    Current research suggests that we can watch visual working memory surrender the control of attention early in the process of learning to search for a specific object. This inference is based on the observation that the contralateral delay activity (CDA) rapidly decreases in amplitude across trials when subjects search for the same target object. Here, we tested the alternative explanation that the role of visual working memory does not actually decline across learning, but instead lateralized representations accumulate in both hemispheres across trials and wash out the lateralized CDA. We show that the decline in CDA amplitude occurred even when the target objects were consistently lateralized to a single visual hemifield. Our findings demonstrate that reductions in the amplitude of the CDA during learning are not simply due to the dilution of the CDA from interhemispheric cancellation. Copyright © 2014 Society for Psychophysiological Research.

  11. Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data

    OpenAIRE

    Wang, Jing; Cheng, Yu; Feris, Rogerio Schmidt

    2016-01-01

    The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the c...

  12. Learning Semantic Tags from Big Data for Clinical Text Representation.

    Science.gov (United States)

    Li, Yanpeng; Liu, Hongfang

    2015-01-01

In clinical text mining, one of the biggest challenges is representing medical terminologies and n-gram terms in sparse medical reports using either supervised or unsupervised methods. Addressing this issue, we propose a novel method for word and n-gram representation at the semantic level. We first represent each word by its distance to a set of reference features calculated by a reference distance estimator (RDE) learned from labeled and unlabeled data, and then generate new features using simple techniques of discretization, random sampling and merging. The new features are a set of binary rules that can be interpreted as semantic tags derived from words and n-grams. We show that the new features significantly outperform classical bag-of-words and n-gram features in the task of heart disease risk factor extraction in the i2b2 2014 challenge. It is promising that semantic tags can replace the original text entirely with even better prediction performance, as well as derive new rules beyond the lexical level.

  13. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    Science.gov (United States)

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed when both performing skilled movements, as well as understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor in core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  14. Relative contributions of visual and auditory spatial representations to tactile localization.

    Science.gov (United States)

    Noel, Jean-Paul; Wallace, Mark

    2016-02-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (and thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed to crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities - with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. Copyright © 2016

  15. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    Science.gov (United States)

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.
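The implicit distortion paradigm in this record amounts to scaling the displayed step-length measure by a gain that drifts away from 1.0 slowly enough to remain below conscious awareness. The function below is a hedged sketch of such a schedule; the ramp rate and cap are illustrative values, not the study's parameters.

```python
# Sketch of an implicit visual feedback distortion schedule (assumed values):
# the displayed step-length symmetry ratio is the actual ratio times a gain
# that increases gradually across trials up to a ceiling.
def distorted_feedback(actual_ratio, trial, ramp=0.002, max_gain=1.2):
    """Return the distorted symmetry value shown to the walker on this trial."""
    gain = min(1.0 + ramp * trial, max_gain)  # slow ramp stays implicit
    return actual_ratio * gain

# A symmetric gait (ratio 1.0) is displayed as slightly asymmetric by trial 50.
print(distorted_feedback(1.0, 50))
```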

  16. The application of brain-based learning principles aided by GeoGebra to improve mathematical representation ability

    Science.gov (United States)

    Priatna, Nanang

    2017-08-01

    The use of Information and Communication Technology (ICT) in mathematics instruction helps students build conceptual understanding. One of the software products used in mathematics instruction is GeoGebra. The program enables simple visualization of complex geometric concepts and helps improve students' understanding of geometric concepts. Instruction applying brain-based learning principles is oriented toward naturally empowering the brain's potential, enabling students to build their own knowledge. One of the goals of mathematics instruction in school is to develop mathematical communication ability. Mathematical representation is regarded as a part of mathematical communication. It is a description, expression, symbolization, or modeling of mathematical ideas/concepts as an attempt at clarifying meanings or seeking solutions to the problems students encounter. The research aims to develop a learning model and teaching materials applying the principles of brain-based learning aided by GeoGebra to improve junior high school students' mathematical representation ability. It adopted a quasi-experimental method with a non-randomized control group pretest-posttest design and a 2x3 factorial model. Based on analysis of the data, it is found that the increase in the mathematical representation ability of students treated with mathematics instruction applying the brain-based learning principles aided by GeoGebra was greater than that of students given conventional instruction, both as a whole and within each category of students' initial mathematical ability.

  17. The Concept of Happiness as Conveyed in Visual Representations: Analysis of the Work of Early Childhood Educators

    Science.gov (United States)

    Russo-Zimet, Gila; Segel, Sarit

    2014-01-01

    This research was designed to examine how early-childhood educators pursuing their graduate degrees perceive the concept of happiness, as conveyed in visual representations. The research methodology combines qualitative and quantitative paradigms using the metaphoric collage, a tool used to analyze visual and verbal aspects. The research…

  18. Functional organization and visual representations in human ventral lateral prefrontal cortex

    Directory of Open Access Journals (Sweden)

    Annie Wai Yiu Chan

    2013-07-01

    Recent neuroimaging studies in both human and non-human primates have identified face-selective activation in the ventral lateral prefrontal cortex even in the absence of working memory demands. Further, research has suggested that this face-selective response is largely driven by the presence of the eyes. However, the nature and origin of visual category responses in the ventral lateral prefrontal cortex remain unclear. Further, in a broader sense, how do these findings relate to our current understanding of lateral prefrontal cortex? What do these findings tell us about the underlying function and organization principles of the ventral lateral prefrontal cortex? What is the future direction for investigating visual representations in this cortex? This review focuses on the function, topography, and circuitry of the ventral lateral prefrontal cortex to enhance our understanding of the evolution and development of this cortex.

  19. The Effect of Visual Variability on the Learning of Academic Concepts.

    Science.gov (United States)

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  20. Combining generative and discriminative representation learning for lung CT analysis with convolutional restricted Boltzmann machines

    DEFF Research Database (Denmark)

    van Tulder, Gijs; de Bruijne, Marleen

    2016-01-01

    The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both...
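The core idea of combining a generative and a discriminative objective can be illustrated with a minimal sketch. This is not the paper's convolutional RBM: the linear encoder/decoder, logistic classifier, and weighting scheme below are simplified stand-ins chosen only to show the hybrid loss structure.

```python
# Minimal sketch of a hybrid objective in the spirit of a classification RBM:
# a weighted sum of a generative term (reconstruction of the input) and a
# discriminative term (prediction of the label). Model and names are assumed.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 5))              # 8 patches, 5 features each
y = rng.integers(0, 2, size=8)           # binary tissue labels
W = rng.normal(scale=0.1, size=(5, 3))   # encoder to 3 hidden units
V = rng.normal(scale=0.1, size=(3,))     # classifier weights on hidden units

def hybrid_loss(beta=0.5):
    H = np.tanh(X @ W)                       # hidden representation
    recon = H @ W.T                          # linear "generative" decoder
    gen = np.mean((X - recon) ** 2)          # reconstruction error
    p = 1.0 / (1.0 + np.exp(-(H @ V)))       # predicted P(label = 1)
    disc = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return beta * gen + (1 - beta) * disc

print(hybrid_loss())  # one scalar blending both objectives
```

Setting `beta` to 1 recovers a purely generative criterion and 0 a purely discriminative one; the paper's contribution is training filters under both at once.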

  1. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    Science.gov (United States)

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

    In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
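The temporal trace rule named in this record has a well-known Hebbian form: the postsynaptic term is a decaying trace of recent activity, so successive transformed views of one object strengthen the same output cell. The sketch below illustrates that form; the learning rate, decay constant, and random inputs are illustrative assumptions, not the model's parameters.

```python
# Sketch of a temporal trace learning rule: the weight update uses a decaying
# trace of the output cell's activity instead of its instantaneous value.
import numpy as np

rng = np.random.default_rng(2)
alpha, eta = 0.1, 0.6                # learning rate, trace decay (assumed)
w = rng.normal(scale=0.01, size=4)   # weights onto one output cell
trace = 0.0

views = rng.random(size=(5, 4))      # 5 successive transformed views of an object

for x in views:
    y = float(w @ x)                     # output cell activation
    trace = (1 - eta) * y + eta * trace  # temporal trace of recent activity
    w += alpha * trace * x               # Hebbian update gated by the trace

print(w)
```

Continuous transformation learning differs in relying on overlap between successive inputs rather than the trace, which is why the two rules can cooperate as the abstract describes.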

  2. Figure-ground representation and its decay in primary visual cortex.

    Science.gov (United States)

    Strother, Lars; Lavell, Cheryl; Vilis, Tutis

    2012-04-01

    We used fMRI to study figure-ground representation and its decay in primary visual cortex (V1). Human observers viewed a motion-defined figure that gradually became camouflaged by a cluttered background after it stopped moving. V1 showed positive fMRI responses corresponding to the moving figure and negative fMRI responses corresponding to the static background. This positive-negative delineation of V1 "figure" and "background" fMRI responses defined a retinotopically organized figure-ground representation that persisted after the figure stopped moving but eventually decayed. The temporal dynamics of V1 "figure" and "background" fMRI responses differed substantially. Positive "figure" responses continued to increase for several seconds after the figure stopped moving and remained elevated after the figure had disappeared. We propose that the sustained positive V1 "figure" fMRI responses reflected both persistent figure-ground representation and sustained attention to the location of the figure after its disappearance, as did subjects' reports of persistence. The decreasing "background" fMRI responses were relatively shorter-lived and less biased by spatial attention. Our results show that the transition from a vivid figure-ground percept to its disappearance corresponds to the concurrent decay of figure enhancement and background suppression in V1, both of which play a role in form-based perceptual memory.

  3. The role of visual representations during the lexical access of spoken words.

    Science.gov (United States)

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Perceptual geometry of space and form: visual perception of natural scenes and their virtual representation

    Science.gov (United States)

    Assadi, Amir H.

    2001-11-01

    Perceptual geometry is an emerging field of interdisciplinary research whose objectives focus on study of geometry from the perspective of visual perception, and in turn, apply such geometric findings to the ecological study of vision. Perceptual geometry attempts to answer fundamental questions in perception of form and representation of space through synthesis of cognitive and biological theories of visual perception with geometric theories of the physical world. Perception of form and space are among fundamental problems in vision science. In recent cognitive and computational models of human perception, natural scenes are used systematically as preferred visual stimuli. Among key problems in perception of form and space, we have examined perception of geometry of natural surfaces and curves, e.g. as in the observer's environment. Besides a systematic mathematical foundation for a remarkably general framework, the advantages of the Gestalt theory of natural surfaces include a concrete computational approach to simulate or recreate images whose geometric invariants and quantities might be perceived and estimated by an observer. The latter is at the very foundation of understanding the nature of perception of space and form, and the (computer graphics) problem of rendering scenes to visually invoke virtual presence.

  5. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    Science.gov (United States)

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. 
SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on

  6. Getting the picture: The role of external representations in simulation-based inquiry learning.

    NARCIS (Netherlands)

    Kolloffel, Bas Jan

    2008-01-01

    Three studies were performed to examine the effects of formats of ‘pre-fabricated’ and learner-generated representations on learning outcomes of pupils learning combinatorics and probability theory. In Study I, the effects of different formats on learning outcomes were examined. Learners in five

  7. Spatial specificity of working memory representations in the early visual cortex.

    Science.gov (United States)

    Pratte, Michael S; Tong, Frank

    2014-03-19

    Recent fMRI decoding studies have demonstrated that early retinotopic visual areas exhibit similar patterns of activity during the perception of a stimulus and during the maintenance of that stimulus in working memory. These findings provide support for the sensory recruitment hypothesis that the mechanisms underlying perception serve as a foundation for visual working memory. However, a recent study by Ester, Serences, and Awh (2009) found that the orientation of a peripheral grating maintained in working memory could be classified from both the contralateral and ipsilateral regions of the primary visual cortex (V1), implying that, unlike perception, feature-specific information was maintained in a nonretinotopic manner. Here, we evaluated the hypothesis that early visual areas can maintain information in a spatially specific manner and will do so if the task encourages the binding of feature information to a specific location. To encourage reliance on spatially specific memory, our experiment required observers to retain the orientations of two laterally presented gratings. Multivariate pattern analysis revealed that the orientation of each remembered grating was classified more accurately based on activity patterns in the contralateral than in the ipsilateral regions of V1 and V2. In contrast, higher extrastriate areas exhibited similar levels of performance across the two hemispheres. A time-resolved analysis further indicated that the retinotopic specificity of the working memory representation in V1 and V2 was maintained throughout the retention interval. Our results suggest that early visual areas provide a cortical basis for actively maintaining information about the features and locations of stimuli in visual working memory.
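The multivariate pattern analysis used in this record can be sketched with a simple correlation-based classifier on synthetic "voxel" patterns: a test pattern is assigned to whichever condition's mean training pattern it correlates with best. The data, noise level, and classifier are illustrative assumptions, not the study's pipeline.

```python
# Illustrative correlation-based MVPA sketch on synthetic data (not fMRI):
# classify which of two remembered orientations produced a voxel pattern.
import numpy as np

rng = np.random.default_rng(3)
n_vox = 50
templates = rng.normal(size=(2, n_vox))   # mean pattern per orientation

def simulate_trial(label, noise=1.0):
    """One noisy activity pattern evoked by the given orientation."""
    return templates[label] + rng.normal(scale=noise, size=n_vox)

def classify(pattern):
    # Pick the orientation whose template correlates best with the pattern.
    corrs = [np.corrcoef(pattern, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

trials = [(simulate_trial(lab), lab) for lab in [0, 1] * 20]
accuracy = np.mean([classify(p) == lab for p, lab in trials])
print(accuracy)  # well above the 0.5 chance level at this noise setting
```

Comparing such accuracies between contralateral and ipsilateral voxel sets is, in essence, how the study assessed the spatial specificity of the memory representation.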

  8. Profile of biology prospective teachers’ representation on plant anatomy learning

    Science.gov (United States)

    Ermayanti; Susanti, R.; Anwar, Y.

    2018-04-01

    This study aims to assess students' representation ability in understanding the structure and function of plant tissues in a plant anatomy course. Thirty students of the Biology Education Department of Sriwijaya University were involved in this study. Data on representation ability were collected using a test and observation. The instruments had been validated by expert judgment. Test scores were used to represent students' ability in 4 categories: 2D-image, 3D-image, spatial, and verbal representations. The results show that students' representation ability is still low: 2D-image (40.0), 3D-image (25.0), spatial (20.0), and verbal representation (45.0). Based on these results, it is suggested that instructional strategies be developed for the plant anatomy course.

  9. Enhancing Undergraduate Chemistry Learning by Helping Students Make Connections among Multiple Graphical Representations

    Science.gov (United States)

    Rau, Martina A.

    2015-01-01

    Multiple representations are ubiquitous in chemistry education. To benefit from multiple representations, students have to make connections between them. However, connection making is a difficult task for students. Prior research shows that supporting connection making enhances students' learning in math and science domains. Most prior research…

  10. Representations as Mediation between Purposes as Junior Secondary Science Students Learn about the Human Body

    Science.gov (United States)

    Olander, Clas; Wickman, Per-Olof; Tytler, Russell; Ingerman, Åke

    2018-01-01

    The aim of this article is to investigate students' meaning-making processes of multiple representations during a teaching sequence about the human body in lower secondary school. Two main influences are brought together to accomplish the analysis: on the one hand, theories on signs and representations as scaffoldings for learning and, on the…

  11. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

    Directory of Open Access Journals (Sweden)

    Tatsuro Yamada

    2017-12-01

    An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or,” simultaneously. These words do not refer directly to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an
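The sentence-to-action data flow in this record can be sketched with a tiny recurrent network. The paper uses LSTM units and trains the model; the untrained Elman-style toy below only shows how a word sequence is folded through a recurrent state into an action output, and its vocabulary, sizes, and weights are all assumptions.

```python
# Tiny Elman-style sketch of the sentence-to-action setup: one-hot words are
# folded through a recurrent hidden state; the final state maps to an action
# command vector. Untrained and illustrative only.
import numpy as np

rng = np.random.default_rng(4)
vocab = {"raise": 0, "left": 1, "right": 2, "and": 3, "not": 4}
n_hidden, n_actions = 8, 3

W_in = rng.normal(scale=0.1, size=(n_hidden, len(vocab)))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_actions, n_hidden))

def run_sentence(words):
    h = np.zeros(n_hidden)
    for w in words:
        x = np.zeros(len(vocab))
        x[vocab[w]] = 1.0                   # one-hot word input
        h = np.tanh(W_in @ x + W_rec @ h)   # recurrent state update
    return W_out @ h                        # action command vector

out = run_sentence(["raise", "left", "and", "right"])
print(out.shape)
```

In the trained model, the internal state `h` is where the paper finds logic words acting as operators, e.g. "not" flipping phrase encodings within the memory cell state space.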

  12. Using Virtual Microscopy to Scaffold Learning of Pathology: A Naturalistic Experiment on the Role of Visual and Conceptual Cues

    Science.gov (United States)

    Nivala, Markus; Saljo, Roger; Rystedt, Hans; Kronqvist, Pauliina; Lehtinen, Erno

    2012-01-01

    New representational technologies, such as virtual microscopy, create new affordances for medical education. In the article, a study on the following two issues is reported: (a) How does collaborative use of virtual microscopy shape students' engagement with and learning from virtual slides of tissue specimen? (b) How do visual and conceptual cues…

  13. Getting the picture: A mixed-methods inquiry into how visual representations are interpreted by students, incorporated within textbooks, and integrated into middle-school science classrooms

    Science.gov (United States)

    Lee, Victor Raymond

    Modern-day middle school science textbooks are heavily populated with colorful images, technical diagrams, and other forms of visual representations. These representations are commonly perceived by educators to be useful aids to support student learning of unfamiliar scientific ideas. However, as the number of representations in science textbooks has seemingly increased in recent decades, concerns have been voiced that many of these current representations are actually undermining instructional goals; they may be introducing substantial conceptual and interpretive difficulties for students. To date, very little empirical work has been done to examine how the representations used in instructional materials have changed, and what influences these changes exert on student understanding. Furthermore, there has also been limited attention given to the extent to which current representational-use routines in science classrooms may mitigate or limit interpretive difficulties. This dissertation seeks to do three things: First, it examines the nature of the relationship between published representations and students' reasoning about the natural world. Second, it considers the ways in which representations are used in textbooks and how that has changed over a span of five decades. Third, this dissertation provides an in-depth look into how middle school science classrooms naturally use these visual representations and what kinds of support are being provided. With respect to the three goals of this dissertation, three pools of data were collected and analyzed for this study. First, interview data was collected in which 32 middle school students interpreted and reasoned with a set of more and less problematic published textbook representations. Quantitative analyses of the interview data suggest that, counter to what has been anticipated in the literature, there were no significant differences in the conceptualizations of students in the different groups. An accompanying

  14. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, amount of small errors, and amount of large errors. Children with visual impairment trained for six weeks, two times per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart). Children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537).

  15. How online learning modules can improve the representational fluency and conceptual understanding of university physics students

    Science.gov (United States)

    Hill, M.; Sharma, M. D.; Johnston, H.

    2015-07-01

    The use of online learning resources as core components of university science courses is increasing. Learning resources range from summaries, videos, and simulations, to question banks. Our study set out to develop, implement, and evaluate research-based online learning resources in the form of pre-lecture online learning modules (OLMs). The aim of this paper is to share our experiences with those using, or considering implementing, online learning resources. Our first task was to identify student learning issues in physics to base the learning resources on. One issue with substantial research is conceptual understanding, the other with comparatively less research is scientific representations (graphs, words, equations, and diagrams). We developed learning resources on both these issues and measured their impact. We created weekly OLMs which were delivered to first year physics students at The University of Sydney prior to their first lecture of the week. Students were randomly allocated to either a concepts stream or a representations stream of online modules. The programme was first implemented in 2013 to trial module content, gain experience and process logistical matters and repeated in 2014 with approximately 400 students. Two validated surveys, the Force and Motion Concept Evaluation (FMCE) and the Representational Fluency Survey (RFS) were used as pre-tests and post-tests to measure learning gains while surveys and interviews provided further insights. While both streams of OLMs produced similar positive learning gains on the FMCE, the representations-focussed OLMs produced higher gains on the RFS. Conclusions were triangulated with student responses which indicated that they have recognized the benefit of the OLMs for their learning of physics. Our study shows that carefully designed online resources used as pre-instruction can make a difference in students’ conceptual understanding and representational fluency in physics, as well as make them more aware

  16. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. How to Make a Good Animation: A Grounded Cognition Model of How Visual Representation Design Affects the Construction of Abstract Physics Knowledge

    Science.gov (United States)

    Chen, Zhongzhou; Gladding, Gary

    2014-01-01

    Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instructions are either created based on existing conventions or designed according to the instructor's intuition,…

  18. Teaching and Learning about Force with a Representational Focus: Pedagogy and Teacher Change

    Science.gov (United States)

    Hubber, Peter; Tytler, Russell; Haslam, Filocha

    2010-01-01

    A large body of research in the conceptual change tradition has shown the difficulty of learning fundamental science concepts, yet conceptual change schemes have failed to convincingly demonstrate improvements in supporting significant student learning. Recent work in cognitive science has challenged this purely conceptual view of learning, emphasising the role of language, and the importance of personal and contextual aspects of understanding science. The research described in this paper is designed around the notion that learning involves the recognition and development of students’ representational resources. In particular, we argue that conceptual difficulties with the concept of force are fundamentally representational in nature. This paper describes a classroom sequence in force that focuses on representations and their negotiation, and reports on the effectiveness of this perspective in guiding teaching, and in providing insight into student learning. Classroom sequences involving three teachers were videotaped using a combined focus on the teacher and groups of students. Video analysis software was used to capture the variety of representations used, and sequences of representational negotiation. Stimulated recall interviews were conducted with teachers and students. The paper reports on the nature of the pedagogies developed as part of this representational focus, its effectiveness in supporting student learning, and on the pedagogical and epistemological challenges negotiated by teachers in implementing this approach.

  19. Age-related declines of stability in visual perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun; Tan, Jieqing; Chen, Peng; Zhang, Jie; Helg, Lei

    2013-01-01

    as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL

  1. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    Science.gov (United States)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of a large number of electronic imaging diagnostic records (IDRs) year after year in a digital hospital, the IDR has become a main component of medical big data, bringing huge value to healthcare services, professionals, and administration. But the large volume of IDRs in a hospital also brings new challenges to healthcare professionals and services, as there may be too many IDRs for each patient for a doctor to review them all in a limited appointment time slot. In this presentation, we present an innovative method that uses an anatomical 3D structure object to visually represent and index the historical medical status of each patient, called Visual Patient (VP) in this presentation, based on long-term archived electronic IDRs in a hospital, so that a doctor can quickly learn the historical medical status of the patient and quickly locate and retrieve the IDRs he or she is interested in within a limited appointment time slot. Method: The engineering implementation of VP was to build a 3D visual representation and indexing system called the VP system (VPS), comprising components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D visual rendering engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body was created for each patient and used to visually index the abstract information of each IDR for that patient; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the VIO for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instances to doctors. We set up two evaluation scenarios in a hospital radiology department to evaluate whether

  2. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  3. Designing electronic module based on learning content development system in fostering students’ multi representation skills

    Science.gov (United States)

    Resita, I.; Ertikanto, C.

    2018-05-01

    This study aims to develop an electronic module design based on the Learning Content Development System (LCDS) to foster students' multi-representation skills in physics subject material. This study uses a research and development method for the product design. This study involves 90 students and 6 physics teachers who were randomly chosen from 3 different senior high schools in Lampung Province. The data were collected by using questionnaires and analyzed by using a quantitative descriptive method. Based on the data, 95% of the students use only one form of representation in solving physics problems. The representation students tend to use is symbolic representation. Students are considered to understand a concept of physics if they are able to translate from one form of representation to the other forms. The product design of the LCDS-based electronic module presents text, image, symbolic, video, and animation representations.

  4. Dissociable loss of the representations in visual short-term memory.

    Science.gov (United States)

    Li, Jie

    2016-01-01

    The present study investigated the manner in which information in visual short-term memory (VSTM) is lost. Participants memorized four items, one of which was later given higher priority by a retro-cue. Participants were then required to detect a possible change, either large or small, that occurred to one of the items. The results showed that detection performance for small changes to the uncued items was poorer than for the cued item, yet large changes to any of the four memory items could be detected perfectly, indicating that the uncued representations lost some detailed information yet still retained some basic features in VSTM. The present study suggests that after being encoded into VSTM, information is not lost in an object-based manner; rather, the features of an item remain dissociable, so that they can be lost separately.

  5. Fast and automatic activation of an abstract representation of money in the human ventral visual pathway.

    Directory of Open Access Journals (Sweden)

    Catherine Tallon-Baudry

    Full Text Available Money, when used as an incentive, activates the same neural circuits as rewards associated with physiological needs. However, unlike physiological rewards, monetary stimuli are cultural artifacts: how are monetary stimuli identified in the first place? How and when does the brain identify a valid coin, i.e. a disc of metal that is, by social agreement, endowed with monetary properties? We took advantage of the changes in the Euro area in 2002 to compare neural responses to valid coins (Euros, Australian Dollars) with neural responses to invalid coins that have lost all monetary properties (French Francs, Finnish Marks). We show, in magneto-encephalographic recordings, that the ventral visual pathway automatically distinguishes between valid and invalid coins, within only ∼150 ms. This automatic categorization operates as well on coins subjects were familiar with as on unfamiliar coins. No difference between neural responses to scrambled controls could be detected. These results could suggest the existence of a generic, all-purpose neural representation of money that is independent of experience. This finding is reminiscent of a central assumption in economics, money fungibility, or the fact that a unit of money is substitutable for another. From a neural point of view, our findings may indicate that the ventral visual pathway, a system previously thought to analyze visual features such as shape or color and to be influenced by daily experience, may also be able to use conceptual attributes such as monetary validity to categorize familiar as well as unfamiliar visual objects. The symbolic abilities of the posterior fusiform region suggested here could constitute an efficient neural substrate for dealing with culturally defined symbols, independently of experience, which probably fostered money's cultural emergence and success.

  6. The aftermath of memory retrieval for recycling visual working memory representations.

    Science.gov (United States)

    Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok

    2017-07-01

    We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)-namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.
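    The probabilistic mixture model used in this literature to separate stochastic loss of items from degraded resolution is typically a mixture of uniform guesses and von Mises responses centred on the target. Below is a minimal sketch of fitting such a model by maximum-likelihood grid search on simulated data; this is an illustrative assumption about the model family, not the authors' exact estimator:

```python
import numpy as np

def mixture_loglik(errors, g, kappa):
    """Log-likelihood of recall errors (radians) under a guess/memory
    mixture: uniform guesses with probability g, von Mises responses
    centred on the target otherwise."""
    vm = np.exp(kappa * np.cos(errors)) / (2.0 * np.pi * np.i0(kappa))
    return np.log(g / (2.0 * np.pi) + (1.0 - g) * vm).sum()

def grid_fit(errors):
    """Coarse maximum-likelihood grid search over (g, kappa)."""
    best = (-np.inf, None, None)
    for g in np.linspace(0.01, 0.99, 50):
        for kappa in np.linspace(0.5, 30.0, 60):
            ll = mixture_loglik(errors, g, kappa)
            if ll > best[0]:
                best = (ll, g, kappa)
    return best[1], best[2]

# simulated errors: 30% random guesses, remembered items with kappa = 8
rng = np.random.default_rng(2)
n = 2000
is_guess = rng.random(n) < 0.3
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),
                  rng.vonmises(0.0, 8.0, n))
g_hat, kappa_hat = grid_fit(errors)
```

A rise in the fitted guess rate g (with kappa roughly unchanged) corresponds to the "stochastic loss" interpretation the study reports, whereas a drop in kappa alone would indicate degraded resolution.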

  7. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual, or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  8. Object representations in visual working memory change according to the task context.

    Science.gov (United States)

    Balaban, Halely; Luria, Roy

    2016-08-01

    This study investigated whether an item's representation in visual working memory (VWM) can be updated according to changes in the global task context. We used a modified change detection paradigm, in which the items moved before the retention interval. In all of the experiments, we presented identical color-color conjunction items that were arranged to provide a common fate Gestalt grouping cue during their movement. Task context was manipulated by adding a condition highlighting either the integrated interpretation of the conjunction items or their individuated interpretation. We monitored the contralateral delay activity (CDA) as an online marker of VWM. Experiment 1 employed only a minimal global context; the conjunction items were integrated during their movement, but then were partially individuated, at a late stage of the retention interval. The same conjunction items were perfectly integrated in an integration context (Experiment 2). An individuation context successfully produced strong individuation, already during the movement, overriding Gestalt grouping cues (Experiment 3). In Experiment 4, a short priming of the individuation context managed to individuate the conjunction items immediately after the Gestalt cue was no longer available. Thus, the representations of identical items changed according to the task context, suggesting that VWM interprets incoming input according to global factors which can override perceptual cues. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning

    Science.gov (United States)

    Carroll, Fiona; Kop, Rita

    2016-01-01

    The visual is a dominant mode of information retrieval and understanding however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak in relation to its predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…

  10. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  11. Teaching and Learning Logic Programming in Virtual Worlds Using Interactive Microworld Representations

    Science.gov (United States)

    Vosinakis, Spyros; Anastassakis, George; Koutsabasis, Panayiotis

    2018-01-01

    Logic Programming (LP) follows the declarative programming paradigm, which novice students often find hard to grasp. The limited availability of visual teaching aids for LP can lead to low motivation for learning. In this paper, we present a platform for teaching and learning Prolog in Virtual Worlds, which enables the visual interpretation and…

  12. Learning from Your Network of Friends: A Trajectory Representation Learning Model Based on Online Social Ties

    KAUST Repository

    Alharbi, Basma Mohammed; Zhang, Xiangliang

    2017-01-01

    Location-Based Social Networks (LBSNs) capture individuals' whereabouts for a large portion of the population. To utilize this data for user (location)-similarity based tasks, one must map the raw data into a low-dimensional uniform feature space. However, due to the nature of LBSNs, many users have sparse and incomplete check-ins. In this work, we propose to overcome this issue by leveraging the network of friends when learning the new feature space. We first analyze the impact of friends on individuals' mobility, and show that individuals' trajectories are correlated with those of their friends and friends of friends (2-hop friends) in an online setting. Based on our observation, we propose a mixed-membership model that infers global mobility patterns from users' check-ins and their network of friends, without increasing the model's complexity. Our proposed model infers global patterns and learns new representations for both users and locations simultaneously. We evaluate the inferred patterns and compare the quality of the new user representation against baseline methods on a social link prediction problem.

  14. Learning Combinations of Multiple Feature Representations for Music Emotion Prediction

    DEFF Research Database (Denmark)

    Madsen, Jens; Jensen, Bjørn Sand; Larsen, Jan

    2015-01-01

    Music consists of several structures and patterns evolving through time which greatly influences the human decoding of higher-level cognitive aspects of music like the emotions expressed in music. For tasks, such as genre, tag and emotion recognition, these structures have often been identified...... and used as individual and non-temporal features and representations. In this work, we address the hypothesis whether using multiple temporal and non-temporal representations of different features is beneficial for modeling music structure with the aim to predict the emotions expressed in music. We test...

  15. Searching for Variables and Models to Investigate Mediators of Learning from Multiple Representations

    Science.gov (United States)

    Rau, Martina A.; Scheines, Richard

    2012-01-01

    Although learning from multiple representations has been shown to be effective in a variety of domains, little is known about the mechanisms by which it occurs. We analyzed log data on error-rate, hint-use, and time-spent obtained from two experiments with a Cognitive Tutor for fractions. The goal of the experiments was to compare learning from…

  16. Errors of Students Learning with React Strategy in Solving the Problems of Mathematical Representation Ability

    Science.gov (United States)

    Sari, Delsika Pramata; Darhim; Rosjanuardi, Rizky

    2018-01-01

    The purpose of this study was to investigate the errors experienced by students learning with REACT strategy and traditional learning in solving problems of mathematical representation ability. This study used quasi experimental pattern with static-group comparison design. The subjects of this study were 47 eighth grade students of junior high…

  17. Intelligent Fault Diagnosis of Rotary Machinery Based on Unsupervised Multiscale Representation Learning

    Science.gov (United States)

    Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun

    2017-11-01

    The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor, and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a recently developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, and achieves better performance with higher accuracy and stability compared to traditional approaches.
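    The two core steps of such a pipeline, coarse-graining a raw signal into multiple scales and evaluating the sparse filtering objective on each, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the window length, number of learned features, and random (untrained) projection matrix are assumptions for demonstration only.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Sparse filtering objective (Ngiam et al., 2011): soft-absolute
    features, per-feature then per-example L2 normalization, and a
    summed L1 penalty on the normalized feature matrix."""
    F = np.sqrt((W @ X) ** 2 + eps)                    # soft absolute value
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # normalize each feature row
    F = F / np.linalg.norm(F, axis=0, keepdims=True)   # normalize each example column
    return F.sum()

def coarse_grain(x, scales=(1, 2, 4)):
    """Multiscale (coarse-grained) copies of a 1-D signal: means over
    non-overlapping windows at each scale."""
    return [x[: (len(x) // s) * s].reshape(-1, s).mean(axis=1) for s in scales]

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)           # stand-in for a raw vibration signal

objs = []
for sig in coarse_grain(signal):
    win = 64                                 # segment length used as one "example"
    n = len(sig) // win
    X = sig[: n * win].reshape(n, win).T     # shape: (win, n_examples)
    W = rng.standard_normal((16, win))       # untrained projection, for illustration
    objs.append(float(sparse_filtering_objective(W, X)))
```

In the actual method, W at each scale would be optimized to minimize this objective, and the resulting features concatenated across scales before classification.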

  18. "Triangulation": An Expression for Stimulating Metacognitive Reflection Regarding the Use of "Triplet" Representations for Chemistry Learning

    Science.gov (United States)

    Thomas, Gregory P.

    2017-01-01

    Concerns persist regarding high school students' chemistry learning. Learning chemistry is challenging because of chemistry's innate complexity and the need for students to construct associations between different, yet related representations of matter and its changes. Students should be taught to reason about and consider chemical phenomena using…

  19. Learning modulation of odor representations: new findings from Arc-indexed networks

    Directory of Open Access Journals (Sweden)

    Qi eYuan

    2014-12-01

    Full Text Available We first review our understanding of odor representations in the rodent olfactory bulb and anterior piriform cortex. We then consider learning-induced representation changes. Finally we describe the perspective on network representations gained from examining Arc-indexed odor networks of awake rats. Arc-indexed networks are sparse and distributed, consistent with current views. However, Arc provides representations of repeated odors. Arc-indexed repeated odor representations are quite variable. Sparse representations are assumed to be compact and reliable memory codes; Arc suggests this is not necessarily the case. The variability seen is consistent with electrophysiology in awake animals and may reflect top-down cortical modulation of context. Arc-indexing shows that distinct odors share larger than predicted neuron pools. These may be low-threshold neuronal subsets. Learning's effect on Arc-indexed representations is to increase the stable or overlapping component of rewarded odor representations. This component can decrease for similar odors when their discrimination is rewarded. The learning effects seen are supported by electrophysiology, but mechanisms remain to be elucidated.

  20. Feature selection for domain knowledge representation through multitask learning

    CSIR Research Space (South Africa)

    Rosman, Benjamin S

    2014-10-01

    Full Text Available represent stimuli of interest, and rich feature sets which increase the dimensionality of the space and thus the difficulty of the learning problem. We focus on a multitask reinforcement learning setting, where the agent is learning domain knowledge...

  1. Contralateral delay activity provides a neural measure of the number of representations in visual working memory.

    Science.gov (United States)

    Ikkai, Akiko; McCollough, Andrew W; Vogel, Edward K

    2010-04-01

    Visual working memory (VWM) helps to temporarily represent information from the visual environment and is severely limited in capacity. Recent work has linked various forms of neural activity to the ongoing representations in VWM. One piece of evidence comes from human event-related potential studies, which find a sustained contralateral negativity during the retention period of VWM tasks. This contralateral delay activity (CDA) has previously been shown to increase in amplitude as the number of memory items increases, up to the individual's working memory capacity limit. However, significant alternative hypotheses remain regarding the true nature of this activity. Here we test whether the CDA is modulated by the perceptual requirements of the memory items as well as whether it is determined by the number of locations that are being attended within the display. Our results provide evidence against these two alternative accounts and instead strongly support the interpretation that this activity reflects the current number of objects that are being represented in VWM.

  2. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Full Text Available Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
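    A GP regression model with a linear plus squared-exponential covariance, the form this abstract reports as the best fit, can be sketched as follows. The toy "drawing" signal and the hyperparameter values are illustrative assumptions, not the study's data or fitted values:

```python
import numpy as np

def kernel(a, b, var_lin=1.0, var_se=1.0, length=0.3):
    """Linear plus squared-exponential covariance between 1-D inputs;
    hyperparameter values here are illustrative, not fitted."""
    lin = var_lin * np.outer(a, b)
    se = var_se * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    return lin + se

# toy 1-D "drawing" signal: pen position over normalized time
t = np.linspace(0.0, 1.0, 20)
y = t + 0.5 * np.sin(2.0 * np.pi * t)    # linear trend plus a smooth wiggle

noise = 1e-4                             # small observation-noise jitter
K = kernel(t, t) + noise * np.eye(len(t))
alpha = np.linalg.solve(K, y)            # weights for the posterior mean

t_star = np.linspace(0.0, 1.0, 50)
mean = kernel(t_star, t) @ alpha         # GP posterior mean at new inputs
pred_train = kernel(t, t) @ alpha        # in-sample fit for a sanity check
```

The additive kernel mirrors the finding: the linear term captures a global trend in the drawn response, while the squared-exponential term models smooth local variation around it.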

  3. Active-duty military service members' visual representations of PTSD and TBI in masks.

    Science.gov (United States)

    Walker, Melissa S; Kaimal, Girija; Gonzaga, Adele M L; Myers-Coffman, Katherine A; DeGraba, Thomas J

    2017-12-01

    Active-duty military service members have a significant risk of sustaining physical and psychological trauma resulting in traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD). Within an interdisciplinary treatment approach at the National Intrepid Center of Excellence, service members participated in mask making during art therapy sessions. This study presents an analysis of the mask-making experiences of service members (n = 370) with persistent symptoms from combat- and mission-related TBI, PTSD, and other concurrent mood issues. Data sources included mask images and therapist notes collected over a five-year period. The data were coded and analyzed using grounded theory methods. Findings indicated that mask making offered visual representations of the self related to individual personhood, relationships, community, and society. Imagery themes referenced the injury, relational supports/losses, identity transitions/questions, cultural metaphors, existential reflections, and conflicted sense of self. These visual insights provided an increased understanding of the experiences of service members, facilitating their recovery.

  4. A Visualization of Evolving Clinical Sentiment Using Vector Representations of Clinical Notes.

    Science.gov (United States)

    Ghassemi, Mohammad M; Mark, Roger G; Nemati, Shamim

    2015-09-01

    Our objective in this paper was to visualize the evolution of clinical language and sentiment with respect to several common population-level categories including: time in the hospital, age, mortality, gender and race. Our analysis utilized seven years of unstructured free text notes from the Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) database. The text data was partitioned by category and used to generate several high dimensional vector space representations. We generated visualizations of the vector spaces using t-distributed Stochastic Neighbor Embedding (t-SNE) and Principal Component Analysis (PCA). We also investigated representative words from clusters in the vector space. Lastly, we inferred the general sentiment of the clinical notes toward each parameter by gauging the average distance between positive and negative keywords and all other terms in the space. We found intriguing differences in the sentiment of clinical notes over time, outcome, and demographic features. We noted a decrease in the homogeneity and complexity of clusters over time for patients with poor outcomes. We also found greater positive sentiment for females, unmarried patients, and patients of African ethnicity.
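    The projection step of such a pipeline can be sketched in a few lines of numpy; plain PCA is shown here (the paper also used t-SNE), and the "note vectors" are synthetic stand-ins for the learned embeddings.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the top two principal components (centred PCA via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T

# Toy stand-in for the note vectors: two partitions drawn around different centroids.
rng = np.random.default_rng(0)
notes_a = rng.normal(0.0, 0.1, size=(50, 20))   # e.g. one demographic partition
notes_b = rng.normal(1.0, 0.1, size=(50, 20))   # another partition
coords = pca_2d(np.vstack([notes_a, notes_b]))  # 2-D coordinates for plotting
```

    When the partitions occupy distinct regions of the vector space, the first principal component separates them, which is the kind of structure the visualizations in the paper are meant to expose.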

  5. A blended learning concept for an engineering course in the field of color representation and display technologies

    Science.gov (United States)

    Vauderwange, Oliver; Wozniak, Peter; Javahiraly, Nicolas; Curticapean, Dan

    2016-09-01

    The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial for the success of the teaching scenario. The main topic of the paper is a mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media. Blended learning describes the didactic alternation between attendance periods and online periods. The e-learning environment for the online period is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations and augmented reality (AR). Visualizations are effective tools to impart learning contents with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities are stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies. This engages and motivates the students and deepens their knowledge. The authors will present their experience with the implemented blended learning scenario in this field of optics and photonics. All aspects of the learning environment will be introduced.

  6. Visual Interactive Syntax Learning: A Case of Blended Learning

    Directory of Open Access Journals (Sweden)

    Jane Vinther

    2008-11-01

    Full Text Available The integration of the computer as a tool in language learning at the tertiary level brings several opportunities for adapting to individual student needs, but lack of appropriate material suited for the level of student proficiency in Scandinavia has meant that university teachers have found it difficult to blend the traditional approach with computer tools. This article will present one programme (VISL) which has been developed with the purpose of supporting and enhancing traditional instruction. Visual Interactive Syntax Learning (VISL) is a programme which is basically a parser put to pedagogical use. The pedagogical purpose is to teach English syntax to university students at an advanced level. The programme allows the students to build sophisticated tree diagrams of English sentences with provisions for both functions and forms (simple or complex, incl. subclauses). VISL was initiated as an attempt to facilitate the metalinguistic learning process. This article will present VISL as a pedagogical tool and tries to argue the case for the benefits of blending traditional lecturing with modern technology while pointing out some of the issues involved.

  7. Impairments in part-whole representations of objects in two cases of integrative visual agnosia.

    Science.gov (United States)

    Behrmann, Marlene; Williams, Pepper

    2007-10-01

    How complex multipart visual objects are represented perceptually remains a subject of ongoing investigation. One source of evidence that has been used to shed light on this issue comes from the study of individuals who fail to integrate disparate parts of visual objects. This study reports a series of experiments that examine the ability of two such patients with this form of agnosia (integrative agnosia; IA), S.M. and C.R., to discriminate and categorize exemplars of a rich set of novel objects, "Fribbles", whose visual similarity (number of shared parts) and category membership (shared overall shape) can be manipulated. Both patients performed increasingly poorly as the number of parts required for differentiating one Fribble from another increased. Both patients were also impaired at determining when two Fribbles belonged in the same category, a process that relies on abstracting spatial relations between parts. C.R., the less impaired of the two, but not S.M., eventually learned to categorize the Fribbles but required substantially more training than normal perceivers. S.M.'s failure is not attributable to a problem in learning to use a label for identification nor is it obviously attributable to a visual memory deficit. Rather, the findings indicate that, although the patients may be able to represent a small number of parts independently, in order to represent multipart images, the parts need to be integrated or chunked into a coherent whole. It is this integrative process that is impaired in IA and appears to play a critical role in the normal object recognition of complex images.

  8. Learning Visual Design through Hypermedia: Pathways to Visual Literacy.

    Science.gov (United States)

    Lockee, Barbara; Hergert, Tom

    The interactive multimedia application described here attempts to provide learners and teachers with a common frame of reference for communicating about visual media. The system is based on a list of concepts related to composition, and illustrates those concepts with photographs, paintings, graphic designs, and motion picture scenes. The ability…

  9. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    Science.gov (United States)

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.

  10. When memory is not enough: Electrophysiological evidence for goal-dependent use of working memory representations in guiding visual attention

    Science.gov (United States)

    Carlisle, Nancy B.; Woodman, Geoffrey F.

    2014-01-01

    Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants’ event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1–3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants’ goal involved attending to memory-matching items, these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796

  11. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  12. An Intrinsic Value System for Developing Multiple Invariant Representations with Incremental Slowness Learning

    Directory of Open Access Journals (Sweden)

    Matthew David Luciw

    2013-05-01

    Full Text Available Curiosity Driven Modular Incremental Slow Feature Analysis (CD-MISFA) is a recently introduced model of intrinsically-motivated invariance learning, which shows how curiosity enables the orderly formation of multiple stable sensory representations, through which the agent can simplify its complex sensory input. Here, we first discuss the computational properties of the CD-MISFA model itself, followed by a discussion of neurophysiological analogs fulfilling similar functional roles. CD-MISFA combines (1) unsupervised representation learning through the slowness principle, (2) generation of an intrinsic reward signal through the learning progress of the developing features, and (3) balancing of exploration and exploitation in order to maximize learning progress and quickly learn multiple feature sets for perceptual simplification. Experimental results on synthetic observations and on the iCub robot show that the intrinsic value system is an essential component of representation learning; further, the model explores such that the representations are typically learned in order from least to most costly, as predicted by the theory of Artificial Curiosity.
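    The slowness principle at the heart of this model can be illustrated with a minimal linear Slow Feature Analysis; this is a generic numpy sketch, not the CD-MISFA implementation, and the mixed signals below are synthetic.

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Linear Slow Feature Analysis: whiten the signal, then take the
    unit-variance directions along which the temporal derivative has the
    smallest variance (the slowness principle)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening transform
    Z = Xc @ W
    dZ = np.diff(Z, axis=0)                   # temporal derivative (finite differences)
    _, V = np.linalg.eigh(dZ.T @ dZ / len(dZ))  # eigenvalues ascending
    return Z @ V[:, :n_components]            # slowest feature(s) first

# Demo: two observed channels mixing a slow and a fast source.
t = np.linspace(0.0, 4.0 * np.pi, 500)
slow, fast = np.sin(t), np.sin(29.0 * t)
X = np.column_stack([slow + fast, slow - fast])
recovered = linear_sfa(X)[:, 0]
```

    Up to sign, the slowest extracted feature matches the slowly varying source, which is the kind of stable representation the curiosity-driven modules in the model compete to learn.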

  13. VStops: A Thinking Strategy and Visual Representation Approach in Mathematical Word Problem Solving toward Enhancing STEM Literacy

    Science.gov (United States)

    Abdullah, Nasarudin; Halim, Lilia; Zakaria, Effandi

    2014-01-01

    This study aimed to determine the impact of strategic thinking and visual representation approaches (VStops) on the achievement, conceptual knowledge, metacognitive awareness, awareness of problem-solving strategies, and student attitudes toward mathematical word problem solving among primary school students. The experimental group (N = 96)…

  14. A Review of the Effects of Visual-Spatial Representations and Heuristics on Word Problem Solving in Middle School Mathematics

    Science.gov (United States)

    Kribbs, Elizabeth E.; Rogowsky, Beth A.

    2016-01-01

    Mathematics word-problems continue to be an insurmountable challenge for many middle school students. Educators have used pictorial and schematic illustrations within the classroom to help students visualize these problems. However, the data shows that pictorial representations can be more harmful than helpful in that they only display objects or…

  15. Visual Representations of Microcosm in Textbooks of Chemistry: Constructing a Systemic Network for Their Main Conceptual Framework

    Science.gov (United States)

    Papageorgiou, George; Amariotakis, Vasilios; Spiliotopoulou, Vasiliki

    2017-01-01

    The main objective of this work is to analyse the visual representations (VRs) of the microcosm depicted in nine Greek secondary chemistry school textbooks of the last three decades in order to construct a systemic network for their main conceptual framework and to evaluate the contribution of each one of the resulting categories to the network.…

  16. Associating Animations with Concrete Models to Enhance Students' Comprehension of Different Visual Representations in Organic Chemistry

    Science.gov (United States)

    Al-Balushi, Sulaiman M.; Al-Hajri, Sheikha H.

    2014-01-01

    The purpose of the current study is to explore the impact of associating animations with concrete models on eleventh-grade students' comprehension of different visual representations in organic chemistry. The study used a post-test control group quasi-experimental design. The experimental group (N = 28) used concrete models, submicroscopic…

  17. An analysis of science content and representations in introductory college physics textbooks and multimodal learning resources

    Science.gov (United States)

    Donnelly, Suzanne M.

    This study features a comparative descriptive analysis of the physics content and representations surrounding the first law of thermodynamics as presented in four widely used introductory college physics textbooks representing each of four physics textbook categories (calculus-based, algebra/trigonometry-based, conceptual, and technical/applied). Introducing and employing a newly developed theoretical framework, multimodal generative learning theory (MGLT), an analysis of the multimodal characteristics of textbook and multimedia representations of physics principles was conducted. The modal affordances of textbook representations were identified, characterized, and compared across the four physics textbook categories in the context of their support of problem-solving. Keywords: college science, science textbooks, multimodal learning theory, thermodynamics, representations

  18. Concrete and abstract visualizations in history learning tasks

    NARCIS (Netherlands)

    Prangsma, Maaike; Van Boxtel, Carla; Kanselaar, Gellof; Kirschner, Paul A.

    2010-01-01

    Prangsma, M. E., Van Boxtel, C. A. M., Kanselaar, G., & Kirschner, P. A. (2009). Concrete and abstract visualizations in history learning tasks. British Journal of Educational Psychology, 79, 371-387.

  19. Is a picture worth a thousand words? The interaction of visual display and attribute representation in attenuating framing bias}

    Directory of Open Access Journals (Sweden)

    Eyal Gamliel

    2013-07-01

    Full Text Available The attribute framing bias is a well-established phenomenon, in which an object or an event is evaluated more favorably when presented in a positive frame such as "the half-full glass" than when presented in the complementary negative framing. Given that previous research showed that visual aids can attenuate this bias, the current research explores the factors underlying the attenuating effect of visual aids. In a series of three experiments, we examined how attribute framing bias is affected by two factors: (a) the display mode, verbal versus visual; and (b) the representation of the critical attribute, whether one outcome, either the positive or the negative, is represented or both outcomes are represented. In Experiment 1, a marginal attenuation of attribute framing bias was obtained when a verbal description of either positive or negative information was accompanied by a corresponding visual representation. In Experiment 2, similar marginal attenuation was obtained when both positive and negative outcomes were verbally represented. In Experiment 3, where the verbal description represented both positive and negative outcomes, significant attenuation was obtained when it was accompanied by a visual display that represented a single outcome, and complete attenuation, totally eliminating the framing bias, was obtained when it was accompanied by a visual display that represented both outcomes. Thus, our findings showed that an interaction between the display mode and the representation of the critical attribute attenuated the framing bias. Theoretical and practical implications of the interaction between verbal description, visual aids, and representation of the critical attribute are discussed, and future research is suggested.

  20. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
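    The parts-based factorization at the core of such a detector can be sketched with plain multiplicative-update NMF; the manifold-regularization term and the color handling of the actual method are omitted here, and the data are synthetic.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Plain multiplicative-update NMF (Lee-Seung): V ~ W @ H with W, H >= 0.
    Each column of W acts as a nonnegative "part" of the input, which is what
    makes the representation parts-based."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    eps = 1e-9                      # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Demo: factor a synthetic nonnegative matrix whose true rank is 3.
rng = np.random.default_rng(1)
V_true = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(V_true, rank=3)
```

    The multiplicative updates keep both factors nonnegative by construction; the manifold-regularized variant in the paper would add a graph-Laplacian penalty to the update for H.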

  1. Tool-Based Curricula and Visual Learning

    Directory of Open Access Journals (Sweden)

    Dragica Vasileska

    2013-12-01

    Full Text Available In the last twenty years nanotechnology has revolutionized the world of information theory, computers and other important disciplines, such as medicine, where it has contributed significantly to the creation of more sophisticated diagnostic tools. Therefore, it is important for people working in nanotechnology to better understand basic concepts in order to be more creative and productive. To further foster the progress of nanotechnology in the USA, the National Science Foundation has created the Network for Computational Nanotechnology (NCN), and the dissemination of all the information from member and non-member participants of the NCN is enabled by the community website www.nanoHUB.org. nanoHUB's signature service is online simulation, which enables the operation of sophisticated research and educational simulation engines with a common browser. No software installation or local computing power is needed. The simulation tools as well as nano-concepts are augmented by educational materials, assignments, and tool-based curricula, which are assemblies of tools that help students excel in a particular area. As elaborated later in the text, it is the visual mode of learning that we are exploiting in achieving faster and better results with students that go through simulation tool-based curricula. There are several tool-based curricula already developed on the nanoHUB and undergoing further development, out of which five are directly related to nanoelectronics. They are: ABACUS, a device simulation module; ACUTE, a Computational Electronics module; ANTSY, a bending toolkit; and AQME, a quantum mechanics module. The methodology behind tool-based curricula is discussed in detail. Then, the current status of each module is presented, including user statistics and student learning indicatives. A particular simulation tool is explored further to demonstrate the ease with which students can grasp information. Representative of ABACUS is PN-Junction Lab; representative of AQME is PCPBT tool; and

  2. Frontal and parietal cortical interactions with distributed visual representations during selective attention and action selection.

    Science.gov (United States)

    Nelissen, Natalie; Stokes, Mark; Nobre, Anna C; Rushworth, Matthew F S

    2013-10-16

    Using multivoxel pattern analysis (MVPA), we studied how distributed visual representations in human occipitotemporal cortex are modulated by attention and link their modulation to concurrent activity in frontal and parietal cortex. We detected similar occipitotemporal patterns during a simple visuoperceptual task and an attention-to-working-memory task in which one or two stimuli were cued before being presented among other pictures. Pattern strength varied from highest to lowest when the stimulus was the exclusive focus of attention, a conjoint focus, and when it was potentially distracting. Although qualitatively similar effects were seen inside regions relatively specialized for the stimulus category and outside, the former were quantitatively stronger. By regressing occipitotemporal pattern strength against activity elsewhere in the brain, we identified frontal and parietal areas exerting top-down control over, or reading information out from, distributed patterns in occipitotemporal cortex. Their interactions with patterns inside regions relatively specialized for that stimulus category were higher than those with patterns outside those regions and varied in strength as a function of the attentional condition. One area, the frontal operculum, was distinguished by selectively interacting with occipitotemporal patterns only when they were the focus of attention. There was no evidence that any frontal or parietal area actively inhibited occipitotemporal representations even when they should be ignored and were suppressed. Using MVPA to decode information within these frontal and parietal areas showed that they contained information about attentional context and/or readout information from occipitotemporal cortex to guide behavior but that frontal regions lacked information about category identity.

  4. Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning

    Science.gov (United States)

    Bartolucci, Marco; Smith, Andrew T.

    2011-01-01

    Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…

  5. Images in Language: Metaphors and Metamorphoses. Visual Learning. Volume 1

    Science.gov (United States)

    Benedek, Andras, Ed.; Nyiri, Kristof, Ed.

    2011-01-01

    Learning and teaching are faced with radically new challenges in today's rapidly changing world and its deeply transformed communicational environment. We are living in an era of images. Contemporary visual technology--film, video, interactive digital media--is promoting but also demanding a new approach to education: the age of visual learning…

  6. Speckle Reduction on Ultrasound Liver Images Based on a Sparse Representation over a Learned Dictionary

    Directory of Open Access Journals (Sweden)

    Mohamed Yaseen Jabarulla

    2018-05-01

    Full Text Available Ultrasound images are corrupted with multiplicative noise known as speckle, which reduces the effectiveness of image processing and hampers interpretation. This paper proposes a multiplicative speckle suppression technique for ultrasound liver images, based on a signal reconstruction model known as sparse representation (SR) over a learned dictionary. In the proposed technique, the non-uniform multiplicative noise is first converted into additive noise using an enhanced homomorphic filter. This is followed by pixel-based total variation (TV) regularization and patch-based SR over a dictionary trained using K-singular value decomposition (KSVD). Finally, the split Bregman algorithm is used to solve the optimization problem and estimate the de-speckled image. In simulations performed on both synthetic and clinical ultrasound images, the proposed technique achieved peak signal-to-noise ratios of 35.537 dB for the dictionary trained on noisy image patches and 35.033 dB for the dictionary trained on a set of reference ultrasound image patches. Further, the evaluation results show that the proposed method performs better than other state-of-the-art denoising algorithms in terms of both peak signal-to-noise ratio and subjective visual quality assessment.
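    The homomorphic-plus-TV front end of such a pipeline can be sketched as follows; a simple gradient-descent TV solver stands in for the split Bregman algorithm, the KSVD sparse-coding stage is omitted, and the phantom image is synthetic.

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.1, iters=200, eps=1e-6):
    """Gradient descent on a smoothed ROF energy 0.5*||u-f||^2 + lam*TV(u).
    A crude stand-in for a split Bregman TV solver; boundary handling is simplistic."""
    u = f.copy()
    for _ in range(iters):
        gx = np.diff(u, axis=1, append=u[:, -1:])       # forward differences
        gy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - f) - lam * div)
    return u

def despeckle(img, lam=0.1):
    """Homomorphic step: multiplicative speckle becomes additive in the log
    domain, where TV denoising is applied before mapping back."""
    return np.exp(tv_denoise(np.log(img + 1e-6), lam=lam))

# Demo: a piecewise-constant phantom corrupted by multiplicative gamma speckle.
rng = np.random.default_rng(2)
clean = np.ones((32, 32))
clean[:, 16:] = 2.0
noisy = clean * rng.gamma(20.0, 1.0 / 20.0, size=clean.shape)
restored = despeckle(noisy)
```

    The log transform is what turns the multiplicative noise model into the additive one the TV regularizer assumes; the paper's SR stage would then denoise patches of the log-domain image over the KSVD dictionary.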

  7. [Associative Learning between Orientation and Color in Early Visual Areas].

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2017-08-01

    Associative learning is an essential neural phenomenon in which the contingency between different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed an associative decoded functional magnetic resonance imaging (fMRI) neurofeedback (A-DecNef) method to determine whether associative learning of color and orientation can be induced in early visual areas. During three days of training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red), mostly in early visual areas, while a vertical achromatic grating was simultaneously physically presented to participants. Consequently, participants perceived "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was most likely induced in early visual areas. This newly extended technique that induces associative learning may be used as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.

  8. Visual teaching and learning in the fields of engineering

    Directory of Open Access Journals (Sweden)

    Kyvete S. Shatri

    2015-11-01

    Full Text Available Engineering education today is faced with numerous demands that are closely connected with a globalized economy. One of these requirements is to produce the engineers of the future, who are characterized by strong analytical skills, creativity, ingenuity, professionalism, intercultural communication and leadership. To achieve this, effective teaching methods should be used to facilitate and enhance students' learning and their performance in general, making them able to cope with the market demands of a globalized economy. One of these methods is visualization, a very important method that increases student learning. A visual approach in science and in engineering also improves communication and critical thinking, and provides an analytical approach to various problems. Therefore, this research aims to investigate the effect of the use of visualization in the process of teaching and learning in engineering fields and to encourage teachers and students to use visual methods for teaching and learning. The results of this research highlight the positive effect that the use of visualization has on the learning process of students and their overall performance. In addition, innovative teaching methods contribute to improving the situation. Visualization motivates students to learn, making them more cooperative and developing their communication skills.

  9. Representation and Integration: Combining Robot Control, High-Level Planning, and Action Learning

    DEFF Research Database (Denmark)

    Petrick, Ronald; Kraft, Dirk; Mourao, Kira

    We describe an approach to integrated robot control, high-level planning, and action effect learning that attempts to overcome the representational difficulties that exist between these diverse areas. Our approach combines ideas from robot vision, knowledge-level planning, and connectionist machine learning, and focuses on the representational needs of these components. We also make use of a simple representational unit called an instantiated state transition fragment (ISTF) and a related structure called an object-action complex (OAC). The goal of this work is a general approach for inducing high-level action specifications, suitable for planning, from a robot's interactions with the world. We present a detailed overview of our approach and show how it supports the learning of certain aspects of a high-level representation from low-level world state information.

  10. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    Science.gov (United States)

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors.
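
Multivariate pattern analysis of the kind used above is, at its core, a classifier trained on voxel activity patterns. A minimal sketch with a nearest-centroid rule on toy "voxel" vectors follows; the data and the classifier choice are assumptions of this illustration, not the authors' actual pipeline.

```python
def centroid(patterns):
    """Mean pattern across trials (one value per voxel)."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def classify(pattern, centroids):
    """Assign the label whose class centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# Toy voxel patterns recorded for "trained" vs "novel" stimuli
trained = [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]]
novel = [[0.1, 0.8, 0.9], [0.2, 0.9, 1.0]]
cents = {"trained": centroid(trained), "novel": centroid(novel)}
label = classify([0.95, 0.25, 0.05], cents)  # classify a held-out trial
```

Comparing such classification accuracy against behavioral performance, and applying the trained classifier to resting-state patterns, is the logic the abstract describes.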

  11. Robust Visual Tracking Via Consistent Low-Rank Sparse Learning

    KAUST Repository

    Zhang, Tianzhu; Liu, Si; Ahuja, Narendra; Yang, Ming-Hsuan; Ghanem, Bernard

    2014-01-01

    and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25

  12. Robust visual tracking via structured multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-11-09

    In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing lp,q mixed norms (specifically p ∈ {2, ∞} and q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves both tracking performance and overall computational efficiency. Interestingly, we show that the popular L1 tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intel 33(11):2259-2272, 2011) is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers. © 2012 Springer Science+Business Media New York.
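
For the p = q = 1 special case mentioned above (the L11 tracker), each particle's representation reduces to an independent l1-regularized least-squares problem, and the closed-form update is the soft-thresholding proximal step. The sketch below uses plain (non-accelerated) proximal gradient rather than the paper's APG, on a toy dictionary invented for illustration.

```python
def soft_threshold(v, t):
    """Closed-form prox of the l1 norm: shrink v toward zero by t."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(D, y, lam=0.1, step=0.1, iters=500):
    """Proximal gradient for min_x 0.5*||D x - y||^2 + lam*||x||_1."""
    m, n = len(D), len(D[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # residual D x - y
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]         # gradient D^T r
        x = [soft_threshold(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# The target y is the first dictionary column, so the sparse code should select it
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
y = [1.0, 0.0]
x = ista(D, y)  # x concentrates its weight on the first template
```

In the tracker, each column of D would be a (dynamically updated) target template and y a candidate particle; the joint mixed-norm case couples these per-particle problems together.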

  13. The impact of category structure and training methodology on learning and generalizing within-category representations.

    Science.gov (United States)

    Ell, Shawn W; Smith, David B; Peralta, Gabriela; Hélie, Sébastien

    2017-08-01

    When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge.

  14. Knowledge Visualization for Self-Regulated Learning

    Science.gov (United States)

    Wang, Minhong; Peng, Jun; Cheng, Bo; Zhou, Hance; Liu, Jie

    2011-01-01

    The Web allows self-regulated learning through interaction with large amounts of learning resources. While enjoying the flexibility of learning, learners may suffer from cognitive overload and conceptual and navigational disorientation when faced with various information resources under disparate topics and complex knowledge structures. This study…

  15. How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex?

    Science.gov (United States)

    Cao, Yongqiang; Grossberg, Stephen; Markowitz, Jeffrey

    2011-12-01

    All primates depend for their survival on being able to rapidly learn about and recognize objects. Objects may be visually detected at multiple positions, sizes, and viewpoints. How does the brain rapidly learn and recognize objects while scanning a scene with eye movements, without causing a combinatorial explosion in the number of cells that are needed? How does the brain avoid the problem of erroneously classifying parts of different objects together at the same or different positions in a visual scene? In monkeys and humans, a key area for such invariant object category learning and recognition is the inferotemporal cortex (IT). A neural model is proposed to explain how spatial and object attention coordinate the ability of IT to learn invariant category representations of objects that are seen at multiple positions, sizes, and viewpoints. The model clarifies how interactions within a hierarchy of processing stages in the visual brain accomplish this. These stages include the retina, lateral geniculate nucleus, and cortical areas V1, V2, V4, and IT in the brain's What cortical stream, as they interact with spatial attention processes within the parietal cortex of the Where cortical stream. The model builds upon the ARTSCAN model, which proposed how view-invariant object representations are generated. The positional ARTSCAN (pARTSCAN) model proposes how the following additional processes in the What cortical processing stream also enable position-invariant object representations to be learned: IT cells with persistent activity, and a combination of normalizing object category competition and a view-to-object learning law which together ensure that unambiguous views have a larger effect on object recognition than ambiguous views. The model explains how such invariant learning can be fooled when monkeys, or other primates, are presented with an object that is swapped with another object during eye movements to foveate the original object. 
The swapping procedure is

  16. Interactions between attention, context and learning in primary visual cortex.

    Science.gov (United States)

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

    Attention in early visual processing engages the higher-order, context-dependent properties of neurons. Even at the earliest stages of visual cortical processing, neurons play a role in intermediate-level vision: contour integration and surface segmentation. The contextual influences mediating this process may be derived from long-range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning-dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  17. Learning with multiple representations: an example of a revision lesson in mechanics

    Science.gov (United States)

    Wong, Darren; Poo, Sng Peng; Eng Hock, Ng; Loo Kang, Wee

    2011-03-01

    We describe an example of learning with multiple representations in an A-level revision lesson on mechanics. The context of the problem involved the motion of a ball thrown vertically upwards in air and studying how the associated physical quantities changed during its flight. Different groups of students were assigned to look at the ball's motion using various representations: motion diagrams, vector diagrams, free-body diagrams, verbal description, equations and graphs, drawn against time as well as against displacement. Overall, feedback from students about the lesson was positive. We further discuss the benefits of using computer simulation to support and extend student learning.

  18. Information processing in illness representation: Implications from an associative-learning framework.

    Science.gov (United States)

    Lowe, Rob; Norman, Paul

    2017-03-01

    The common-sense model (Leventhal, Meyer, & Nerenz, 1980) outlines how illness representations are important for understanding adjustment to health threats. However, psychological processes giving rise to these representations are little understood. To address this, an associative-learning framework was used to model low-level process mechanics of illness representation and coping-related decision making. Associative learning was modeled within a connectionist network simulation. Two types of information were paired: Illness identities (indigestion, heart attack, cancer) were paired with illness-belief profiles (cause, timeline, consequences, control/cure), and specific illness beliefs were paired with coping procedures (family doctor, emergency services, self-treatment). To emulate past experience, the network was trained with these pairings. As an analogue of a current illness event, the trained network was exposed to partial information (illness identity or select representation beliefs) and its response recorded. The network (a) produced the appropriate representation profile (beliefs) for a given illness identity, (b) prioritized expected coping procedures, and (c) highlighted circumstances in which activated representation profiles could include self-generated or counterfactual beliefs. Encoding and activation of illness beliefs can occur spontaneously and automatically; conventional questionnaire measurement may be insensitive to these automatic representations. Furthermore, illness representations may comprise a coherent set of nonindependent beliefs (a schema) rather than a collective of independent beliefs. Incoming information may generate a "tipping point," dramatically changing the active schema as a new illness-knowledge set is invoked. Finally, automatic activation of well-learned information can lead to the erroneous interpretation of illness events, with implications for [inappropriate] coping efforts. (PsycINFO Database Record (c) 2017 APA, all
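
The pairing-and-recall mechanics described above can be sketched with a minimal Hebbian associator: identity-belief pairings are stored as outer-product weights, and presenting the identity alone reactivates the learned belief profile. The illness identities and belief codings below are invented for illustration and are far simpler than the authors' connectionist network.

```python
class HebbianAssociator:
    """Stores cue -> pattern pairings as an outer-product weight matrix;
    recall reactivates the learned pattern from a (possibly partial) cue."""
    def __init__(self, n_cue, n_out):
        self.w = [[0.0] * n_out for _ in range(n_cue)]

    def train(self, cue, target):
        for i, c in enumerate(cue):
            for j, t in enumerate(target):
                self.w[i][j] += c * t        # Hebbian update: co-active units bind

    def recall(self, cue):
        raw = [sum(cue[i] * self.w[i][j] for i in range(len(cue)))
               for j in range(len(self.w[0]))]
        return [1 if v > 0 else 0 for v in raw]  # threshold activation

# Illness identities (one-hot cues) paired with belief profiles
# beliefs: [acute timeline, severe consequences, self-treatable]
net = HebbianAssociator(n_cue=2, n_out=3)
net.train([1, 0], [1, 1, 0])   # "heart attack" -> acute, severe, not self-treatable
net.train([0, 1], [0, 0, 1])   # "indigestion"  -> self-treatable
profile = net.recall([1, 0])   # present the identity alone; its beliefs reactivate
```

The point of the sketch is the automaticity the abstract emphasizes: once the pairings are trained, partial input (an identity with no stated beliefs) spontaneously activates a coherent belief schema.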

  19. Visual artificial grammar learning in dyslexia : A meta-analysis

    NARCIS (Netherlands)

    van Witteloostuijn, Merel; Boersma, Paul; Wijnen, Frank; Rispens, Judith

    2017-01-01

    Background Literacy impairments in dyslexia have been hypothesized to be (partly) due to an implicit learning deficit. However, studies of implicit visual artificial grammar learning (AGL) have often yielded null results. Aims The aim of this study is to weigh the evidence collected thus far by

  20. Asymmetrical learning between a tactile and visual serial RT task

    NARCIS (Netherlands)

    Abrahamse, E.L.; van der Lubbe, Robert Henricus Johannes; Verwey, Willem B.

    2007-01-01

    According to many researchers, implicit learning in the serial reaction-time task is predominantly motor based and therefore should be independent of stimulus modality. Previous research on the task, however, has focused almost completely on the visual domain. Here we investigated sequence learning

  1. Visual Supports for the Learning Disabled: A Handbook for Educators

    Science.gov (United States)

    Sells, Leighan

    2013-01-01

    A large percent of the population is affected by learning disabilities, which significantly impacts individuals and families. Much research has been done to identify effective ways to best help the students with learning disabilities. One of the more promising strategies is the use of visual supports to enhance these students' understanding…

  2. Learning of Grammar-Like Visual Sequences by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Aguilar, Jessica M.; Plante, Elena

    2014-01-01

    Purpose: Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. Method: In Study 1,…

  3. Neuro-symbolic representation learning on biological knowledge graphs

    KAUST Repository

    AlShahrani, Mona; Khan, Mohammed Asif; Maddouri, Omar; Kinjo, Akira R; Queralt-Rosinach, Nú ria; Hoehndorf, Robert

    2017-01-01

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph

  4. Learning Representation and Control in Markov Decision Processes

    Science.gov (United States)

    2013-10-21

    449–456. MIT Press, 2006. [35] D. Koller and N. Friedman. Graphical Models. MIT Press, 2009. [36] J. Zico Kolter and Andrew Y. Ng. Regularization and...ICML ’09, pages 521–528, New York, NY, USA, 2009. ACM. [37] J. Zico Kolter and Andrew Y. Ng. Regularization and feature selection in least-squares...temporal differ- ence learning. In Proceedings of 27 th International Conference on Machine Learning, 2009. [38] J. Zico Z. Kolter . The Fixed Points of Off

  5. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Ian B.; Arendt, Dustin L.; Bell, Eric B.; Volkova, Svitlana

    2017-05-17

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. This work addresses several important tasks of visualizing and predicting short-term text representation shift, i.e. the change in a word’s contextual semantics. We study the relationship between short-term concept drift and representation shift on a large social media corpus, VKontakte, collected during the Russia-Ukraine crisis in 2014-2015. We visualize short-term representation shift for example keywords and build predictive models to forecast short-term shifts in meaning from previous meaning as well as from concept drift. We show that short-term representation shift can be accurately predicted up to several weeks in advance and that visualization provides insight into meaning change. Our approach can be used to explore and characterize specific aspects of the streaming corpus during crisis events and potentially improve other downstream classification tasks including real-time event forecasting in social media.
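
A common way to quantify the representation shift studied above is the cosine distance between a word's embedding in consecutive time slices. A minimal sketch follows; the toy 3-d vectors stand in for embeddings learned per time slice, and this particular metric is a common convention rather than necessarily the authors' exact measure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def representation_shift(slice_vectors):
    """Cosine distance between a word's vectors in consecutive time slices."""
    return [1.0 - cosine(u, v) for u, v in zip(slice_vectors, slice_vectors[1:])]

# Toy embeddings for one keyword across three weekly slices: stable, then shifting
weeks = [[1.0, 0.0, 0.0],
         [0.99, 0.05, 0.0],
         [0.2, 0.9, 0.1]]
shifts = representation_shift(weeks)  # larger value = bigger change in meaning
```

A forecasting model of the kind described would then regress future values of this shift series on past shift and concept-drift features.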

  6. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    Science.gov (United States)

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) effect and space-based effect, and (b) manipulated the target's relative location within its frame to probe frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal of the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously being updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  7. Concept cells through associative learning of high-level representations.

    Science.gov (United States)

    Reddy, Leila; Thorpe, Simon J

    2014-10-22

    In this issue of Neuron, Quian Quiroga et al. (2014) show that neurons in the human medial temporal lobe (MTL) follow subjects' perceptual states rather than the features of the visual input. Patients with MTL damage however have intact perceptual abilities but suffer instead from extreme forgetfulness. Thus, the reported MTL neurons could create new memories of the current perceptual state.

  8. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e. intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e. V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we employed transcranial magnetic stimulation over both visual occipital and parietal regions, previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to the pIPS and sham control conditions. Moreover, the impairment observed after stimulation over the two visual regions was positively correlated. These results strongly support the causal role of the visual network in the control of perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Images as representations : Visual sources on education and childhood in the past

    NARCIS (Netherlands)

    Dekker, Jeroen J.H.

    2015-01-01

    The challenge of using images for the history of education and childhood will be addressed in this article by looking at them as representations. Central is the relationship between representations and reality. The focus is on the power of paintings as representations of aspects of realities. First

  10. Location-Unbound Color-Shape Binding Representations in Visual Working Memory.

    Science.gov (United States)

    Saiki, Jun

    2016-02-01

    The mechanism by which nonspatial features, such as color and shape, are bound in visual working memory, and the role of those features' location in their binding, remains unknown. In the current study, I modified a redundancy-gain paradigm to investigate these issues. A set of features was presented in a two-object memory display, followed by a single object probe. Participants judged whether the probe contained any features of the memory display, regardless of its location. Response time distributions revealed feature coactivation only when both features of a single object in the memory display appeared together in the probe, regardless of the response time benefit from the probe and memory objects sharing the same location. This finding suggests that a shared location is necessary in the formation of bound representations but unnecessary in their maintenance. Electroencephalography data showed that amplitude modulations reflecting location-unbound feature coactivation were different from those reflecting the location-sharing benefit, consistent with the behavioral finding that feature-location binding is unnecessary in the maintenance of color-shape binding. Response time distributions were analyzed orthogonally to the retinotopic coordinate. © The Author(s) 2015.

  11. Contrasting vertical and horizontal representations of affect in emotional visual search.

    Science.gov (United States)

    Damjanovic, Ljubica; Santiago, Julio

    2016-02-01

    Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the 'up = good' metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the 'up = good' metaphor is more salient and readily activated than the 'right = good' metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.

  12. Visual representation of knowledge in the field of Library and Information Science of IRAN

    Directory of Open Access Journals (Sweden)

    Afsoon Sabetpour

    2015-05-01

    Full Text Available Purpose: The present research was conducted to visually represent knowledge and to determine the gaps and points of density in the scientific interests of faculty members of the state universities of Iran in the Library & Information Science field. Method: The curriculum vitae of each faculty member was collected using a census method and its content analyzed. Then, using a checklist, the scientific tendencies were extracted. The NodeXL software was deployed to map out the levels. Results: The results showed that the trends are concentrated in scientometrics, research methods in Library & Information Science, information organization, information resources, psychology, education, management, the Web, knowledge management, academic libraries, information services, information theories and collection management. Apparently, the Library & Information Science community of experts pays little or no attention to Library & Information Science applications in the fields of chemistry, cartography, museums, law, art, and school libraries, as well as to independent subject clusters such as minorities in the library, information architecture, mentoring in library science, library automation, preservation, oral history, cybernetics, copyright, information marketing and information economy. The lack of effort in these areas is remarkable.

  13. Attention modulates maintenance of representations in visual short-term memory.

    Science.gov (United States)

    Kuo, Bo-Cheng; Stokes, Mark G; Nobre, Anna Christina

    2012-01-01

    Recent studies have shown that selective attention is of considerable importance for encoding task-relevant items into visual short-term memory (VSTM) according to our behavioral goals. However, it is not known whether top-down attentional biases can continue to operate during the maintenance period of VSTM. We used ERPs to investigate this question across two experiments. Specifically, we tested whether orienting attention to a given spatial location within a VSTM representation resulted in modulation of the contralateral delay activity (CDA), a lateralized ERP marker of VSTM maintenance generated when participants selectively encode memory items from one hemifield. In both experiments, retrospective cues during the maintenance period could predict a specific item (spatial retrocue) or multiple items (neutral retrocue) that would be probed at the end of the memory delay. Our results revealed that VSTM performance is significantly improved by orienting attention to the location of a task-relevant item. The behavioral benefit was accompanied by modulation of neural activity involved in VSTM maintenance. Spatial retrocues reduced the magnitude of the CDA, consistent with a reduction in memory load. Our results provide direct evidence that top-down control modulates neural activity associated with maintenance in VSTM, biasing competition in favor of the task-relevant information.

  14. External and Internal Representations in the Acquisition and Use of Knowledge: Visualization Effects on Mental Model Construction

    Science.gov (United States)

    Schnotz, Wolfgang; Kurschner, Christian

    2008-01-01

    This article investigates whether different formats of visualizing information result in different mental models constructed in learning from pictures, whether the different mental models lead to different patterns of performance in subsequently presented tasks, and how these visualization effects can be modified by further external…

  15. Improving the learning of clinical reasoning through computer-based cognitive representation.

    Science.gov (United States)

    Wu, Bian; Wang, Minhong; Johnson, Janice M; Grotzer, Tina A

    2014-01-01

    Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. A significant improvement was found in students' learning products from the beginning to the end of the study, consistent with students' report of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge construction.

  16. Improving the learning of clinical reasoning through computer-based cognitive representation

    Directory of Open Access Journals (Sweden)

    Bian Wu

    2014-12-01

    Full Text Available Objective: Clinical reasoning is usually taught using a problem-solving approach, which is widely adopted in medical education. However, learning through problem solving is difficult as a result of the contextualization and dynamic aspects of actual problems. Moreover, knowledge acquired from problem-solving practice tends to be inert and fragmented. This study proposed a computer-based cognitive representation approach that externalizes and facilitates the complex processes in learning clinical reasoning. The approach is operationalized in a computer-based cognitive representation tool that involves argument mapping to externalize the problem-solving process and concept mapping to reveal the knowledge constructed from the problems. Methods: Twenty-nine Year 3 or higher students from a medical school in east China participated in the study. Participants used the proposed approach implemented in an e-learning system to complete four learning cases in 4 weeks on an individual basis. For each case, students interacted with the problem to capture critical data, generate and justify hypotheses, make a diagnosis, recall relevant knowledge, and update their conceptual understanding of the problem domain. Meanwhile, students used the computer-based cognitive representation tool to articulate and represent the key elements and their interactions in the learning process. Results: A significant improvement was found in students’ learning products from the beginning to the end of the study, consistent with students’ report of close-to-moderate progress in developing problem-solving and knowledge-construction abilities. No significant differences were found between the pretest and posttest scores within the 4-week period. The cognitive representation approach was found to provide more formative assessment. Conclusions: The computer-based cognitive representation approach improved the learning of clinical reasoning in both problem solving and knowledge

  17. Teachers’ learning and assessing of mathematical processes with emphasis on representations, reasoning and proof

    Directory of Open Access Journals (Sweden)

    Satsope Maoto

    2018-03-01

    Full Text Available This article focuses mainly on two key mathematical processes (representation, and reasoning and proof). Firstly, we observed how teachers learn these processes and subsequently identify what and how to assess learners on the same processes. Secondly, we reviewed one teacher’s attempt to facilitate the learning of the processes in his classroom. Two interrelated questions were pursued: ‘what are the teachers’ challenges in learning mathematical processes?’ and ‘in what ways are teachers’ approaches to learning mathematical processes influencing how they assess their learners on the same processes?’ A case study was undertaken involving 10 high school mathematics teachers who enrolled for an assessment module towards a Bachelor in Education Honours degree in mathematics education. We present an interpretive analysis of two sets of data. The first set consisted of the teachers’ written responses to a pattern searching activity. The second set consisted of a mathematical discourse on matchstick patterns in a Grade 9 class. The overall finding was that teachers rush through forms of representation and focus more on manipulation of numerical representations with a view to deriving symbolic representation. Subsequently, this unidirectional approach limits the scope of assessment of mathematical processes. Interventions with regard to the enhancement of these complex processes should involve teachers’ actual engagements in and reflections on similar learning.

  18. Handwriting generates variable visual output to facilitate symbol learning.

    Science.gov (United States)

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Handwriting generates variable visual input to facilitate symbol learning

    Science.gov (United States)

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: That handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  20. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (normally sighted), older (50–70 years, normally sighted), and low vision (people with heterogeneous forms of visual impairment, ranging in age from 18 to 67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  1. Curriculum Q-Learning for Visual Vocabulary Acquisition

    OpenAIRE

    Zaidi, Ahmed H.; Moore, Russell; Briscoe, Ted

    2017-01-01

    The structure of curriculum plays a vital role in our learning process, both as children and adults. Presenting material in ascending order of difficulty that also exploits prior knowledge can have a significant impact on the rate of learning. However, the notion of difficulty and prior knowledge differs from person to person. Motivated by the need for a personalised curriculum, we present a novel method of curriculum learning for vocabulary words in the form of visual prompts. We employ a re...

  2. Visual representation of medical information: the importance of considering the end-user in the design of medical illustrations.

    Science.gov (United States)

    Scheltema, Emma; Reay, Stephen; Piper, Greg

    2018-01-01

    This practice-led research project explored visual representation through illustrations designed to communicate often complex medical information for different users within Auckland City Hospital, New Zealand. Media and tools were manipulated to affect varying degrees of naturalism or abstraction from reality in the creation of illustrations for a variety of real-life clinical projects, and user feedback on illustration preference was gathered from both medical professionals and patients. While all users preferred the most realistic representations of medical information from the illustrations presented, patients often favoured illustrations that depicted a greater amount of information than professionals suggested was necessary.

  3. Locomotor sequence learning in visually guided walking

    DEFF Research Database (Denmark)

    Choi, Julia T; Jensen, Peter; Nielsen, Jens Bo

    2016-01-01

    walking. In addition, we determined how age (i.e., healthy young adults vs. children) and biomechanical factors (i.e., walking speed) affected the rate and magnitude of locomotor sequence learning. The results showed that healthy young adults (age 24 ± 5 years, N = 20) could learn a specific sequence...... of step lengths over 300 training steps. Younger children (age 6-10 years, N = 8) have lower baseline performance, but their magnitude and rate of sequence learning was the same compared to older children (11-16 years, N = 10) and healthy adults. In addition, learning capacity may be more limited...... to modify step length from one trial to the next. Our sequence learning paradigm is derived from the serial reaction-time (SRT) task that has been used in upper limb studies. Both random and ordered sequences of step lengths were used to measure sequence-specific and sequence non-specific learning during...

  4. Context generalization in Drosophila visual learning requires the mushroom bodies

    Science.gov (United States)

    Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin

    1999-08-01

    The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.

  5. Students and Teachers as Developers of Visual Learning Designs with Augmented Reality for Visual Arts Education

    DEFF Research Database (Denmark)

    Buhl, Mie

    2017-01-01

    upon which to discuss the potential for reengineering the traditional role of the teacher/learning designer as the only supplier and the students as the receivers of digital learning designs in higher education. The discussion applies the actor-network theory and socio-material perspectives...... on education in order to enhance the meta-perspective of traditional teacher and student roles.......Abstract This paper reports on a project in which communication and digital media students collaborated with visual arts teacher students and their teacher trainer to develop visual digital designs for learning that involved Augmented Reality (AR) technology. The project exemplified a design...

  6. Learning Reverse Engineering and Simulation with Design Visualization

    Science.gov (United States)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  7. iSee: Teaching Visual Learning in an Organic Virtual Learning Environment

    Science.gov (United States)

    Han, Hsiao-Cheng

    2017-01-01

    This paper presents a three-year participatory action research project focusing on the graduate level course entitled Visual Learning in 3D Animated Virtual Worlds. The purpose of this research was to understand "How the virtual world processes of observing and creating can best help students learn visual theories". The first cycle of…

  8. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Learning state representation for deep actor-critic control

    NARCIS (Netherlands)

    Munk, J.; Kober, J.; Babuska, R.; Bullo, Francesco; Prieur, Christophe; Giua, Alessandro

    2016-01-01

    Deep Neural Networks (DNNs) can be used as function approximators in Reinforcement Learning (RL). One advantage of DNNs is that they can cope with large input dimensions. Instead of relying on feature engineering to lower the input dimension, DNNs can extract the features from raw observations. The

  10. Learning multiscale and deep representations for classifying remotely sensed imagery

    Science.gov (United States)

    Zhao, Wenzhi; Du, Shihong

    2016-03-01

    It is widely agreed that spatial features can be combined with spectral properties for improving interpretation performances on very-high-resolution (VHR) images in urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unpleasant classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm was presented to learn spatial-related deep features for hyperspectral remote imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure and (2) multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify the well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracies especially for urban areas.
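The multiscale transformation step this abstract describes (turning the input into a pyramid of spatial scales before feature learning) can be sketched as follows. This is a minimal illustration using repeated 2x2 average-pooling, not the authors' MCNN implementation; the function name `image_pyramid` and the scale count are ours.

```python
import numpy as np

def image_pyramid(img, n_scales=3):
    """Build a multiscale pyramid by repeated 2x2 average-pooling,
    mimicking the pyramid of inputs a multiscale CNN is trained on."""
    levels = [img.astype(float)]
    for _ in range(n_scales - 1):
        p = levels[-1]
        h, w = p.shape[0] // 2 * 2, p.shape[1] // 2 * 2  # crop to even size
        p = p[:h, :w]
        # average each non-overlapping 2x2 block
        levels.append((p[0::2, 0::2] + p[1::2, 0::2]
                       + p[0::2, 1::2] + p[1::2, 1::2]) / 4.0)
    return levels

img = np.arange(16.0).reshape(4, 4)
pyr = image_pyramid(img, n_scales=3)
# level shapes: (4, 4), (2, 2), (1, 1)
```

Each level would then be fed to the network (or to shared convolutional layers), so that learned features capture context at several spatial resolutions.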

  11. Representational scripting for carrying out complex learning tasks

    NARCIS (Netherlands)

    Slof, B.

    2011-01-01

    Learning to solve complex problems is important because in our rapidly changing modern society and work environments knowing the answer is often not possible. Although educators and instructional designers acknowledge the benefits of problem solving, they also realize that learners need good

  12. An Eye-tracking Study of Notational, Informational, and Emotional Aspects of Learning Analytics Representations

    DEFF Research Database (Denmark)

    Vatrapu, Ravi; Reimann, Peter; Bull, Susan

    2013-01-01

    This paper presents an eye-tracking study of notational, informational, and emotional aspects of nine different notational systems (Skill Meters, Smilies, Traffic Lights, Topic Boxes, Collective Histograms, Word Clouds, Textual Descriptors, Table, and Matrix) and three different information states...... (Weak, Average, & Strong) used to represent student's learning. Findings from the eye-tracking study show that higher emotional activation was observed for the metaphorical notations of traffic lights and smilies and collective representations. Mean view time was higher for representations...... of the "average" informational learning state. Qualitative data analysis of the think-aloud comments and post-study interview show that student participants reflected on the meaning-making opportunities and action-taking possibilities afforded by the representations. Implications for the design and evaluation...

  13. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification.

    Science.gov (United States)

    Bing, Lu; Wang, Wei

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, a concentric circle is used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse and MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
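The core pipeline of this abstract (sparse-code each instance in a bag, then summarize the bag by pooling its instance codes into one feature vector) can be sketched as below. This is a deliberately simplified illustration: the one-step soft-threshold coder and max-pooling stand in for the paper's sparse-representation and RVM machinery, and all names are ours.

```python
import numpy as np

def soft_threshold(v, lam):
    """Element-wise shrinkage: a cheap one-step sparse coder."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def bag_feature(instances, D, lam=0.1):
    """Code each instance of a bag against dictionary D (atoms as
    columns), then max-pool the absolute codes over instances to get
    one fixed-length feature vector for the whole bag."""
    codes = [soft_threshold(D.T @ x, lam) for x in instances]
    return np.max(np.abs(np.array(codes)), axis=0)

D = np.eye(2)                                        # toy 2-atom dictionary
bag = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]   # two instances
f = bag_feature(bag, D)
```

The pooled vector `f` is what a conventional classifier (RVM in the paper) would then consume, turning the MIL problem into a standard supervised one.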

  14. Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression.

    Science.gov (United States)

    Gotts, Stephen J

    2016-08-01

    Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.

  15. A Conceptual Framework for Error Remediation with Multiple External Representations Applied to Learning Objects

    Science.gov (United States)

    Leite, Maici Duarte; Marczal, Diego; Pimentel, Andrey Ricardo; Direne, Alexandre Ibrahim

    2014-01-01

    This paper presents the application of some concepts of Intelligent Tutoring Systems (ITS) to elaborate a conceptual framework that uses the remediation of errors with Multiple External Representations (MERs) in Learning Objects (LO). To demonstrate this, the development of an LO for teaching the Pythagorean Theorem through this framework is presented. This…

  16. A Novel Method of Case Representation and Retrieval in CBR for E-Learning

    Science.gov (United States)

    Khamparia, Aditya; Pandey, Babita

    2017-01-01

    In this paper we have discussed a novel method which has been developed for representation and retrieval of cases in case based reasoning (CBR) as a part of e-learning system which is based on various student features. In this approach we have integrated Artificial Neural Network (ANN) with Data mining (DM) and CBR. ANN is used to find the…

  17. Learning with Multiple Representations: An Example of a Revision Lesson in Mathematics

    Science.gov (United States)

    Wong, Darren; Poo, Sng Peng; Hock, Ng Eng; Kang, Wee Loo

    2011-01-01

    We describe an example of learning with multiple representations in an A-level revision lesson on mechanics. The context of the problem involved the motion of a ball thrown vertically upwards in air and studying how the associated physical quantities changed during its flight. Different groups of students were assigned to look at the ball's motion…

  18. Combining Generative and Discriminative Representation Learning for Lung CT Analysis With Convolutional Restricted Boltzmann Machines

    NARCIS (Netherlands)

    G. van Tulder (Gijs); M. de Bruijne (Marleen)

    2016-01-01

    textabstractThe choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann

  19. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    Science.gov (United States)

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of space spanned by atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.
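The group-sparsity penalty this abstract builds on has a simple closed-form proximal operator that shrinks each group of coefficients as a block, so entire groups vanish together rather than individual entries. A minimal sketch of that generic operator (the grouping and `lam` value are illustrative; this is not DL-GSGR itself):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the l2,1 group-sparsity penalty: each
    coefficient group is shrunk toward zero as a block, so a whole
    group either survives (rescaled) or is zeroed out together."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        v = x[g]
        norm = np.linalg.norm(v)
        if norm > lam:
            out[g] = (1.0 - lam / norm) * v  # shrink the whole group
        # groups with norm <= lam are left at zero
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]  # two coefficient groups
z = group_soft_threshold(x, groups, lam=1.0)
```

Here the strong first group survives (rescaled) while the weak second group is zeroed as a unit, which is exactly the clustered-nonzeros structure the abstract contrasts with standard element-wise sparsity.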

  20. The Effect of Project-Based Learning on Students' Statistical Literacy Levels for Data Representation

    Science.gov (United States)

    Koparan, Timur; Güven, Bülent

    2015-01-01

    The point of this study is to define the effect of project-based learning approach on 8th Grade secondary-school students' statistical literacy levels for data representation. To achieve this goal, a test which consists of 12 open-ended questions in accordance with the views of experts was developed. Seventy 8th grade secondary-school students, 35…

  1. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extra-geniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  2. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  3. The Effects of Static and Dynamic Visual Representations as Aids for Primary School Children in Tasks of Auditory Discrimination of Sound Patterns. An Intervention-based Study.

    Directory of Open Access Journals (Sweden)

    Jesus Tejada

    2018-02-01

    Full Text Available It has been proposed that non-conventional presentations of visual information could be very useful as a scaffolding strategy in the learning of Western music notation. As a result, this study has attempted to determine if there is any effect of static and dynamic presentation modes of visual information in the recognition of sound patterns. An intervention-based quasi-experimental design was adopted with two groups of fifth-grade students in a Spanish city. Students did tasks involving discrimination, auditory recognition and symbolic association of the sound patterns with non-musical representations, either static images (S group) or dynamic images (D group). The results showed neither statistically significant differences in the scores of D and S, nor influence of the covariates on the dependent variable, although statistically significant intra-group differences were found for both groups. This suggests that both types of graphic formats could be effective as digital learning mediators in the learning of Western musical notation.

  4. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-06-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms (p ≥ 1, q = 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
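    The closed-form APG updates referred to above hinge on the proximal operator of the mixed norm; for the common ℓ2,1 case (row-wise joint sparsity across particle representations), a minimal NumPy sketch of the general technique (an illustration, not the authors' code) is:

```python
import numpy as np

def prox_l21(Z, tau):
    """Proximal operator of tau * ||Z||_{2,1} (sum of row L2 norms).

    Shrinks every row of Z toward zero and zeroes rows whose norm
    falls below tau. Because a whole row is kept or discarded jointly,
    this is what couples the columns (particles) and enforces joint
    sparsity across tasks.
    """
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * Z

Z = np.array([[3.0, 4.0],    # row norm 5: shrunk but kept
              [0.1, 0.0]])   # row norm 0.1 < tau: zeroed jointly
P = prox_l21(Z, 1.0)
```

Applied inside an accelerated proximal gradient loop, this single shrinkage step is what makes each iteration closed-form and cheap.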

  5. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    Directory of Open Access Journals (Sweden)

    Muhammad Sajjad

    2014-02-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.

  6. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    Science.gov (United States)

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP does better in terms of detecting true sparsity than orthogonal matching pursuit (OMP). This property of the OOMP helps produce a HR image which is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
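    OOMP refines the atom-selection step of standard orthogonal matching pursuit; as a point of reference, plain OMP (a generic sketch, not the authors' implementation) greedily selects dictionary atoms and re-fits the coefficients on the chosen support by least squares:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: recover a k-sparse code x
    with y ~= D @ x. Columns of D are assumed unit-norm."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected coefficients jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(D.shape[1])
        x[support] = coef
        residual = y - D @ x
    return x

# toy dictionary: 3 orthonormal atoms; the signal uses atoms 0 and 2
D = np.eye(3)
y = np.array([2.0, 0.0, -1.0])
x = omp(D, y, k=2)
```

In the SR pipeline above, a recovery routine of this kind is run per patch against a K-SVD-learned dictionary, and the recovered sparse code is used to synthesize the HR patch.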

  7. Neural network representation and learning of mappings and their derivatives

    Science.gov (United States)

    White, Halbert; Hornik, Kurt; Stinchcombe, Maxwell; Gallant, A. Ronald

    1991-01-01

    Discussed here are recent theorems proving that artificial neural networks are capable of approximating an arbitrary mapping and its derivatives as accurately as desired. This fact forms the basis for further results establishing the learnability of the desired approximations, using results from non-parametric statistics. These results have potential applications in robotics, chaotic dynamics, control, and sensitivity analysis. An example involving learning the transfer function and its derivatives for a chaotic map is discussed.
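    The flavor of these results can be illustrated with a toy experiment (an illustration of the idea, not the paper's construction): a single-hidden-layer tanh network with randomly drawn hidden weights and least-squares output weights approximates sin(x), and differentiating the fitted network recovers cos(x) on the interior of the training interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 200
W = rng.normal(scale=2.0, size=n_hidden)   # fixed random input weights
b = rng.uniform(-3.0, 3.0, size=n_hidden)  # fixed random biases

def hidden(x):
    return np.tanh(np.outer(x, W) + b)

# fit output weights by least squares so the network matches y = sin(x)
x_train = np.linspace(-np.pi, np.pi, 400)
H = hidden(x_train)
beta, *_ = np.linalg.lstsq(H, np.sin(x_train), rcond=None)

def f(x):
    """Network output, approximating sin(x)."""
    return hidden(x) @ beta

def df(x):
    """Analytic derivative of the fitted network, approximating cos(x)."""
    return ((1.0 - hidden(x) ** 2) * W) @ beta

x_test = np.linspace(-2.0, 2.0, 50)
err_f = np.max(np.abs(f(x_test) - np.sin(x_test)))
err_df = np.max(np.abs(df(x_test) - np.cos(x_test)))
```

The point of the theorems is stronger, of course: they guarantee that mappings and their derivatives can be approximated to arbitrary accuracy, not merely that this particular fit works.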

  8. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    Science.gov (United States)

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    Science.gov (United States)

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Enhanced Data Representation by Kernel Metric Learning for Dementia Diagnosis

    Directory of Open Access Journals (Sweden)

    David Cárdenas-Peña

    2017-07-01

    Alzheimer's disease (AD) is the most common form of dementia worldwide. Early identification that supports effective treatment is therefore required to increase the quality of life of a large number of patients. Recently, computer-aided diagnosis tools for dementia using magnetic resonance imaging (MRI) scans have been successfully proposed to discriminate between patients with AD, patients with mild cognitive impairment (MCI), and healthy controls. Much of this work builds on the clinical data provided by initiatives such as ADNI, which support reliable research on intervention, prevention, and treatment of AD. There remains, however, a need to improve the performance of the classification machines. In this paper, we propose a kernel framework for learning metrics that enhances conventional machines and supports the diagnosis of dementia. Our framework builds discriminative spaces by maximizing a centered kernel alignment function, with the aim of improving the discrimination of the three neurological classes considered. The proposed metric learning is evaluated on the widely known ADNI database using three supervised classification machines (k-nn, SVM, and NNs) in multi-class and bi-class scenarios from structural MRIs. Specifically, 286 AD patients, 379 MCI patients, and 231 healthy controls from the ADNI collection are used for development and validation of our proposed metric learning framework. For the experimental validation, we split the data into two subsets: 30% of subjects are held out for blind assessment and 70% are employed for parameter tuning. In the preprocessing stage, a total of 310 morphological measurements are automatically extracted from each structural MRI scan by the FreeSurfer software package and concatenated to build an input feature matrix. The obtained test performance results show that including supervised metric learning improves the compared baseline classifiers in both scenarios. In the multi…
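    The kernel alignment objective maximized in frameworks of this kind can be sketched in a few lines (hypothetical toy labels, not the ADNI pipeline):

```python
import numpy as np

def centered_kernel_alignment(K, L):
    """Centered alignment between two kernel matrices.

    For PSD kernels the value lies in [0, 1]; 1 means the two kernels
    induce perfectly aligned similarity structures. Metric learning of
    this kind maximizes the alignment between the kernel of the learned
    feature space and an ideal label kernel.
    """
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

# ideal label kernel for labels [0, 0, 1, 1]: 1 within class, 0 across
y = np.array([0, 0, 1, 1])
L = (y[:, None] == y[None, :]).astype(float)
cka_same = centered_kernel_alignment(L, L)     # identical structure

# a kernel built from orthogonal label structure aligns poorly
y2 = np.array([0, 1, 0, 1])
L2 = (y2[:, None] == y2[None, :]).astype(float)
cka_cross = centered_kernel_alignment(L, L2)
```

A learned metric that pushes the data kernel toward the label kernel drives this score toward 1, which is why alignment serves as a discriminative training objective.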

  11. Measuring, Predicting and Visualizing Short-Term Change in Word Representation and Usage in VKontakte Social Network

    OpenAIRE

    Stewart, Ian; Arendt, Dustin; Bell, Eric; Volkova, Svitlana

    2017-01-01

    Language in social media is extremely dynamic: new words emerge, trend and disappear, while the meaning of existing words can fluctuate over time. Such dynamics are especially notable during a period of crisis. This work addresses several important tasks of measuring, visualizing and predicting short term text representation shift, i.e. the change in a word's contextual semantics, and contrasting such shift with surface level word dynamics, or concept drift, observed in social media streams. ...

  12. Multiple representations in physics education

    CERN Document Server

    Duit, Reinders; Fischer, Hans E

    2017-01-01

    This volume is important because despite various external representations, such as analogies, metaphors, and visualizations being commonly used by physics teachers, educators and researchers, the notion of using the pedagogical functions of multiple representations to support teaching and learning is still a gap in physics education. The research presented in the three sections of the book is introduced by descriptions of various psychological theories that are applied in different ways for designing physics teaching and learning in classroom settings. The following chapters of the book illustrate teaching and learning with respect to applying specific physics multiple representations in different levels of the education system and in different physics topics using analogies and models, different modes, and in reasoning and representational competence. When multiple representations are used in physics for teaching, the expectation is that they should be successful. To ensure this is the case, the implementati...

  13. Deformable segmentation via sparse representation and dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Metaxas, Dimitris N

    2012-10-01

    "Shape" and "appearance", the two pillars of a deformable model, complement each other in object segmentation. In many medical imaging applications, while the low-level appearance information is weak or misleading, shape priors play a more important role in guiding a correct segmentation, thanks to the strong shape characteristics of biological structures. Recently, a novel shape prior modeling method was proposed based on sparse learning theory. Instead of learning a generative shape model, shape priors are incorporated on-the-fly through sparse shape composition (SSC). SSC is robust to non-Gaussian errors and still preserves individual shape characteristics even when such characteristics are not statistically significant. Although it seems straightforward to incorporate SSC into a deformable segmentation framework as a shape prior, the large-scale sparse optimization of SSC has low runtime efficiency, which cannot satisfy clinical requirements. In this paper, we design two strategies to decrease the computational complexity of SSC, yielding a robust, accurate and efficient deformable segmentation system. (1) When the shape repository contains a large number of instances, which is often the case in 2D problems, K-SVD is used to learn a more compact but still informative shape dictionary. (2) If the derived shape instance has a large number of vertices, which often appears in 3D problems, an affinity propagation method is used to partition the surface into small sub-regions, on which the sparse shape composition is performed locally. Both strategies dramatically decrease the scale of the sparse optimization problem and hence speed up the algorithm. Our method is applied to a diverse set of biomedical image analysis problems. Compared to the original SSC, these two newly proposed modules not only significantly reduce the computational complexity, but also improve the overall accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Coordinate Representations for Interference Reduction in Motor Learning.

    Directory of Open Access Journals (Sweden)

    Sang-Hoon Yeo

    When opposing force fields are presented alternately or randomly across trials for identical reaching movements, subjects learn neither force field, a behavior termed 'interference'. Studies have shown that a small difference in the endpoint posture of the limb reduces this interference. However, any difference in the limb's endpoint location typically changes the hand position, joint angles and hand orientation, making it ambiguous which of these changes underlies the ability to learn dynamics that normally interfere. Here we examine the extent to which each of these three possible coordinate systems--Cartesian hand position, shoulder and elbow joint angles, or hand orientation--underlies the reduction in interference. Subjects performed goal-directed reaching movements in five different limb configurations designed so that different pairs of these configurations involved a change in only one coordinate system. By specifically assigning clockwise and counter-clockwise force fields to the configurations, we could create three different conditions in which the direction of the force field could be uniquely distinguished in only one of the three coordinate systems. We examined the ability to learn the two fields based on each of the coordinate systems. The largest reduction of interference was observed when the field direction was linked to hand orientation, with smaller reductions in the other two conditions. This result demonstrates that the strongest reduction in interference occurred with changes in hand orientation, suggesting that hand orientation may have a privileged role in reducing motor interference for changes in the endpoint posture of the limb.

  15. A Comprehensive Review on Handcrafted and Learning-Based Action Representation Approaches for Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Allah Bux Sargano

    2017-01-01

    Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include intelligent video surveillance, ambient assisted living, human-computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used, owing to bottlenecks such as the computational complexity of deep learning techniques for activity recognition. At the same time, approaches based on handcrafted representations are not able to handle complex scenarios due to their inherent limitations, so resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussion of these approaches. In addition, the well-known public datasets available for experimentation and important applications of HAR are presented to provide further insight into the field. This is the first review paper of its kind to present all these aspects of HAR in a single article with comprehensive coverage of each part. Finally, the paper concludes with important discussions and research directions in the domain of HAR.

  16. Influence of TANDUR Learning to Students's Mathematical Representation and Student Self-Concept

    Directory of Open Access Journals (Sweden)

    Dimas Fajar Maulana

    2017-11-01

    The population of this study is all 350 class-X students in one of the SMA Negeri (state senior high schools) in the city of Cirebon. From this population, a sample of 60 students was taken using a simple random sampling technique and divided into two groups: one receiving TANDUR learning and one receiving conventional learning. The results showed that the TANDUR learning model had an effect of 66.9% on students' self-concept and of 75.5% on students' mathematical representation ability. Meanwhile, the correlation between self-concept and students' mathematical representation was 74.3%.

  17. Learning Programming Technique through Visual Programming Application as Learning Media with Fuzzy Rating

    Science.gov (United States)

    Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad

    2017-01-01

    Programming technique is one of the subjects at Vocational High School in Indonesia. This subject contains theory and application of programming utilizing Visual Programming. Students experience some difficulties to learn textual learning. Therefore, it is necessary to develop media as a tool to transfer learning materials. The objectives of this…

  18. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    Science.gov (United States)

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed to or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to the visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  19. Familiarisation: Restructuring Layouts with Visual Learning Models

    OpenAIRE

    Todi, Kashyap; Jokinen, Jussi; Luyten, Kris; Oulasvirta, Antti

    2018-01-01

    In domains where users are exposed to large variations in visuo-spatial features among designs, they often spend excess time searching for common elements (features) in familiar locations. This paper contributes computational approaches to restructuring layouts such that features on a new, unvisited interface can be found quicker. We explore four concepts of familiarisation, inspired by the human visual system (HVS), to automatically generate a familiar design for each user. Given a histor...

  20. Dynamic functional brain networks involved in simple visual discrimination learning.

    Science.gov (United States)

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. Moreover, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was engaged throughout the whole learning process. This study highlights the relevance of functional interactions among brain regions in investigating learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Learning semantic and visual similarity for endomicroscopy video retrieval.

    Science.gov (United States)

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived-similarity ground truth. As a result, our distance learning method statistically improves the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while remaining consistent with the low-level visual signatures, and they are much shorter.

  2. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    Science.gov (United States)

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
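    The elastic-net-constrained filter above has no simple closed form, but the ridge-regularized correlation filter that such trackers build on does; a minimal single-channel sketch of that baseline (a generic illustration, not the paper's method) is:

```python
import numpy as np

def learn_filter(x, g, lam=1e-2):
    """Ridge-regularized correlation filter (single channel).

    Solves min_H |H X - G|^2 + lam |H|^2 independently at each Fourier
    frequency, where X and G are the transforms of the training patch x
    and the desired response g (a sharp peak at the target location).
    """
    X, G = np.fft.fft2(x), np.fft.fft2(g)
    return (G * np.conj(X)) / (X * np.conj(X) + lam)

def respond(H, x):
    """Response map of filter H applied to patch x (circularly)."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(x)))

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 32))            # training patch
g = np.zeros((32, 32)); g[0, 0] = 1.0    # desired peak at the origin
H = learn_filter(x, g)
r = respond(H, x)                        # peak should sit at (0, 0)
```

The elastic-net constraint in the paper replaces the plain ℓ2 penalty on the filter with a combined ℓ1/ℓ2 penalty, trading the per-frequency closed form for the ability to suppress distractive features.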

  3. Low-rank sparse learning for robust visual tracking

    KAUST Repository

    Zhang, Tianzhu

    2012-01-01

    In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.
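    The closed-form updates mentioned above include, for the low-rank term, singular value thresholding (the proximal operator of the nuclear norm); a generic NumPy sketch of that step (not the authors' code) is:

```python
import numpy as np

def svt(Z, tau):
    """Singular value thresholding: proximal operator of tau * ||Z||_*.

    Soft-thresholds the singular values of Z, discarding weak
    components. This single closed-form step is what makes low-rank
    regularized learning cheap inside an iterative solver.
    """
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

# a dominant rank-1 matrix plus a weaker orthogonal rank-1 perturbation
a = np.outer([1.0, 1.0], [1.0, 1.0])          # singular value 2
b = 0.1 * np.outer([1.0, -1.0], [1.0, -1.0])  # singular value 0.2
Zl = svt(a + b, 0.5)                          # weak component removed
```

Here the thresholded result is exactly rank 1: the weak component's singular value (0.2) falls below the threshold and is zeroed, while the dominant one is shrunk from 2 to 1.5.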

  4. Speech Analysis and Visual Image: Language Learning

    Science.gov (United States)

    Loo, Alfred; Chung, C. W.; Lam, Alan

    2016-01-01

    Students will speak a second language with an accent if they learn the language after the age of six. It does not matter how motivated and clever they are, the accent will not go away. Only a few gifted students can speak a second language flawlessly. The exact reasons for this phenomenon are unknown. Although a large number of hypotheses have…

  5. Representational Learning for Fault Diagnosis of Wind Turbine Equipment: A Multi-Layered Extreme Learning Machines Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Xin Yang

    2016-05-01

    Reliable and quick-response fault diagnosis is crucial for the wind turbine generator system (WTGS) to avoid unplanned interruption and to reduce maintenance cost. However, the condition data generated by a WTGS operating in a tough environment are dynamic and high-dimensional. To address these challenges, we propose a new fault diagnosis scheme composed of multiple extreme learning machines (ELMs) in a hierarchical structure, where a forwarding list of ELM layers is concatenated and each of them is processed independently for its corresponding role. The framework enables both representational feature learning and fault classification. The multi-layered ELM-based representational learning covers functions including data preprocessing, feature extraction and dimension reduction. An ELM-based autoencoder is trained to generate a hidden-layer output weight matrix, which is then used to transform the input dataset into a new feature representation. Compared with traditional feature extraction methods, which may empirically wipe off some "insignificant" feature information that in fact conveys certain undiscovered important knowledge, the introduced representational learning method avoids this loss of information content. The computed output weight matrix projects the high-dimensional input vector into a compressed and orthogonally weighted distribution. The last single layer of ELM is applied for fault classification. Unlike the greedy layer-wise learning method adopted in back-propagation-based deep learning (DL), the proposed framework does not need iterative fine-tuning of parameters. To evaluate its experimental performance, comparison tests are carried out on a wind turbine generator simulator. The results show that the proposed diagnostic framework achieves the best performance among the compared approaches in terms of accuracy and efficiency in multiple-fault detection for wind turbines.
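    A minimal sketch of the two ELM ingredients described above (hypothetical toy data and layer sizes, not the WTGS pipeline): an ELM-style autoencoder whose learned output weight matrix re-represents the input, followed by a single-layer ELM classifier fitted by regularized least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(H, T, reg=1e-3):
    """Solve for output weights beta with H @ beta ~= T (ridge least squares).
    This one linear solve replaces iterative fine-tuning of parameters."""
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ T)

def hidden_layer(X, W, b):
    """Random (untrained) tanh hidden layer, as in a standard ELM."""
    return np.tanh(X @ W + b)

# toy data: two noisy classes in 20 dimensions
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)),
               rng.normal(2.0, 1.0, (50, 20))])
y = np.repeat([0, 1], 50)
T = np.eye(2)[y]                                    # one-hot targets

# 1) ELM autoencoder: reconstruct X itself, then use the learned output
#    weight matrix (transposed) to project X into the new feature space
W_ae, b_ae = rng.normal(size=(20, 40)), rng.normal(size=40)
beta_ae = elm_fit(hidden_layer(X, W_ae, b_ae), X)   # shape (40, 20)
X_rep = X @ beta_ae.T                               # new representation

# 2) single-layer ELM classifier on the learned representation
W_c, b_c = rng.normal(size=(40, 60)), rng.normal(size=60)
H = hidden_layer(X_rep, W_c, b_c)
beta_c = elm_fit(H, T)
pred = np.argmax(H @ beta_c, axis=1)
train_acc = np.mean(pred == y)
```

The multi-layered scheme in the paper stacks several such autoencoder transformations before the final classification layer; the key point is that every layer is fitted by a single linear solve rather than by backpropagation.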

  6. Visual word learning in adults with dyslexia

    Directory of Open Access Journals (Sweden)

    Rosa Kit Wan Kwok

    2014-05-01

    Full Text Available We investigated word learning in university and college students with a diagnosis of dyslexia and in typically-reading controls. Participants read aloud short (4-letter) and longer (7-letter) nonwords as quickly as possible. The nonwords were repeated across 10 blocks, using a different random order in each block. Participants returned 7 days later and repeated the experiment. Accuracy was high in both groups. The dyslexics were substantially slower than the controls at reading the nonwords throughout the experiment. They also showed a larger length effect, indicating less effective decoding skills. Learning was demonstrated by faster reading of the nonwords across repeated presentations and by a reduction in the difference in reading speeds between shorter and longer nonwords. The dyslexics required more presentations of the nonwords before the length effect became non-significant, only showing convergence in reaction times between shorter and longer items in the second testing session, whereas controls achieved convergence part-way through the first session. Participants also completed a psychological test battery assessing reading and spelling, vocabulary, phonological awareness, working memory, nonverbal ability and motor speed. The dyslexics performed at a similar level to the controls on nonverbal ability but significantly less well on all the other measures. Regression analyses found that decoding ability, measured as the speed of reading aloud nonwords when they were presented for the first time, was predicted by a composite of word reading and spelling scores (‘literacy’). Word learning was assessed in terms of the improvement in naming speeds over 10 blocks of training. Learning was predicted by vocabulary and working memory scores, but not by literacy, phonological awareness, nonverbal ability or motor speed. The results show that young dyslexic adults have problems both in pronouncing novel words and in learning new written words.

  7. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    Science.gov (United States)

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  8. Literature review of visual representation of the results of benefit-risk assessments of medicinal products.

    Science.gov (United States)

    Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah

    2016-03-01

    The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that there is not one single visual format that is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences depending on the exact message to be communicated. We have arrived at recommendations for the use of visual displays for benefit-risk communication. The recommendation refers to the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these to be a key task in creating any visual. Next we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.

  9. How to make a good animation: A grounded cognition model of how visual representation design affects the construction of abstract physics knowledge

    Directory of Open Access Journals (Sweden)

    Zhongzhou Chen

    2014-04-01

    Full Text Available Visual representations play a critical role in teaching physics. However, since we do not have a satisfactory understanding of how visual perception impacts the construction of abstract knowledge, most visual representations used in instruction are either created based on existing conventions or designed according to the instructor’s intuition, which leads to significant variance in their effectiveness. In this paper we propose a cognitive mechanism based on grounded cognition, suggesting that visual perception affects understanding by activating “perceptual symbols”: the basic cognitive units used by the brain to construct a concept. A good visual representation activates perceptual symbols that are essential for the construction of the represented concept, whereas a bad representation does the opposite. As a proof of concept, we conducted a clinical experiment in which participants received three different versions of a multimedia tutorial teaching the integral expression of electric potential. The three versions differed only in the details of the visual representation design; only one contained perceptual features that activate perceptual symbols essential for constructing the idea of “accumulation.” On a subsequent post-test, participants receiving this version of the tutorial significantly outperformed those who received the other two versions, which were designed to mimic conventional visual representations used in classrooms.

  10. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights and kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out on three benchmark datasets with different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL achieves encouraging performance, comparable to the state of the art, and outperforms canonical MKL.
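
    The sample-wise kernel combination at the heart of PS-MKL can be illustrated with a toy sketch. The per-sample weight matrix here is random rather than learned, and the symmetrized outer-product combination is one common way to keep the result a valid kernel matrix; neither detail is taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 4))

    # Two base kernels over the same samples: linear and RBF
    K_lin = X @ X.T
    sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    K_rbf = np.exp(-sq / 2.0)
    kernels = [K_lin, K_rbf]

    # Per-sample kernel weights mu[i, m]: each sample's weights sum to 1
    mu = rng.random((50, 2))
    mu /= mu.sum(axis=1, keepdims=True)

    # Sample-wise combination, symmetrized via outer products so K stays PSD
    K = sum(np.outer(mu[:, m], mu[:, m]) * kernels[m] for m in range(2))
    ```

    A canonical MKL would use a single weight per kernel for all samples; letting the weights vary per sample is what allows the combined kernel to adapt to intraclass diversity.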

  11. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights and kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out on three benchmark datasets with different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL achieves encouraging performance, comparable to the state of the art, and outperforms canonical MKL.

  12. Target templates: the precision of mental representations affects attentional guidance and decision-making in visual search.

    Science.gov (United States)

    Hout, Michael C; Goldinger, Stephen D

    2015-01-01

    When people look for things in the environment, they use target templates, mental representations of the objects they are attempting to locate, to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge of the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.

  13. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    Science.gov (United States)

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  14. A visual tracking method based on deep learning without online model updating

    Science.gov (United States)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. Given the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
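
    Combining a color histogram with a HOG-style gradient descriptor, as the abstract describes, might look like the following pure-NumPy sketch. The bin counts are arbitrary, and the single global orientation histogram is a crude stand-in for a real cell-based HOG (e.g. from OpenCV or scikit-image).

    ```python
    import numpy as np

    def color_histogram(patch, bins=8):
        """Per-channel intensity histogram over an RGB patch, jointly normalized."""
        hist = np.concatenate([
            np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)
        ]).astype(float)
        return hist / max(hist.sum(), 1.0)

    def hog_like(patch, bins=9):
        """Coarse magnitude-weighted histogram of gradient orientations."""
        gray = patch.mean(axis=2)
        gy, gx = np.gradient(gray)
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        return hist / max(hist.sum(), 1e-8)

    def patch_descriptor(patch):
        """Concatenate appearance (color) and shape (gradient) cues."""
        return np.concatenate([color_histogram(patch), hog_like(patch)])

    patch = np.random.default_rng(2).random((32, 32, 3)) * 255
    d = patch_descriptor(patch)  # 24 color bins + 9 orientation bins = 33 dims
    ```

    Candidate regions from the detector can then be scored by comparing their descriptors to the target's descriptor, which is the role the combined feature plays in selecting the tracking object.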

  15. Shape prior modeling using sparse representation and online dictionary learning.

    Science.gov (United States)

    Zhang, Shaoting; Zhan, Yiqiang; Zhou, Yan; Uzunbas, Mustafa; Metaxas, Dimitris N

    2012-01-01

    The recently proposed sparse shape composition (SSC) opens a new avenue for shape prior modeling. Instead of assuming any parametric model of shape statistics, SSC incorporates shape priors on the fly by approximating a shape instance (usually derived from appearance cues) with a sparse combination of shapes in a training repository. Theoretically, one can increase the modeling capability of SSC by including as many training shapes as possible in the repository. However, this strategy confronts two limitations in practice. First, since SSC involves an iterative sparse optimization at run-time, the more shape instances the repository contains, the less run-time efficiency SSC has. Therefore, a compact and informative shape dictionary is preferable to a large shape repository. Second, in medical imaging applications, training shapes seldom come in one batch. It is very time-consuming, and sometimes infeasible, to reconstruct the shape dictionary every time new training shapes appear. In this paper, we propose an online learning method to address these two limitations. Our method starts by constructing an initial shape dictionary using the K-SVD algorithm. When new training shapes arrive, instead of reconstructing the dictionary from the ground up, we update the existing one using a block-coordinate descent approach. Using the dynamically updated dictionary, sparse shape composition can be gracefully scaled up to model shape priors from a large number of training shapes without sacrificing run-time efficiency. Our method is validated on lung localization in X-ray and cardiac segmentation in MRI time series. Compared to the original SSC, it shows comparable performance while being significantly more efficient.
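
    The kind of online update described above, refreshing an existing dictionary with new batches instead of retraining from scratch, can be sketched with a Mairal-style block-coordinate step over atoms. This is a generic illustration, not the paper's method: the crude top-k sparse coder stands in for a proper pursuit algorithm, and all sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sparse_code(D, X, k=3):
        """Crude sparse coding: keep the k largest-magnitude coefficients
        of the least-squares solution (a stand-in for OMP)."""
        A = np.linalg.lstsq(D, X, rcond=None)[0]
        keep = np.argsort(-np.abs(A), axis=0)[:k]
        mask = np.zeros_like(A, dtype=bool)
        mask[keep, np.arange(A.shape[1])] = True
        return np.where(mask, A, 0.0)

    def update_dictionary(D, X, A):
        """One block-coordinate sweep over atoms given codes A for batch X."""
        B = X @ A.T                      # sufficient statistics
        C = A @ A.T
        for j in range(D.shape[1]):
            if C[j, j] < 1e-12:          # atom unused in this batch
                continue
            D[:, j] += (B[:, j] - D @ C[:, j]) / C[j, j]
            D[:, j] /= max(np.linalg.norm(D[:, j]), 1.0)  # keep norm <= 1
        return D

    dim, n_atoms = 16, 8
    D = rng.standard_normal((dim, n_atoms))
    D /= np.linalg.norm(D, axis=0)       # initial (here random) dictionary

    for _ in range(5):                   # new training batches arrive over time
        X = rng.standard_normal((dim, 30))
        A = sparse_code(D, X)
        D = update_dictionary(D, X, A)   # update in place, no full retraining
    ```

    Each incoming batch costs one coding pass and one atom sweep, which is why the dictionary scales to many training shapes without the run-time penalty of rebuilding it.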

  16. Multiple Learning Strategies Project. Small Engine Repair. Visually Impaired.

    Science.gov (United States)

    Foster, Don; And Others

    This instructional package designed for visually impaired students, focuses on the vocational area of small engine repair. Contained in this document are forty learning modules organized into fourteen units: engine block; starters; fuel tank, lines, filters and pumps; carburetors; electrical; test equipment; motorcycle; machining; tune-ups; short…

  17. Cueing and Anxiety in a Visual Concept Learning Task.

    Science.gov (United States)

    Turner, Philip M.

    This study investigated the relationship of two anxiety measures (the State-Trait Anxiety Inventory-Trait Form and the S-R Inventory of Anxiousness-Exam Form) to performance on a visual concept-learning task with embedded criterial information. The effect on anxiety reduction of cueing criterial information was also examined, and two levels of…

  18. Instructional Television: Visual Production Techniques and Learning Comprehension.

    Science.gov (United States)

    Silbergleid, Michael Ian

    The purpose of this study was to determine if increasing levels of complexity in visual production techniques would increase the viewer's learning comprehension and the degree of likeness expressed for a college level instructional television program. A total of 119 mass communications students at the University of Alabama participated in the…

  19. Can visual illusions be used to facilitate sport skill learning?

    NARCIS (Netherlands)

    Canal Bruland, R.; van der Meer, Y.; Moerman, J.

    2016-01-01

    Recently it has been reported that practicing putting with visual illusions that make the hole appear larger than it actually is leads to longer-lasting performance improvements. Interestingly, from a motor control and learning perspective, it may be possible to actually predict the opposite to

  20. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institution, some students find it hard to learn database design theory, in particular, database normalization. The purpose of this paper is to develop a visualization tool to give students an interactive hands-on experience in database normalization process. Design/methodology/approach: The model-view-controller architecture…

  1. Multiple Learning Strategies Project. Building Maintenance & Engineering. Visually Impaired.

    Science.gov (United States)

    Smith, Dwight; And Others

    This instructional package is designed for visually impaired students in the vocational area of building maintenance and engineering. The twenty-eight learning modules are organized into six units: floor care, general maintenance tasks; restrooms; carpet care; power and hand tools; and cabinet construction. Each module, printed in large block…

  2. V4 activity predicts the strength of visual short-term memory representations

    NARCIS (Netherlands)

    Sligte, I.G.; Scholte, H.S.; Lamme, V.A.F.

    2009-01-01

    Recent studies have shown the existence of a form of visual memory that lies intermediate of iconic memory and visual short-term memory (VSTM), in terms of both capacity (up to 15 items) and the duration of the memory trace (up to 4 s). Because new visual objects readily overwrite this intermediate

  3. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    Science.gov (United States)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process of sparse representation, one of the most widely used image representation theories in image fusion. Existing dictionary learning methods do not make good use of group-structure information or of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively uses grouped sparse coefficients to merge the images. The dictionary learning algorithm needs no prior knowledge about any group structure of the dictionary. By using the characteristics of the dictionary in expressing the signal, our algorithm can automatically find the desired potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes an activity-level judgement on the structure information when the images are merged. Therefore, the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the alternatives in terms of several objective evaluation metrics.
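
    An l1-norm maximum rule over grouped sparse coefficients can be illustrated as follows. The group partition, coefficient values, and group sizes here are invented for the sketch; the paper's algorithm discovers the groups from the dictionary itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Sparse coefficient vectors for the same patch from two source images,
    # partitioned into (hypothetical) groups of dictionary atoms
    coeffs_a = rng.laplace(size=12)
    coeffs_b = rng.laplace(size=12)
    groups = [slice(0, 4), slice(4, 8), slice(8, 12)]

    # l1-norm max rule: per group, keep the coefficients with higher activity level
    fused = np.empty(12)
    for g in groups:
        a, b = coeffs_a[g], coeffs_b[g]
        fused[g] = a if np.abs(a).sum() >= np.abs(b).sum() else b

    # The fused patch would then be reconstructed as D @ fused for dictionary D
    ```

    Judging activity per group rather than per coefficient is what lets the rule transfer coherent structures (edges, textures spanning several atoms) from whichever source image expresses them more strongly.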

  4. Can Visual Illusions Be Used to Facilitate Sport Skill Learning?

    Science.gov (United States)

    Cañal-Bruland, Rouwen; van der Meer, Yor; Moerman, Jelle

    2016-01-01

    Recently it has been reported that practicing putting with visual illusions that make the hole appear larger than it actually is leads to longer-lasting performance improvements. Interestingly, from a motor control and learning perspective, one might actually predict the opposite, as facing a smaller-appearing target should force performers to be more precise. To test this idea the authors invited participants to practice an aiming task (i.e., a marble-shooting task) with either a visual illusion that made the target appear larger or a visual illusion that made the target appear smaller. They applied a pre-post test design, included a control group training without any illusory effects, and increased the amount of practice to 450 trials. In contrast to earlier reports, the results revealed that the group that trained with the visual illusion that made the target look smaller improved performance from pre- to posttest, whereas the group practicing with visual illusions that made the target appear larger did not show any improvements. Notably, the control group also improved from pre- to posttest. The authors conclude that more research is needed to improve our understanding of whether and how visual illusions may be useful training tools for sport skill learning.

  5. Interrogating the Conventional Boundaries of Research Methods in Social Sciences: The Role of Visual Representation in Ethnography

    Directory of Open Access Journals (Sweden)

    Nel Glass

    2008-05-01

    Full Text Available The author will propose that the use of performative social science is a means to deliberately interrogate long-held conventions of established research. The innovative role of visual art representation in data collection, analysis and public engagement with research will be discussed. Examples will be drawn from two postmodern feminist ethnographic research studies which investigated academic professional development, resilience, hope and optimism in the UK, US, Australia and New Zealand from 1997-2005. Artwork was initially created as data collection and digitalised as representation to intentionally validate the voices of research participants, the researcher and viewers of the work. The research participants and viewers were given opportunities to actively engage with the visual work. Artwork complemented two additional research methods: critical conversational interviewing and reflective journaling. This paper will address the ways in which the inclusion of art methods contributed to and deepened data representation. The role of crafting artwork in the field, the artistic changes that represented the complexity of data analysis, and engagement with the work will be explored. It will be argued that the creation of and engagement with artwork in research is an empowering and dynamic process for researchers and participants. It is an innovative means of representing intersubjectivity that results in reciprocity. URN: urn:nbn:de:0114-fqs0802509

  6. Design of multiple representations e-learning resources based on a contextual approach for the basic physics course

    Science.gov (United States)

    Bakri, F.; Muliyati, D.

    2018-05-01

    This research aims to design e-learning resources with multiple representations based on a contextual approach for the Basic Physics Course. The research uses the research and development method following the Dick & Carey strategy. The development was carried out in the digital laboratory of the Physics Education Department, Mathematics and Science Faculty, Universitas Negeri Jakarta. The product development process produced an e-learning design for the Basic Physics Course presented in multiple representations within a contextual learning syntax. The representations used in the design of basic physics learning include: concept maps, video, figures, tables of experimental data, charts of those tables, verbal explanations, mathematical equations, worked problems and solutions, and exercises. The multiple representations are presented in the form of contextual learning through the stages: relating, experiencing, applying, transferring, and cooperating.

  7. Visual acuity and visual skills in Malaysian children with learning disabilities

    Directory of Open Access Journals (Sweden)

    Muzaliha MN

    2012-09-01

    Full Text Available Mohd-Nor Muzaliha,1 Buang Nurhamiza,1 Adil Hussein,1 Abdul-Rani Norabibas,1 Jaafar Mohd-Hisham-Basrun,1 Abdullah Sarimah,2 Seo-Wei Leo,3 Ismail Shatriah11Department of Ophthalmology, 2Biostatistics and Research Methodology Unit, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia; 3Paediatric Ophthalmology and Strabismus Unit, Department of Ophthalmology, Tan Tock Seng Hospital, SingaporeBackground: There are limited data in the literature concerning the visual status and skills of children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia.Methods: We examined 1010 children with learning disabilities aged between 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects.Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40, 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged.Conclusion: Although their visual acuity was satisfactory, nearly 30% of the

  8. Neural basis for dynamic updating of object representation in visual working memory.

    Science.gov (United States)

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically. Thus, object representations must support both dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation during dynamic updating of feature-bound objects, we identified a network during memory maintenance comprising the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representations of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and those sensitive to feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.

  9. Development of multi-representation learning tools for the course of fundamental physics

    Science.gov (United States)

    Huda, C.; Siswanto, J.; Kurniawan, A. F.; Nuroso, H.

    2016-08-01

    This research is aimed at designing a learning tool based on multi-representation that can improve problem solving skills. It used the research and development approach and was applied to the course of Fundamental Physics at Universitas PGRI Semarang in the 2014/2015 academic year. Results show a gain value of 0.68, indicating a medium improvement. The t-test yields a calculated t of 27.35 against a table t of 2.020 for df = 25 and α = 0.05. Mean scores increased from 23.45 on the pre-test to 76.15 on the post-test. Application of the multi-representation learning tools significantly improves students’ grades.

  10. DESIGNING VISUAL NOVEL CHARACTERS OF GAJAH MADA AND TRIBHUWANA TUNGGADEWI AS REPRESENTATION OF HISTORY FIGURES

    Directory of Open Access Journals (Sweden)

    Dendi Pratama

    2018-03-01

    Gajah Mada and Tribhuwana Tunggadewi are two historical figures of the Majapahit Kingdom. Both had great influence in expanding the power of the Majapahit Kingdom. These two historical figures can be presented as playable characters in an educational visual novel, in particular by representing them through interactive media that appeals to teenagers. At present, few visual novels feature Indonesian historical settings. This study designs the characters of Gajah Mada and Tribhuwana Tunggadewi in the context of visual communication design. The discussion of these visual novel characters uses a qualitative approach with a structural semiotics method, namely designing the message through the visual elements of line, shape, texture, and color. The study shows that a character's costume design represents informational meaning about an influential figure in the kingdom; the accessory design represents the symbolic meaning of nobility; and the face and posture designs represent the elegance and strength of the character at the level of imagery. The resulting character designs are expected to give teenagers a picture of the historical figures of the Majapahit Kingdom. Keywords: visual elements, character, visual novel, structural semiotics

  11. Accurate or assumed: visual learning in children with ASD.

    Science.gov (United States)

    Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl

    2015-10-01

    Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.

  12. Children’s Learning from Touch Screens: A Dual Representation Perspective

    Science.gov (United States)

    Sheehan, Kelly J.; Uttal, David H.

    2016-01-01

    Parents and educators often expect that children will learn from touch screen devices, such as during joint e-book reading. Therefore an essential question is whether young children understand that the touch screen can be a symbolic medium – that entities represented on the touch screen can refer to entities in the real world. Research on symbolic development suggests that symbolic understanding requires that children develop dual representational abilities, meaning children need to appreciate that a symbol is an object in itself (i.e., picture of a dog) while also being a representation of something else (i.e., the real dog). Drawing on classic research on symbols and new research on children’s learning from touch screens, we offer the perspective that children’s ability to learn from the touch screen as a symbolic medium depends on the effect of interactivity on children’s developing dual representational abilities. Although previous research on dual representation suggests the interactive nature of the touch screen might make it difficult for young children to use as a symbolic medium, the unique interactive affordances may help alleviate this difficulty. More research needs to investigate how the interactivity of the touch screen affects children’s ability to connect the symbols on the screen to the real world. Given the interactive nature of the touch screen, researchers and educators should consider both the affordances of the touch screen as well as young children’s cognitive abilities when assessing whether young children can learn from it as a symbolic medium. PMID:27570516

  13. Learning of Multimodal Representations With Random Walks on the Click Graph.

    Science.gov (United States)

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

    In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from the users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationship between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representation of vertices and their corresponding deep neural network output, the proposed model, named multimodal random walk neural network (MRW-NN), can not only learn robust representations of the existing multimodal data in the click graph, but also deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log data set, Clickture, and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than the other state-of-the-art methods.
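
The click-graph idea above can be illustrated with a toy sketch. This is not the authors' MRW-NN implementation; the graph, names, and walk parameters are all hypothetical, and it only shows how truncated random walks on a bipartite query-image click graph yield the vertex/context pairs that a random-walk loss could then consume:

```python
import random

# Toy bipartite click graph: queries <-> images, edges = observed clicks.
# Hypothetical data; the paper's Clickture graph is far larger.
CLICKS = {
    "q:red car": ["img1", "img2"],
    "q:sports car": ["img2", "img3"],
    "img1": ["q:red car"],
    "img2": ["q:red car", "q:sports car"],
    "img3": ["q:sports car"],
}

def truncated_walk(graph, start, length, rng):
    """Sample one truncated random walk of `length` steps from `start`."""
    walk = [start]
    for _ in range(length):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

def cooccurrence_pairs(graph, walks_per_node=10, length=4, seed=0):
    """Generate (vertex, context) pairs; pairs like a query co-occurring with
    an image it was never directly clicked with capture implicit relevance."""
    rng = random.Random(seed)
    pairs = []
    for v in graph:
        for _ in range(walks_per_node):
            walk = truncated_walk(graph, v, length, rng)
            pairs.extend((walk[0], u) for u in walk[1:])
    return pairs

pairs = cooccurrence_pairs(CLICKS)
```

Because the graph is bipartite, a walk started at a query alternates between images and queries, so the pairs connect queries to images reachable through shared clicks.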

  14. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  15. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
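
As a rough illustration of the baseline these two records build on, the Laplacian (graph-regularized) sparse coding objective can be evaluated as follows. This is only a sketch of a Gao et al. (2013)-style objective; it omits the paper's feature selection and multiple kernel learning extensions, and the function name and toy inputs are my own:

```python
import numpy as np

def laplacian_sparse_coding_objective(X, D, S, W, alpha=0.1, beta=0.1):
    """Objective of graph-regularized (Laplacian) sparse coding:
        ||X - D S||_F^2 + alpha * ||S||_1 + beta * tr(S L S^T),
    where L = diag(row sums of W) - W is the Laplacian of affinity graph W.
    X: (d, n) data, D: (d, k) dictionary, S: (k, n) sparse codes."""
    L = np.diag(W.sum(axis=1)) - W
    recon = np.linalg.norm(X - D @ S, "fro") ** 2   # reconstruction error
    sparsity = alpha * np.abs(S).sum()              # L1 sparsity penalty
    smooth = beta * np.trace(S @ L @ S.T)           # codes vary smoothly on graph
    return recon + sparsity + smooth
```

The Laplacian term penalizes neighboring samples (high affinity in W) receiving dissimilar codes, which is exactly the term whose reliability degrades when W is built from noisy features.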

  16. Short-term perceptual learning in visual conjunction search.

    Science.gov (United States)

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  17. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  18. Interaction of Instrumental and Goal-Directed Learning Modulates Prediction Error Representations in the Ventral Striatum.

    Science.gov (United States)

    Guo, Rong; Böhmer, Wendelin; Hebart, Martin; Chien, Samson; Sommer, Tobias; Obermayer, Klaus; Gläscher, Jan

    2016-12-14

    Goal-directed and instrumental learning are both important controllers of human behavior. Learning which stimulus events occur in the environment and the rewards associated with them allows humans to seek out the most valuable stimulus and move through the environment in a goal-directed manner. Stimulus-response associations are characteristic of instrumental learning, whereas response-outcome associations are the hallmark of goal-directed learning. Here we provide behavioral, computational, and neuroimaging results from a novel task in which stimulus-response and response-outcome associations are learned simultaneously but dominate behavior at different stages of the experiment. We found that prediction error representations in the ventral striatum depend on which type of learning dominates. Furthermore, the amygdala tracks the time-dependent weighting of stimulus-response versus response-outcome learning. Our findings suggest that the goal-directed and instrumental controllers dynamically engage the ventral striatum in representing prediction errors whenever one of them is dominating choice behavior. Converging evidence in human neuroimaging studies has shown that reward prediction errors are correlated with activity in the ventral striatum. Our results demonstrate that this region is simultaneously correlated with a stimulus prediction error. Furthermore, the learning system that is currently dominating behavioral choice dynamically engages the ventral striatum for computing its prediction errors. This demonstrates that prediction error representations are highly dynamic and influenced by various experimental contexts. This finding points to a general role of the ventral striatum in detecting expectancy violations and encoding error signals regardless of the specific nature of the reinforcer itself. Copyright © 2016 the authors 0270-6474/16/3612650-11$15.00/0.
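
The prediction errors discussed here are commonly formalized with a delta rule. The following minimal sketch is not the study's computational model (which distinguishes stimulus- from reward-prediction errors and weights two controllers); it only shows the basic update, with hypothetical names:

```python
def delta_rule(values, state, reward, alpha=0.2):
    """One prediction-error update: delta = reward - V(state), V += alpha * delta.
    Returns the prediction error and the updated value table."""
    delta = reward - values[state]                      # prediction error
    values = {**values, state: values[state] + alpha * delta}
    return delta, values

V = {"cue": 0.0}
for _ in range(50):                                     # 50 rewarded trials
    delta, V = delta_rule(V, "cue", 1.0)
```

As the value estimate converges on the delivered reward, the prediction error shrinks toward zero, which is the signal pattern typically correlated with ventral striatal activity.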

  19. The Effect of Content Representation Design Principles on Users' Intuitive Beliefs and Use of E-Learning Systems

    Science.gov (United States)

    Al-Samarraie, Hosam; Selim, Hassan; Zaqout, Fahed

    2016-01-01

    A model is proposed to assess the effect of different content representation design principles on learners' intuitive beliefs about using e-learning. We hypothesized that the impact of the representation of course contents is mediated by the design principles of alignment, quantity, clarity, simplicity, and affordance, which influence the…

  20. Visual tracking based on the sparse representation of the PCA subspace

    Science.gov (United States)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to get the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift which is caused by the inaccurate update. In the experiment, we test the algorithm on 9 sequences, and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
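
A minimal sketch of the representation step, assuming a fixed PCA basis U: alternate a closed-form ridge update for the subspace coefficients with a soft-thresholding update for the sparse residual. The parameter values and this simple alternating scheme are illustrative, not the paper's exact iterative method:

```python
import numpy as np

def soft(x, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_pca_representation(x, U, lam=0.1, gamma=0.01, iters=20):
    """Alternately solve  min_{c,e} ||x - U c - e||^2 + lam*||e||_1 + gamma*||c||^2.
    c: coefficients in the PCA subspace U, e: sparse residual (e.g., occlusion)."""
    k = U.shape[1]
    A = np.linalg.inv(U.T @ U + gamma * np.eye(k)) @ U.T  # ridge solution operator
    e = np.zeros_like(x)
    for _ in range(iters):
        c = A @ (x - e)                 # closed-form coefficient update
        e = soft(x - U @ c, lam / 2)    # sparse residual absorbs outlier pixels
    return c, e
```

In a tracker, the candidate whose reconstruction error (after discounting the sparse residual) is smallest would be selected inside the particle filter.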

  1. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin; Di, Huang; Morvan, Jean-Marie; Chen, Liming

    2011-01-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively

  2. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of

  3. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    Science.gov (United States)

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals. Multi-view JCR and JSR features are then integrated and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
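
Collaborative representation, used above as the data-coding step, has a closed-form ridge solution. A single-channel sketch follows; the paper's joint multi-channel JCR model is more involved, and the dictionary here is just any matrix of training signals:

```python
import numpy as np

def collaborative_representation(D, x, lam=0.01):
    """Collaborative representation code: argmin_s ||x - D s||^2 + lam * ||s||^2,
    solved in closed form as s = (D^T D + lam I)^{-1} D^T x.
    D: (d, n) dictionary of training signals (e.g., EEG epochs), x: (d,) signal."""
    n = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ x)
```

Unlike L1-based sparse representation, this code uses all dictionary atoms collaboratively, which is why it is cheap to compute and was attractive here as a feature-learning step.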

  4. The reliability of retro-cues determines the fate of noncued visual working memory representations

    NARCIS (Netherlands)

    Günseli, E.; van Moorselaar, D.; Meeter, M.; Olivers, C.N.L.

    2015-01-01

    Retrospectively cueing an item retained in visual working memory during maintenance is known to improve its retention. However, studies have provided conflicting results regarding the costs of such retro-cues for the noncued items, leading to different theories on the mechanisms behind visual

  5. Neural Correlates of Visual Short-term Memory Dissociate between Fragile and Working Memory Representations

    NARCIS (Netherlands)

    Vandenbroucke, A.R.; Sligte, I.G.; Vries, J.G. de; Cohen, M.S.; Lamme, V.A.F.

    2015-01-01

    Evidence is accumulating that the classic two-stage model of visual STM (VSTM), comprising iconic memory (IM) and visual working memory (WM), is incomplete. A third memory stage, termed fragile VSTM (FM), seems to exist in between IM and WM [Vandenbroucke, A. R. E., Sligte, I. G., & Lamme, V. A. F.

  6. Neural correlates of visual short-term memory dissociate between fragile and working memory representations

    NARCIS (Netherlands)

    Vandenbroucke, A.R.E.; Sligte, I.G.; de Vries, J.G.; Cohen, M.X.; Lamme, V.A.F.

    2015-01-01

    Evidence is accumulating that the classic two-stage model of visual STM (VSTM), comprising iconic memory (IM) and visual working memory (WM), is incomplete. A third memory stage, termed fragile VSTM (FM), seems to exist in between IM and WM [Vandenbroucke, A. R. E., Sligte, I. G., & Lamme, V. A. F.

  7. The Effect of Visual Representation Style in Problem-Solving : A Perspective from Cognitive Processes

    NARCIS (Netherlands)

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a

  8. The Interrelations between Diagrammatic Representations and Verbal Explanations in Learning from Social Science Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Describes study that examined the instructional effectiveness of abstract diagrams and verbal explanations in learning from social science texts. The control and treatment groups of adult learners at Everyman's University (Israel) are described, verbal and visual aptitude tests are explained, and results are analyzed. (25 references) (Author/LRW)

  9. Children's Visual Representations of Food and Meal Time: Towards an Understanding of Nutrition and Educational Practices

    Science.gov (United States)

    Savoie-Zajc, Lorraine

    2005-01-01

    Within the broad perspective of school and social exclusion, this article pays attention to an important factor of exclusion: overweight and obesity in primary school children. An interdisciplinary research was conducted and aimed at the study of social representations and practices surrounding food which primary school children, their parents and…

  10. Improving of Junior High School Visual Thinking Representation Ability in Mathematical Problem Solving by CTL

    Science.gov (United States)

    Surya, Edy; Sabandar, Jozua; Kusumah, Yaya S.; Darhim

    2013-01-01

    The students' difficulty which was found is in the problem of understanding, drawing diagrams, reading the charts correctly, conceptual formal mathematical understanding, and mathematical problem solving. The appropriate problem representation is the basic way in order to understand the problem itself and make a plan to solve it. This research was…

  11. Refinement of learned skilled movement representation in motor cortex deep output layer

    Science.gov (United States)

    Li, Qian; Ko, Ho; Qian, Zhong-Ming; Yan, Leo Y. C.; Chan, Danny C. W.; Arbuthnott, Gordon; Ke, Ya; Yung, Wing-Ho

    2017-01-01

    The mechanisms underlying the emergence of learned motor skill representation in primary motor cortex (M1) are not well understood. Specifically, how motor representation in the deep output layer 5b (L5b) is shaped by motor learning remains virtually unknown. In rats undergoing motor skill training, we detect a subpopulation of task-recruited L5b neurons that not only become more movement-encoding, but their activities are also more structured and temporally aligned to motor execution with a timescale of refinement in tens-of-milliseconds. Field potentials evoked at L5b in vivo exhibit persistent long-term potentiation (LTP) that parallels motor performance. Intracortical dopamine denervation impairs motor learning, and disrupts the LTP profile as well as the emergent neurodynamical properties of task-recruited L5b neurons. Thus, dopamine-dependent recruitment of L5b neuronal ensembles via synaptic reorganization may allow the motor cortex to generate more temporally structured, movement-encoding output signal from M1 to downstream circuitry that drives increased uniformity and precision of movement during motor learning. PMID:28598433

  12. Reinforcement learning for dpm of embedded visual sensor nodes

    International Nuclear Information System (INIS)

    Khani, U.; Sadhayo, I. H.

    2014-01-01

    This paper proposes an RL (Reinforcement Learning) based DPM (Dynamic Power Management) technique to learn time-out policies during the operation of a visual sensor node that has multiple power/performance states. As opposed to the widely used static time-out policies, our proposed DPM policy, also referred to as OLTP (Online Learning of Time-out Policies), learns to dynamically change the time-out decisions in the different node states, including the non-operational states. The selection of time-out values in different power/performance states of a visual sensing platform is based on the workload estimates derived from an ML-ANN (Multi-Layer Artificial Neural Network) and an objective function given by weighted performance and power parameters. The DPM approach is also able to dynamically adjust the power-performance weights online to satisfy a given constraint of either power consumption or performance. Results show that the proposed learning algorithm explores the power-performance tradeoff with non-stationary workload and outperforms other DPM policies. It also performs the online adjustment of the tradeoff parameters in order to meet a user-specified constraint. (author)
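
Learning a time-out policy online can be sketched as a tiny bandit-style Q-learning loop. Everything below (the time-out candidates, the cost model, the weights) is invented for illustration; the paper's OLTP additionally uses ANN-based workload estimates and per-state policies:

```python
import random

TIMEOUTS = [1, 5, 10, 50]  # candidate time-out values (hypothetical units)

def cost(timeout, idle_time=8, w_power=0.5):
    """Toy weighted power/performance cost: staying awake wastes power, while
    sleeping before the next request costs a wake-up latency penalty.
    The weighting mirrors the paper's objective; the numbers are made up."""
    power = min(timeout, idle_time)             # energy burned while idle
    latency = 3 if timeout < idle_time else 0   # slept before a request arrived
    return w_power * power + (1 - w_power) * latency

def learn_timeout_policy(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over time-out actions; returns the learned
    best (lowest expected cost) time-out."""
    rng = random.Random(seed)
    q = {t: 0.0 for t in TIMEOUTS}
    for _ in range(episodes):
        t = rng.choice(TIMEOUTS) if rng.random() < eps else min(q, key=q.get)
        q[t] += alpha * (cost(t) - q[t])  # track expected cost of each time-out
    return min(q, key=q.get)
```

With a deterministic cost this converges to the cheapest time-out; with the paper's non-stationary workloads, the same update keeps adapting as the cost landscape shifts.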

  13. Alpha-Band Activity Reveals Spontaneous Representations of Spatial Position in Visual Working Memory.

    Science.gov (United States)

    Foster, Joshua J; Bsales, Emma M; Jaffe, Russell J; Awh, Edward

    2017-10-23

    An emerging view suggests that spatial position is an integral component of working memory (WM), such that non-spatial features are bound to locations regardless of whether space is relevant [1, 2]. For instance, past work has shown that stimulus position is spontaneously remembered when non-spatial features are stored. Item recognition is enhanced when memoranda appear at the same location where they were encoded [3-5], and accessing non-spatial information elicits shifts of spatial attention to the original position of the stimulus [6, 7]. However, these findings do not establish that a persistent, active representation of stimulus position is maintained in WM because similar effects have also been documented following storage in long-term memory [8, 9]. Here we show that the spatial position of the memorandum is actively coded by persistent neural activity during a non-spatial WM task. We used a spatial encoding model in conjunction with electroencephalogram (EEG) measurements of oscillatory alpha-band (8-12 Hz) activity to track active representations of spatial position. The position of the stimulus varied trial to trial but was wholly irrelevant to the tasks. We nevertheless observed active neural representations of the original stimulus position that persisted throughout the retention interval. Further experiments established that these spatial representations are dependent on the volitional storage of non-spatial features rather than being a lingering effect of sensory energy or initial encoding demands. These findings provide strong evidence that online spatial representations are spontaneously maintained in WM-regardless of task relevance-during the storage of non-spatial features. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Mexican high school students' social representations of mathematics, its teaching and learning

    Science.gov (United States)

    Martínez-Sierra, Gustavo; Miranda-Tirado, Marisa

    2015-07-01

    This paper reports a qualitative research that identifies Mexican high school students' social representations of mathematics. For this purpose, the social representations of 'mathematics', 'learning mathematics' and 'teaching mathematics' were identified in a group of 50 students. Focus group interviews were carried out in order to obtain the data. The constant comparative style was the strategy used for the data analysis because it allowed the categories to emerge from the data. The students' social representations are: (A) Mathematics is…(1) important for daily life, (2) important for careers and for life, (3) important because it is in everything that surrounds us, (4) a way to solve problems of daily life, (5) calculations and operations with numbers, (6) complex and difficult, (7) exact, and (8) a subject that develops thinking skills; (B) To learn mathematics is…(1) to possess knowledge to solve problems, (2) to be able to solve everyday problems, (3) to be able to make calculations and operations, and (4) to think logically to be able to solve problems; and (C) To teach mathematics is…(1) to transmit knowledge, (2) to know to share it, (3) to transmit the reasoning ability, and (4) to show how to solve problems.

  15. Sparse Representation Based Multi-Instance Learning for Breast Ultrasound Image Classification

    Directory of Open Access Journals (Sweden)

    Lu Bing

    2017-01-01

    We propose a novel method based on sparse representation for breast ultrasound image classification under the framework of multi-instance learning (MIL). After image enhancement and segmentation, concentric circles are used to extract the global and local features for improving the accuracy in diagnosis and prediction. The classification problem of ultrasound image is converted to a sparse representation based MIL problem. Each instance of a bag is represented as a sparse linear combination of all basis vectors in the dictionary, and then the bag is represented by one feature vector which is obtained via sparse representations of all instances within the bag. The sparse MIL problem is further converted to a conventional learning problem that is solved by relevance vector machine (RVM). Results of single classifiers are combined to be used for classification. Experimental results on the breast cancer datasets demonstrate the superiority of the proposed method in terms of classification accuracy as compared with state-of-the-art MIL methods.
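
A toy sketch of the bag-encoding idea: code each instance against a dictionary, then pool the instance codes into one bag-level vector. The 1-sparse coding and max-pooling used here are crude stand-ins for the paper's full sparse solver and bag representation, chosen so the sketch stays self-contained:

```python
import numpy as np

def sparse_codes(instances, dictionary):
    """1-sparse codes: approximate each instance by its single best-matching
    dictionary atom (a simplification of a full sparse-coding solver)."""
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)
    corr = instances @ D                       # (n_instances, n_atoms)
    codes = np.zeros_like(corr)
    rows = np.arange(len(corr))
    best = np.argmax(np.abs(corr), axis=1)     # strongest atom per instance
    codes[rows, best] = corr[rows, best]
    return codes

def bag_feature(instances, dictionary):
    """MIL bag descriptor: pool instance-level sparse codes into one vector
    (max-pooling here; the pooling choice is an assumption)."""
    return np.max(np.abs(sparse_codes(instances, dictionary)), axis=0)
```

The resulting fixed-length bag vector is what a conventional classifier such as an RVM can then consume, which is how the MIL problem becomes a standard learning problem.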

  16. Brain signal complexity rises with repetition suppression in visual learning.

    Science.gov (United States)

    Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah

    2016-06-21

    Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity has yet to be demonstrated within the same measurements. We hypothesized that RS and increases in brain signal complexity occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information.
Such processes likely involve long-range coordinated activity between prefrontal and lower order visual

  17. Influence of visual observational conditions on tongue motor learning

    DEFF Research Database (Denmark)

    Kothari, Mohit; Liu, Xuimei; Baad-Hansen, Lene

    2016-01-01

    To investigate the impact of visual observational conditions on performance during a standardized tongue-protrusion training (TPT) task and to evaluate subject-based reports of helpfulness, disturbance, pain, and fatigue due to the observational conditions on 0-10 numerical rating scales. Forty...... regarding the level of disturbance, pain or fatigue. Self-observation of tongue-training facilitated behavioral aspects of tongue motor learning compared with model-observation but not compared with control....

  18. Making perceptual learning practical to improve visual functions.

    Science.gov (United States)

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, a debate regarding the proposed mechanism underlying perceptual learning is an ongoing issue. Nevertheless, generalization of a trained task to other functions is an important key, for both understanding the neural mechanisms and the practical value of the training. This manuscript describes a structured perceptual learning method previously used for amblyopia and myopia, and a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished the lateral suppression when it existed (amblyopia). The improvement was transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of the spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited, as they were able to eliminate the need for reading glasses. Thus, here we show that the transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to an improvement in unrelated visual functions. Thus, perceptual learning can be a practical method to improve visual functions in people with impaired or blurred vision.
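
The Gabor targets used in this kind of contrast-detection training are an oriented cosine grating under a Gaussian envelope. A minimal sketch with hypothetical parameter values (the study's actual spatial frequencies and envelope widths are not given here):

```python
import numpy as np

def gabor_patch(size=64, wavelength=8.0, sigma=8.0, theta=0.0, phase=0.0):
    """Luminance profile of a Gabor target: cosine grating of the given
    wavelength (pixels/cycle) and orientation theta (radians), windowed by an
    isotropic Gaussian envelope. Values lie in [-1, 1] around mean luminance."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate the grating axis
    grating = np.cos(2 * np.pi * xr / wavelength + phase)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return grating * envelope
```

Lateral masking displays place flanking Gabors of the same orientation above and below such a target; varying target contrast then probes detection thresholds.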

  19. Visual representation of costs in the productive process: a case study on a footwear industry in Portugal

    Directory of Open Access Journals (Sweden)

    Levi da Silva Guimarães

    2015-12-01

    Over the last decades, conventional production systems have gone through changes in the face of intensified competition among companies. The occurrence of these changes has boosted the development of decision-making assistance tools for the production systems. However, most of these instruments do not allow the visualization of the costs involved throughout industrial operations. This study comprises the integration of the "Waste Identification Diagrams" (WID), a current tool for visualization and analysis of production processes, with "Time-Driven Activity-Based Costing" (TDABC), a strategic cost management tool, seeking to create a model that visually demonstrates waste and relates its occurrence to operating costs. For that, the research adopted a descriptive-exploratory approach, based on a case study carried out in a footwear industry. The analysis showed that the integration of tools allowed for the representation of costs based on the time equations from the TDABC, associated with the visualization of the production process by the WID. The study concludes that the WID can be integrated with the TDABC tool, creating a management model for making decisions based on the operating costs of the production process.
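
TDABC's core computation, referenced above via its time equations, multiplies a capacity cost rate by the time each activity is estimated to consume. A toy sketch with invented footwear-operation numbers (the case study's actual rates and drivers are not reproduced here):

```python
def tdabc_cost(activities, capacity_cost_rate):
    """Time-Driven ABC: cost = capacity cost rate (currency per minute) times
    total estimated minutes. Each activity's time equation is a base time plus
    additional time-driver terms."""
    total_minutes = sum(base + sum(extras) for base, extras in activities)
    return capacity_cost_rate * total_minutes

# Hypothetical footwear operations: (base minutes, [extra time-driver minutes])
operations = [
    (2.0, [0.5]),        # cutting, +0.5 min for a leather upper
    (3.0, [1.0, 0.5]),   # stitching, +1.0 decorative seam, +0.5 rework
    (1.5, []),           # assembly
]
cost = tdabc_cost(operations, capacity_cost_rate=0.8)  # 0.8 currency units/min
```

In the integrated model, each WID block's waste times would feed such time equations, so that a longer queue or setup shows up directly as an operating cost.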

  20. Exploring the relation between visualizer-verbalizer cognitive styles and performance with visual or verbal learning material

    NARCIS (Netherlands)

    Kolloffel, Bas Jan

    2012-01-01

    A student might find a certain representational format (e.g., diagram, text) more attractive than other formats for learning. Computer technology offers opportunities to adjust the formats used in learning environments to the preferences of individual learners. The question addressed in the current