WorldWideScience

Sample records for visual learning method

  1. Reflexive Learning through Visual Methods

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2014-01-01

    What. This chapter concerns how visual methods and visual materials can support visually oriented, collaborative, and creative learning processes in education. The focus is on facilitation (guiding, teaching) with visual methods in learning processes that are designerly or involve design. Visual...... methods are exemplified through two university classroom cases about collaborative idea generation processes. The visual methods and materials in the cases are photo elicitation using photo cards, and modeling with LEGO Serious Play sets. Why. The goal is to encourage the reader, whether student...... or professional, to facilitate with visual methods in a critical, reflective, and experimental way. The chapter offers recommendations for facilitating with visual methods to support playful, emergent designerly processes. The chapter also has a critical, situated perspective. Where. This chapter offers case...

  2. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, Bianca; Boonstra, F. Nienke; Cox, Ralf F. A.; van Rens, Ger; Cillessen, Antonius H. N.

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  3. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; van Rens, G.H.M.B.; Cillessen, A.H.N.

    2013-01-01

    Purpose. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Methods. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  4. Perceptual learning in children with visual impairment improves near visual acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.; Rens, G. van; Cillessen, A.H.

    2013-01-01

    PURPOSE: This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS: Participants were 45 children with visual impairment and 29 children with normal vision. Children

  5. Perceptual Learning in Children With Visual Impairment Improves Near Visual Acuity

    NARCIS (Netherlands)

    Huurneman, B.; Boonstra, F.N.; Cox, R.F.A.; Rens, G.H.M.B. van; Cillessen, A.H.N.

    2013-01-01

    PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children

  6. Computer-enhanced visual learning method: a paradigm to teach and document surgical skills.

    Science.gov (United States)

    Maizels, Max; Mickelson, Jennie; Yerkes, Elizabeth; Maizels, Evelyn; Stork, Rachel; Young, Christine; Corcoran, Julia; Holl, Jane; Kaplan, William E

    2009-09-01

    Changes in health care are stimulating residency training programs to develop new methods for teaching surgical skills. We developed Computer-Enhanced Visual Learning (CEVL) as an innovative Internet-based learning and assessment tool. The CEVL method uses the educational procedures of deliberate practice and performance to teach and learn surgery in a stylized manner. CEVL is a learning and assessment tool that can provide students and educators with quantitative feedback on learning a specific surgical procedure. The method examines quantitative data on improvement in surgical skills. Herein, we qualitatively describe the method and show how program directors (PDs) may implement this technique in their residencies. CEVL allows an operation to be broken down into teachable components. The process relies on feedback and remediation to improve performance, with a focus on learning that is applicable to the next case being performed. CEVL has been shown to be effective for teaching pediatric orchiopexy and is being adapted to additional adult and pediatric procedures and to office examination skills. The CEVL method is available to other residency training programs.

  7. Visual teaching and learning in the fields of engineering

    Directory of Open Access Journals (Sweden)

    Kyvete S. Shatri

    2015-11-01

    Full Text Available Engineering education today faces numerous demands that are closely connected with a globalized economy. One of these requirements is to attract the engineers of the future, who are characterized by strong analytical skills, creativity, ingenuity, professionalism, intercultural communication and leadership. To achieve this, effective teaching methods should be used to facilitate and enhance students' learning and their performance in general, making them able to cope with the market demands of a globalized economy. One such method is visualization, a very important method that increases student learning. A visual approach in science and in engineering also improves communication and critical thinking and provides an analytical approach to various problems. Therefore, this research investigates the effect of the use of visualization in the process of teaching and learning in engineering fields and encourages teachers and students to use visual methods for teaching and learning. The results of this research highlight the positive effect that the use of visualization has on the learning process of students and their overall performance. In addition, innovative teaching methods have a good effect on the improvement of the situation. Visualization motivates students to learn, making them more cooperative and developing their communication skills.

  8. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies.

  9. Feature and Region Selection for Visual Learning.

    Science.gov (United States)

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
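
    As context for this record (and not the authors' implementation), the toy Python sketch below shows the scaffolding the paper builds on: bag-of-visual-words histograms classified with a precomputed χ² kernel SVM, with a vector of per-word weights standing in for the latent feature weights that the paper optimizes jointly with the classifier by a reduced gradient method. The data, the uniform weights, and the gamma value are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import chi2_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Toy bag-of-visual-words histograms: 40 samples, 50 visual words, 2 classes
    X = rng.random((40, 50))
    X /= X.sum(axis=1, keepdims=True)   # L1-normalise each histogram
    y = np.repeat([0, 1], 20)

    w = np.ones(50)                     # per-word weights (latent in the paper; uniform here)
    K = chi2_kernel(X * w, gamma=0.5)   # chi-square kernel on the reweighted histograms
    clf = SVC(kernel="precomputed").fit(K, y)
    print("training accuracy:", clf.score(K, y))
    ```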

  10. A visual tracking method based on deep learning without online model updating

    Science.gov (United States)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight representative tracking video sequences from the baseline dataset, compared with six state-of-the-art methods, the proposed method is more robust to challenging factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter, and its overall performance is better than that of the other six tracking methods.
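
    For orientation only, here is a small sketch of the hand-crafted half of such a pipeline: a HOG descriptor concatenated with a coarse colour histogram, and a candidate window scored against the target template by cosine similarity. The SSD detector, the multi-scale searching map, and the exact fusion used in the paper are not reproduced; the image and window coordinates are arbitrary.

    ```python
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.data import astronaut
    from skimage.feature import hog
    from skimage.transform import resize

    def describe(patch):
        """Concatenate a HOG descriptor with a coarse 8x8x8 RGB colour histogram."""
        h = hog(rgb2gray(patch))
        c, _ = np.histogramdd(patch.reshape(-1, 3), bins=(8, 8, 8), range=[(0, 1)] * 3)
        return np.concatenate([h, c.ravel() / c.sum()])

    img = resize(astronaut(), (128, 128), anti_aliasing=True)  # float image in [0, 1]
    template = describe(img[32:96, 32:96])                     # appearance model of the target
    candidate = describe(img[40:104, 40:104])                  # a nearby candidate window
    score = template @ candidate / (np.linalg.norm(template) * np.linalg.norm(candidate))
    print(f"candidate matching score: {score:.3f}")
    ```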

  11. Learning semantic and visual similarity for endomicroscopy video retrieval.

    Science.gov (United States)

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method statistically improves the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them.

  12. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
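
    As an illustrative aside rather than any model from the review, the reweighting idea can be caricatured as a delta-rule readout of noisy orientation channels: discrimination improves as the weights on task-relevant channels grow. The channel tuning, noise level, and learning rate below are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    prefs = np.linspace(-40, 40, 20)      # preferred orientations of 20 channels (degrees)
    lr, noise = 0.05, 0.3                 # learning rate and internal noise (assumed)

    def channel_responses(stim_deg):
        clean = np.exp(-0.5 * ((prefs - stim_deg) / 15.0) ** 2)
        return clean + rng.normal(0.0, noise, prefs.size)

    w = np.zeros(prefs.size)              # readout weights, improved by reweighting
    correct = []
    for _ in range(2000):                 # discriminate -5 deg vs +5 deg tilts
        stim = rng.choice([-5.0, 5.0])
        target = np.sign(stim)
        r = channel_responses(stim)
        decision = 1.0 if w @ r >= 0 else -1.0
        correct.append(decision == target)
        w += lr * (target - np.tanh(w @ r)) * r   # delta-rule update of the readout
    print("accuracy, first vs last 200 trials:",
          round(np.mean(correct[:200]), 2), round(np.mean(correct[-200:]), 2))
    ```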

  13. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  14. Visual acuity and visual skills in Malaysian children with learning disabilities

    Directory of Open Access Journals (Sweden)

    Muzaliha MN

    2012-09-01

    Full Text Available Mohd-Nor Muzaliha, Buang Nurhamiza, Adil Hussein, Abdul-Rani Norabibas, Jaafar Mohd-Hisham-Basrun, Abdullah Sarimah, Seo-Wei Leo, Ismail Shatriah (Department of Ophthalmology and Biostatistics and Research Methodology Unit, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia; Paediatric Ophthalmology and Strabismus Unit, Department of Ophthalmology, Tan Tock Seng Hospital, Singapore). Background: There is limited data in the literature concerning the visual status and skills of children with learning disabilities, particularly within the Asian population. This study aimed to determine visual acuity and visual skills in children with learning disabilities in primary schools within the suburban Kota Bharu district in Malaysia. Methods: We examined 1010 children with learning disabilities aged 8–12 years from 40 primary schools in the Kota Bharu district, Malaysia, from January 2009 to March 2010. These children were identified based on their performance in a screening test known as the Early Intervention Class for Reading and Writing Screening Test conducted by the Ministry of Education, Malaysia. Complete ocular examinations and visual skills assessment included near point of convergence, amplitude of accommodation, accommodative facility, convergence break and recovery, divergence break and recovery, and developmental eye movement tests for all subjects. Results: A total of 4.8% of students had visual acuity worse than 6/12 (20/40), 14.0% had convergence insufficiency, 28.3% displayed poor accommodative amplitude, and 26.0% showed signs of accommodative infacility. A total of 12.1% of the students had poor convergence break, 45.7% displayed poor convergence recovery, 37.4% showed poor divergence break, and 66.3% were noted to have poor divergence recovery. The mean horizontal developmental eye movement was significantly prolonged. Conclusion: Although their visual acuity was satisfactory, nearly 30% of the

  15. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    Science.gov (United States)

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods.
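
    To make the temporal-saliency step concrete, the sketch below (not the authors' code) applies frame differencing followed by Sauvola local adaptive thresholding to two synthetic frames containing a moving square; the spatial-saliency, SLIC superpixel, parallelisation, and online-learning stages are omitted, and real frames would additionally contain noise and illumination changes, which is what motivates a local adaptive threshold.

    ```python
    import numpy as np
    from skimage.filters import threshold_sauvola

    # Two synthetic grayscale frames with a bright square moving to the right
    prev_frame = np.zeros((120, 160)); prev_frame[40:60, 50:70] = 1.0
    curr_frame = np.zeros((120, 160)); curr_frame[40:60, 58:78] = 1.0

    diff = np.abs(curr_frame - prev_frame)                  # frame difference
    mask = diff > threshold_sauvola(diff, window_size=25)   # local adaptive threshold
    print("changed pixels detected:", int(mask.sum()))      # the two strips uncovered by the motion
    ```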

  16. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  17. An Interactive Approach to Learning and Teaching in Visual Arts Education

    Science.gov (United States)

    Tomljenovic, Zlata

    2015-01-01

    The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative…

  18. Learning Visual Basic NET

    CERN Document Server

    Liberty, Jesse

    2009-01-01

    Learning Visual Basic .NET is a complete introduction to VB.NET and object-oriented programming. By using hundreds of examples, this book demonstrates how to develop various kinds of applications--including those that work with databases--and web services. Learning Visual Basic .NET will help you build a solid foundation in .NET.

  19. Implicit visual learning and the expression of learning.

    Science.gov (United States)

    Haider, Hilde; Eberhardt, Katharina; Kunde, Alexander; Rose, Michael

    2013-03-01

    Although the existence of implicit motor learning is now widely accepted, the findings concerning perceptual implicit learning are ambiguous. Some researchers have observed perceptual learning whereas other authors have not. The review of the literature provides different reasons to explain this ambiguous picture, such as differences in the underlying learning processes, selective attention, or differences in the difficulty to express this knowledge. In three experiments, we investigated implicit visual learning within the original serial reaction time task. We used different response devices (keyboard vs. mouse) in order to manipulate selective attention towards response dimensions. Results showed that visual and motor sequence learning differed in terms of RT-benefits, but not in terms of the amount of knowledge assessed after training. Furthermore, visual sequence learning was modulated by selective attention. However, the findings of all three experiments suggest that selective attention did not alter implicit but rather explicit learning processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  1. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision also were divided in three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, amount of small errors, and amount of large errors. Children with visual impairment trained during six weeks, two times per week, for 30 minutes (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart, P …). Children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).

  2. Learning of Grammar-Like Visual Sequences by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Aguilar, Jessica M.; Plante, Elena

    2014-01-01

    Purpose: Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. Method: In Study 1,…

  3. Visual Learning in Application of Integration

    Science.gov (United States)

    Bt Shafie, Afza; Barnachea Janier, Josefina; Bt Wan Ahmad, Wan Fatimah

    Innovative use of technology can improve the way Mathematics is taught. It can enhance students' learning of concepts through visualization. Visualization in Mathematics refers to the use of text, pictures, graphs and animations to hold the attention of learners so that they learn the concepts. This paper describes the use of a developed multimedia courseware as an effective tool for visual learning of mathematics. The focus is on the application of integration, which is a topic in Engineering Mathematics 2. The course is offered to foundation students at Universiti Teknologi PETRONAS. A questionnaire was distributed to get feedback on the visual representation and students' attitudes towards using visual representation as a learning tool. The questionnaire consists of 3 sections: courseware design (Part A), courseware usability (Part B) and attitudes towards using the courseware (Part C). The results showed that the use of visual representation benefited students in learning the topic.
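
    As a generic illustration of the kind of visual representation such courseware relies on (not material from the courseware itself), the matplotlib sketch below plots an integrand together with midpoint rectangles so the definite integral can be seen as an area; the function and interval are arbitrary.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    f = lambda x: x * np.exp(-x)       # example integrand
    a, b, n = 0.0, 3.0, 12             # integration interval and number of rectangles

    x = np.linspace(a, b, 400)
    xm = np.linspace(a, b, n, endpoint=False) + (b - a) / (2 * n)   # midpoints

    fig, ax = plt.subplots()
    ax.plot(x, f(x), lw=2)
    ax.bar(xm, f(xm), width=(b - a) / n, alpha=0.3, edgecolor="k")  # midpoint rectangles
    ax.set_title("Midpoint Riemann sum approximating the area under f(x)")
    print("midpoint estimate of the integral:", np.sum(f(xm) * (b - a) / n))
    plt.show()
    ```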

  4. Learning Science Through Visualization

    Science.gov (United States)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments: (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks, (ii) decoding and deconstructing visualizations of an object falling under gravity, and (iii) the role of directed instruction to elicit alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project - quantitative analysis of student performance on written, graded assessments (tests and quizzes); qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  5. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    Science.gov (United States)

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing technologies due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions, and it was extracted based on the frame difference with Sauvola local adaptive thresholding algorithms. The spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features can be used to compute feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. It is a time-consuming process, which prompted the development of a parallel algorithm to optimize and distribute the saliency detection processes that are loaded into the multi-processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm was applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target in appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and is computationally efficient compared to state-of-the-art methods. PMID:29438421

  6. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  7. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
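
    For intuition only, the sketch below combines two base kernels with sample-wise weights in a way that keeps the combined matrix positive semidefinite (each term is a Hadamard product with a rank-one outer product of the weight vector). In PS-MKL the weights and the classifier are learned jointly; here the weights come from an arbitrary gating heuristic and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

    K_lin, K_rbf = linear_kernel(X), rbf_kernel(X, gamma=0.1)     # two base kernels
    beta = 1.0 / (1.0 + np.exp(3.0 - np.linalg.norm(X, axis=1)))  # per-sample weight in (0, 1)
    K = np.outer(beta, beta) * K_lin + np.outer(1 - beta, 1 - beta) * K_rbf
    clf = SVC(kernel="precomputed").fit(K, y)                     # kernel classifier on the mix
    print("training accuracy:", clf.score(K, y))
    ```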

  8. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    Science.gov (United States)

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
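
    For background, correlation-filter trackers in this family start from a closed-form, ridge-regularised solution in the Fourier domain (MOSSE-style); the paper's elastic-net constraint and peak-strength metric sit on top of that base and are not reproduced in the toy sketch below, which uses a random patch and a synthetic Gaussian-shaped desired response.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    patch = rng.random((64, 64))                    # toy grayscale target patch
    yy, xx = np.mgrid[0:64, 0:64]
    g = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 2.0 ** 2))  # desired response, peak at centre

    F, G, lam = np.fft.fft2(patch), np.fft.fft2(g), 1e-2
    H_conj = (G * np.conj(F)) / (F * np.conj(F) + lam)   # ridge solution for the filter, per frequency

    response = np.real(np.fft.ifft2(np.fft.fft2(patch) * H_conj))  # correlate the filter with the patch
    print("response peak at:", np.unravel_index(response.argmax(), response.shape))  # expect (32, 32)
    ```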

  9. Learning style, judgements of learning, and learning of verbal and visual information.

    Science.gov (United States)

    Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger

    2017-08-01

    The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.

  10. Curriculum Q-Learning for Visual Vocabulary Acquisition

    OpenAIRE

    Zaidi, Ahmed H.; Moore, Russell; Briscoe, Ted

    2017-01-01

    The structure of curriculum plays a vital role in our learning process, both as children and adults. Presenting material in ascending order of difficulty that also exploits prior knowledge can have a significant impact on the rate of learning. However, the notion of difficulty and prior knowledge differs from person to person. Motivated by the need for a personalised curriculum, we present a novel method of curriculum learning for vocabulary words in the form of visual prompts. We employ a re...

  11. Robust Visual Knowledge Transfer via Extreme Learning Machine Based Domain Adaptation.

    Science.gov (United States)

    Zhang, Lei; Zhang, David

    2016-08-10

    We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in the target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine based cross-domain network learning framework, called Extreme Learning Machine (ELM) based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by minimizing the norm of the network output weights and the learning error simultaneously. The unlabeled target data, as useful knowledge, is also integrated as a fidelity term to guarantee the stability during cross-domain learning. It minimizes the matching error between the learned classifier and a base classifier, such that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be analytically determined, but are also transferrable. Additionally, a manifold regularization with a Laplacian graph is incorporated, which is beneficial to semi-supervised learning. We also propose an extension to multiple views, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
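
    For context, the base learner here is the standard extreme learning machine: a fixed random hidden layer followed by a closed-form, ridge-regularised solve for the output weights. The sketch below shows only that base on synthetic source-domain data; EDA's category transformation, unlabeled-target fidelity term, and Laplacian manifold regulariser are additional terms in the same kind of least-squares objective and are not shown.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 15))                   # toy source-domain features
    y = (np.sin(X[:, 0]) + X[:, 1] > 0).astype(int)  # synthetic binary labels
    T = np.eye(2)[y]                                 # one-hot targets

    L = 100                                          # number of random hidden neurons
    W, b = rng.normal(size=(15, L)), rng.normal(size=L)
    H = np.tanh(X @ W + b)                           # random-projection hidden layer

    C = 1.0                                          # regularisation on the output weights
    beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)  # closed-form output weights
    pred = (H @ beta).argmax(axis=1)
    print("training accuracy:", (pred == y).mean())
    ```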

  12. Learning Sorting Algorithms through Visualization Construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed…

  13. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    Full Text Available BACKGROUND: It is well known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition, when arbitrary associations between visual and auditory stimuli must be learned. Studies tend to consider this an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn more effectively the arbitrary associations between visual and auditory novel stimuli. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods, which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" learning method. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned the arbitrary associations between visual and auditory novel stimuli more efficiently when the visual stimuli were explored with both vision and touch. The results are discussed with respect to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  14. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Science.gov (United States)

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. Object classification is a challenging problem that has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one of the methods of deep learning, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism that operates when a person classifies objects. Therefore, inspired by the complete process by which humans classify different kinds of objects, this paper puts forward a new classification method that combines a visual attention model and CNN. Firstly, we use the visual attention model to simulate the processing of the human visual selection mechanism. Secondly, we use CNN to simulate how humans select features, and extract the local features of the selected areas. Finally, our classification method not only depends on those local features, but also adds human semantic features to classify objects. Our classification method has apparent advantages from a biological standpoint. Experimental results demonstrated that our method significantly improved classification efficiency.

  15. Learning Visual Representations for Perception-Action Systems

    DEFF Research Database (Denmark)

    Piater, Justus; Jodogne, Sebastien; Detry, Renaud

    2011-01-01

    We discuss vision as a sensory modality for systems that effect actions in response to perceptions. While the internal representations informed by vision may be arbitrarily complex, we argue that in many cases it is advantageous to link them rather directly to action via learned mappings...... These arguments are illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split...... and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a nonparametric representation of grasp success......

  16. Making perceptual learning practical to improve visual functions.

    Science.gov (United States)

    Polat, Uri

    2009-10-01

    Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of ongoing debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was used previously (for amblyopia and myopia) and a novel technique and results that were applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by being able to eliminate the need for reading glasses. Thus, we show that the transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to an improvement in unrelated visual functions. Perceptual learning can thus be a practical method to improve visual functions in people with impaired or blurred vision.

  17. The role of visual representation in physics learning: dynamic versus static visualization

    Science.gov (United States)

    Suyatna, Agus; Anggraini, Dian; Agustina, Dina; Widyastuti, Dini

    2017-11-01

    This study aims to examine the role of visual representation in physics learning and to compare the learning outcomes of using dynamic and static visualization media. The study was conducted using a quasi-experiment with a Pretest-Posttest Control Group Design. The samples of this research were students of six classes at a State Senior High School in Lampung Province. The experimental classes received learning using dynamic visualization and the control classes used static visualization media. Both groups were given a pre-test and a post-test with the same instruments. Data were tested with N-gain analysis, a normality test, a homogeneity test and a mean difference test. The results showed that there was a significant increase in mean (N-gain) learning outcomes (p …) … physical phenomena and requires long-term observation.
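
    As a side note on the analysis mentioned in this record, the N-gain (Hake's normalized gain) compares pre- and post-test scores against the maximum possible improvement; a one-line Python version, assuming a maximum score of 100, is:

    ```python
    def normalized_gain(pre, post, max_score=100.0):
        """Hake's normalized gain: (post - pre) / (max_score - pre)."""
        return (post - pre) / (max_score - pre)

    print(normalized_gain(40, 70))   # 0.5, a "medium" gain by the usual 0.3/0.7 cut-offs
    ```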

  18. Visual recognition and inference using dynamic overcomplete sparse learning.

    Science.gov (United States)

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
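
    To illustrate only the preprocessing idea of an overcomplete sparse code (not the paper's dictionary-learning algorithm or its hierarchical network), the sketch below learns 128 atoms for 64-dimensional toy patches with scikit-learn and encodes each patch with at most five nonzero coefficients.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))      # toy 8x8 image patches, flattened

    dico = DictionaryLearning(
        n_components=128,               # overcomplete: 128 atoms for 64 dimensions
        transform_algorithm="omp",
        transform_n_nonzero_coefs=5,    # sparse codes with at most 5 active atoms
        max_iter=20,
        random_state=0,
    )
    codes = dico.fit_transform(X)
    print("mean nonzeros per patch:", (codes != 0).sum(axis=1).mean())
    ```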

  19. An Interactive Approach to Learning and Teaching in Visual Arts Education

    Directory of Open Access Journals (Sweden)

    Zlata Tomljenović

    2015-09-01

    Full Text Available The present research focuses on modernising the approach to learning and teaching the visual arts in teaching practice, as well as examining the performance of an interactive approach to learning and teaching in visual arts classes with the use of a combination of general and specific (visual arts) teaching methods. The study uses quantitative analysis of data on the basis of results obtained from a pedagogical experiment. The subjects of the research were 285 second- and fourth-grade students from four primary schools in the city of Rijeka, Croatia. Paintings made by the students in the initial and final stage of the pedagogical experiment were evaluated. The research results confirmed the hypotheses about the positive effect of interactive approaches to learning and teaching on the following variables: (1) knowledge and understanding of visual arts terms, (2) abilities and skills in the use of art materials and techniques within the framework of planned painting tasks, and (3) creativity in solving visual arts problems. The research results can help shape an optimised model for the planning and performance of visual arts education, and provide guidelines for planning professional development and the further professional education of teachers, with the aim of establishing more efficient learning and teaching of the visual arts in primary school.

  20. Research on demand-oriented Business English learning method

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2016-01-01

    Full Text Available Business English integrates visual, audio, and oral English, focusing on the application of English listening and speaking skills in common business occasions and on acquiring business knowledge and improving skills through English. This paper analyzes the Business English Visual-audio-oral Course and the learning situation of higher vocational students (their learning objectives, interests, vocabulary, and listening and speaking skills), and focuses on effective methods to guide higher vocational students in learning the Business English Visual-audio-oral Course, mastering Business English knowledge, and improving their communicative competence in Business English.

  1. Analysing the physics learning environment of visually impaired students in high schools

    Science.gov (United States)

    Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry

    2017-07-01

    Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp physics concepts, time and additional materials to support the learning process are key. Time for teachers to develop teaching methods for such students is scarce. Suggestions are given for changes to the learning environment and to the materials used.

  2. Formation of 17-18 yrs age girl students’ visual performance by means of visual training at stage of adaptation to learning loads

    Directory of Open Access Journals (Sweden)

    Bondarenko S.V.

    2015-04-01

    Full Text Available Purpose: to substantiate the health-related training influence of basketball and volleyball elements on the functional state of first-year students' visual analyzers in the period of adaptation to learning loads with a pronounced visual component. Material: 29 students aged 17-18 years without visual pathologies participated in the experiment. Indicators of visual performance were determined with Tagayeva's correction table and processed by Weston's method. Accommodative function was tested by mechanical proximetry. Results: the authors developed and tested two programs of visual training. The influence of visual training on the main components of visual performance (quickness, quality, and integral indicators) was studied, as well as on the eye's accommodative function (via the dynamics of the position of the nearest point of clear vision). Conclusions: applying visual training in physical education classes improves indicators of visual analyzer performance and minimizes the negative influence of intensive learning loads on the eye's accommodative function.

  3. Learning sorting algorithms through visualization construction

    Science.gov (United States)

    Cetin, Ibrahim; Andrews-Larson, Christine

    2016-01-01

    Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and

  4. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  5. Colouring the Gaps in Learning Design: Aesthetics and the Visual in Learning

    Science.gov (United States)

    Carroll, Fiona; Kop, Rita

    2016-01-01

    The visual is a dominant mode of information retrieval and understanding; however, the focus on the visual dimension of Technology Enhanced Learning (TEL) is still quite weak relative to the field's predominant focus on usability. To accommodate the future needs of the visual learner, designers of e-learning environments should advance the current…

  6. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. Color is, however, one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. More specifically, in the training phase, a feature detector is created based on NMF with manifold regularization that takes color information into account, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different degrees of human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
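
    The abstract describes the method only at a high level. As a simplified, hypothetical stand-in for its parts-based decomposition step, the Python sketch below applies plain NMF (scikit-learn) to randomly generated color-patch vectors; it omits the manifold regularization, region selection, and binocular combination described above, and the patch size, component count, and cosine-similarity comparison are arbitrary choices for illustration.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)

    # Hypothetical training data: 500 RGB patches of 8x8 pixels, flattened to
    # non-negative vectors with values in [0, 1].
    patches = rng.random((500, 8 * 8 * 3))

    # Learn a parts-based "feature detector": W holds per-patch encodings and
    # H holds the non-negative basis (the localized color parts).
    model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(patches)   # encodings, shape (500, 16)
    H = model.components_              # basis/parts, shape (16, 192)

    # Encode a new patch with the learned basis and compare it to a training
    # patch using cosine similarity, a simple stand-in for a feature similarity index.
    new_code = model.transform(rng.random((1, 8 * 8 * 3)))
    cos = float(W[0] @ new_code[0] /
                (np.linalg.norm(W[0]) * np.linalg.norm(new_code[0]) + 1e-12))
    print(H.shape, W.shape, round(cos, 3))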

  7. The Rhetoric of Multi-Display Learning Spaces: exploratory experiences in visual art disciplines

    Directory of Open Access Journals (Sweden)

    Brett Bligh

    2010-11-01

    Full Text Available Multi-Display Learning Spaces (MD-LS) comprise technologies to allow the viewing of multiple simultaneous visual materials, modes of learning which encourage critical reflection upon these materials, and spatial configurations which afford interaction between learners and the materials in orchestrated ways. In this paper we provide an argument for the benefits of Multi-Display Learning Spaces in supporting complex, disciplinary reasoning within learning, focussing upon our experiences within postgraduate visual arts education. The importance of considering the affordances of the physical environment within education has been acknowledged by the recent attention given to Learning Spaces, yet within visual art disciplines the perception of visual material within a given space has long been seen as a key methodological consideration with implications for the identity of the discipline itself. We analyse the methodological, technological and spatial affordances of MD-LS to support learning, and discuss comparative viewing as a disciplinary method to structure visual analysis within the space which benefits from the simultaneous display of multiple partitions of visual evidence. We offer an analysis of the role of the teacher in authoring and orchestration, and conclude by proposing a more general structure for what we term ‘multiple perspective learning’, in which the presentation of multiple pieces of visual evidence creates the conditions for complex argumentation within Higher Education.

  8. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we applied transcranial magnetic stimulation over both visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to the pIPS and sham control conditions. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Deep learning for visual understanding

    NARCIS (Netherlands)

    Guo, Y.

    2017-01-01

    With the dramatic growth of image data on the web, there is an increasing demand for algorithms capable of understanding visual information automatically. Deep learning, one of the most significant breakthroughs, has brought revolutionary success in diverse visual applications,…

  10. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    Science.gov (United States)

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a 2nd sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  11. Three visual techniques to enhance interprofessional learning.

    Science.gov (United States)

    Parsell, G; Gibbs, T; Bligh, J

    1998-07-01

    Many changes in the delivery of healthcare in the UK have highlighted the need for healthcare professionals to learn to work together as teams for the benefit of patients. Whatever the profession or level, whether for postgraduate education and training, continuing professional development, or for undergraduates, learners should have an opportunity to learn about, and with, other healthcare practitioners in a stimulating and exciting way. Learning to understand how people think, feel, and react, and the parts they play at work, both as professionals and individuals, can only be achieved through sensitive discussion and exchange of views. Teaching and learning methods must provide opportunities for this to happen. This paper describes three small-group teaching techniques which encourage a high level of learner collaboration and team-working. Learning content is focused on real-life healthcare issues and strong visual images are used to stimulate lively discussion and debate. Each description includes the learning objectives of the exercise, basic equipment and resources, and learning outcomes.

  12. Visual and Verbal Learning in a Genetic Metabolic Disorder

    Science.gov (United States)

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  13. Digital media Experiences for Visual Learning

    DEFF Research Database (Denmark)

    Buhl, Mie

    2013-01-01

    Visual learning is a topic for didactic studies at all levels of education, brought about by an increasing use of digital media. Digital media give rise to discussions of how learning experiences come about from various media resources that generate new learning situations. New situations call for new tools and new theoretical approaches with which to understand them. The article argues that the current phase of social practices and technological development makes it difficult to distinguish between experience with digital media and mediated experiences, because of the use of renegotiation and … brought about by the nature of diverse digital artefacts, 3. the learning potentials in using mobile devices for integrating the body in visual perception processes.

  14. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking both quantitatively and qualitatively.
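
    OMRNDL itself is online, multi-modal, and robust via M-estimation under a particle filter; none of that is reproduced here. The Python sketch below only illustrates the basic building block named in the abstract, non-negative factorization of appearance vectors into a dictionary and coefficients with multiplicative update rules that preserve non-negativity, in a batch, single-modality form on made-up data.

    import numpy as np

    def nn_dictionary_learning(X, n_atoms=20, n_iter=200, eps=1e-9, seed=0):
        """Batch non-negative dictionary learning: X ~= D @ H with D, H >= 0.

        A single-modality, batch simplification; not the online, multi-modal,
        robust formulation described in the abstract.
        """
        rng = np.random.default_rng(seed)
        n_features, n_samples = X.shape
        D = rng.random((n_features, n_atoms))   # dictionary (templates)
        H = rng.random((n_atoms, n_samples))    # coefficients (representations)
        for _ in range(n_iter):
            # Multiplicative updates keep both factors non-negative and
            # reduce the squared reconstruction error ||X - D @ H||^2.
            H *= (D.T @ X) / (D.T @ D @ H + eps)
            D *= (X @ H.T) / (D @ H @ H.T + eps)
            # Rescale so atoms have unit norm without changing the product D @ H.
            scale = np.linalg.norm(D, axis=0, keepdims=True) + eps
            D /= scale
            H *= scale.T
        return D, H

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Hypothetical data: 100 non-negative appearance vectors of length 64
        # (e.g., flattened intensity patches), one column per sample.
        X = rng.random((64, 100))
        D, H = nn_dictionary_learning(X)
        print("relative reconstruction error:",
              np.linalg.norm(X - D @ H) / np.linalg.norm(X))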

  15. Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-12-01

    Automated classification of tissue types of Regions of Interest (ROI) in medical images has been an important application in Computer-Aided Diagnosis (CAD). Recently, bag-of-feature methods, which treat each ROI as a set of local features, have shown their power in this field. Two important issues of the bag-of-feature strategy for tissue classification are investigated in this paper: visual vocabulary learning and weighting, which are always considered independently in traditional methods, neglecting the inner relationship between the visual words and their weights. To overcome this problem, we develop a novel algorithm, Joint-ViVo, which learns the vocabulary and the visual word weights jointly. A unified objective function based on large margin is defined for learning both the visual vocabulary and the visual word weights, and is optimized alternately in an iterative algorithm. We test our algorithm on three tissue classification tasks: classifying breast tissue density in mammograms, classifying lung tissue in High-Resolution Computed Tomography (HRCT) images, and identifying brain tissue type in Magnetic Resonance Imaging (MRI). The results show that Joint-ViVo outperforms state-of-the-art methods on tissue classification problems. © 2013 Elsevier Ltd.
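
    Joint-ViVo's joint large-margin optimization is not shown in the record. For context, the Python sketch below implements the conventional decoupled bag-of-feature baseline that the paper argues against: a k-means visual vocabulary, histogram encoding of each ROI, and a separately trained linear classifier whose coefficients play the role of word weights. The descriptors and labels are synthetic and all parameter values are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    # Hypothetical data: each ROI is a set of 30 local descriptors of length 16.
    def fake_roi(label):
        shift = 0.5 * label                    # make the two classes separable
        return rng.normal(shift, 1.0, size=(30, 16))

    rois = [fake_roi(label) for label in (0, 1) for _ in range(40)]
    labels = np.array([0] * 40 + [1] * 40)

    # 1) Learn a visual vocabulary by clustering all local descriptors.
    vocab = KMeans(n_clusters=32, n_init=10, random_state=0)
    vocab.fit(np.vstack(rois))

    # 2) Encode each ROI as a normalized histogram of visual-word counts.
    def encode(roi):
        words = vocab.predict(roi)
        hist = np.bincount(words, minlength=32).astype(float)
        return hist / hist.sum()

    X = np.array([encode(roi) for roi in rois])

    # 3) Train a linear classifier; in this decoupled baseline the word
    #    "weights" are simply the classifier coefficients, learned after
    #    (not jointly with) the vocabulary.
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))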

  16. Age-related declines of stability in visual perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Shibata, Kazuhisa; Andersen, George J; Sasaki, Yuka; Watanabe, Takeo

    2014-12-15

    One of the biggest questions in learning is how a system can resolve the plasticity and stability dilemma. Specifically, the learning system needs to have not only a high capability of learning new items (plasticity) but also a high stability to retain important items or processing in the system by preventing unimportant or irrelevant information from being learned. This dilemma should hold true for visual perceptual learning (VPL), which is defined as a long-term increase in performance on a visual task as a result of visual experience. Although it is well known that aging influences learning, the effect of aging on the stability and plasticity of the visual system is unclear. To address the question, we asked older and younger adults to perform a task while a task-irrelevant feature was merely exposed. We found that older individuals learned the task-irrelevant features that younger individuals did not learn, both the features that were sufficiently strong for younger individuals to suppress and the features that were too weak for younger individuals to learn. At the same time, there was no plasticity reduction in older individuals within the task tested. These results suggest that the older visual system is less stable to unimportant information than the younger visual system. A learning problem with older individuals may be due to a decrease in stability rather than a decrease in plasticity, at least in VPL. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Learning from Balance Sheet Visualization

    Science.gov (United States)

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  18. Studying Visual Displays: How to Instructionally Support Learning

    Science.gov (United States)

    Renkl, Alexander; Scheiter, Katharina

    2017-01-01

    Visual displays are very frequently used in learning materials. Although visual displays have great potential to foster learning, they also pose substantial demands on learners so that the actual learning outcomes are often disappointing. In this article, we pursue three main goals. First, we identify the main difficulties that learners have when…

  19. iSee: Teaching Visual Learning in an Organic Virtual Learning Environment

    Science.gov (United States)

    Han, Hsiao-Cheng

    2017-01-01

    This paper presents a three-year participatory action research project focusing on the graduate level course entitled Visual Learning in 3D Animated Virtual Worlds. The purpose of this research was to understand "How the virtual world processes of observing and creating can best help students learn visual theories". The first cycle of…

  20. Using Technology to Support Visual Learning Strategies

    Science.gov (United States)

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  1. Research on demand-oriented Business English learning method

    OpenAIRE

    Zhou Yuan

    2016-01-01

    Business English integrates visual, audio, and oral English, focusing on the application of English listening and speaking skills in common business situations, so that learners acquire business knowledge and improve their skills through English. This paper analyzes the Business English visual-audio-oral course and the learning situation of higher vocational students (their learning objectives, interests, vocabulary, and listening and speaking skills), and focuses on research into effective methods to guide the higher voca...

  2. Motor sequence learning occurs despite disrupted visual and proprioceptive feedback

    Directory of Open Access Journals (Sweden)

    Boyd Lara A

    2008-07-01

    Full Text Available Abstract Background Recent work has demonstrated the importance of proprioception for the development of internal representations of the forces encountered during a task. Evidence also exists for a significant role for proprioception in the execution of sequential movements. However, little work has explored the role of proprioceptive sensation during the learning of continuous movement sequences. Here, we report that the repeated segment of a continuous tracking task can be learned despite peripherally altered arm proprioception and severely restricted visual feedback regarding motor output. Methods Healthy adults practiced a continuous tracking task over 2 days. Half of the participants experienced vibration that altered proprioception of shoulder flexion/extension of the active tracking arm (experimental condition) and half experienced vibration of the passive resting arm (control condition). Visual feedback was restricted for all participants. Retention testing was conducted on a separate day to assess motor learning. Results Regardless of vibration condition, participants learned the repeated segment, as demonstrated by significant improvements in accuracy for tracking repeated as compared to random continuous movement sequences. Conclusion These results suggest that with practice, participants were able to use residual afferent information to overcome the initial interference with tracking ability related to altered proprioception and restricted visual feedback, and thereby learn a continuous motor sequence. Motor learning occurred despite an initial interference with tracking noted during acquisition practice.

  3. Smart-system of distance learning of visually impaired people based on approaches of artificial intelligence

    Science.gov (United States)

    Samigulina, Galina A.; Shayakhmetova, Assem S.

    2016-11-01

    The research objective is the creation of innovative intelligent technology and an information Smart-system of distance learning for visually impaired people. Organizing an accessible environment in which visually impaired people can receive a quality education, and supporting their social adaptation in society, are important and topical issues of modern education. The proposed Smart-system of distance learning for visually impaired people can significantly improve the efficiency and quality of education for this category of people. The scientific novelty of the proposed Smart-system lies in its use of intelligent and statistical methods for processing multi-dimensional data, taking into account the psycho-physiological characteristics of how visually impaired people perceive and assimilate learning information.

  4. Cognitive Strategies for Learning from Static and Dynamic Visuals.

    Science.gov (United States)

    Lewalter, D.

    2003-01-01

    Studied the effects of including static or dynamic visuals in an expository text on a learning outcome and the use of learning strategies when working with these visuals. Results for 60 undergraduates for both types of illustration indicate different frequencies in the use of learning strategies relevant for the learning outcome. (SLD)

  5. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Visual Learning: A Learner Centered Approach to Enhance English Language Teaching

    Science.gov (United States)

    Philominraj, Andrew; Jeyabalan, David; Vidal-Silva, Christian

    2017-01-01

    This article presents an empirical study carried out among the students of higher secondary schools to find out how English language learning occurs naturally in an environment where learners are encouraged by an appropriate method such as visual learning. The primary data was collected from 504 students with different pretested questionnaires. A…

  7. Learning QlikView data visualization

    CERN Document Server

    Pover, Karl

    2013-01-01

    A practical and fast-paced guide that gives you all the information you need to start developing charts from your data. Learning QlikView Data Visualization is for anybody interested in performing powerful data analysis and crafting insightful data visualizations, independent of any previous knowledge of QlikView. Experience with spreadsheet software will help you understand QlikView functions.

  8. Investigating Verbal and Visual Auditory Learning After Conformal Radiation Therapy for Childhood Ependymoma

    International Nuclear Information System (INIS)

    Di Pinto, Marcos; Conklin, Heather M.; Li Chenghong; Xiong Xiaoping; Merchant, Thomas E.

    2010-01-01

    Purpose: The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Methods and Materials: Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. Results: There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. Conclusion: There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects.

  9. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger sighted adults, older (50-70 years) normally sighted adults, and low-vision participants (people with heterogeneous forms of visual impairment, ranging in age from 18 to 67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  10. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects-A Pilot Study.

    Science.gov (United States)

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning have demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve visual learning and that cathodal tDCS would have minor or no effects. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the 4-day stimulation period. Compared with sham tDCS, anodal tDCS led to a significant improvement of visual discrimination learning. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.

  11. Learning without knowing: subliminal visual feedback facilitates ballistic motor learning

    DEFF Research Database (Denmark)

    Lundbye-Jensen, Jesper; Leukel, Christian; Nielsen, Jens Bo

    It is a well-described phenomenon that we may respond to features of our surroundings without being aware of them. It is also a well-known principle that learning is reinforced by augmented feedback on motor performance. In the present experiment we hypothesized that motor learning may be facilitated by subconscious (subliminal) augmented visual feedback on motor performance. To test this, 45 subjects participated in the experiment, which involved learning of a ballistic task. The task was to execute simple ankle plantar flexion movements as quickly as possible within 200 ms and to continuously improve… The results showed that subliminal visual feedback, although not consciously perceived by the learner, indeed facilitated ballistic motor learning. This effect likely relates to multiple (conscious versus unconscious) processing of visual feedback and to the specific neural circuitries involved in optimization of ballistic motor performance.

  12. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  13. [Which learning methods are expected for ultrasound training? Blended learning on trial].

    Science.gov (United States)

    Röhrig, S; Hempel, D; Stenger, T; Armbruster, W; Seibel, A; Walcher, F; Breitkreutz, R

    2014-10-01

    Current teaching methods in graduate and postgraduate training often include frontal presentations. Especially in ultrasound education, not only knowledge but also sensorimotor and visual skills need to be taught. This requires new learning methods. This study examined which types of teaching methods are preferred by participants in ultrasound training courses before, during and after the course, by analyzing a blended learning concept. It also investigated how much time trainees are willing to spend on such activities. A survey was conducted at the end of a certified ultrasound training course. Participants were asked to complete a questionnaire based on a visual analogue scale (VAS) in which three categories were defined: category (1) acceptance with a two-thirds majority (VAS 67-100%), category (2) simple acceptance (50-67%) and category (3) rejection (below 50%). Preferred elements included a learning program with interactive elements, short presentations (less than 20 min) incorporating interaction with the audience, hands-on sessions in small groups, an alternation between presentations and hands-on sessions, live demonstrations and quizzes. For post-course learning, interactive and media-assisted approaches were preferred, such as e-learning, films of the presentations and the possibility to stay in contact with instructors in order to discuss the results. Participants also voted for maintaining a logbook for documentation of results. The results of this study indicate the need for interactive learning concepts and blended learning activities. Directors of ultrasound courses may consider these aspects and are encouraged to develop sustainable learning pathways.

  14. Occupational Therapy Interventions Effect on Visual-Motor Skills in Children with Learning Disorders

    Directory of Open Access Journals (Sweden)

    Batoul Mandani

    2007-07-01

    Full Text Available Objective: Visual-motor skill is a part of visual perception that integrates visual processing skills with fine movements. Visual-motor dysfunction often causes problems in copying and writing. The purpose of this study was to investigate the effect of occupational therapy interventions on visual-motor skills in children with learning disorders. Materials & Methods: In this interventional, experimental study, 23 students with learning disorders (2nd, 3rd and 4th grade) were selected and divided, through a randomized block method, into an intervention group (11 students) and a control group (12 students). Both groups were administered the "Test of Visual-Motor Skills-Revised" (TVMS-R). The intervention group then received occupational therapy interventions for 16 sessions, after which both groups were administered the TVMS-R again. Data were analyzed using paired and independent t-tests. Results: The total TVMS-R score demonstrated a statistically significant difference in visual-motor skills between the intervention and control groups (P<0.001). The test has 8 categories. Scores on categories 1, 3, 4, 6 and 8 demonstrated that occupational therapy had a significant effect on visual analysis skills (P<0.005), and scores on categories 2, 5 and 7 demonstrated a significant effect on visual-spatial skills (P<0.001). Conclusion: Occupational therapy interventions had a significant effect on visual-motor skills and their components (visual-spatial, visual analysis, visual-motor integration and eye fixation skills).

  15. A deep learning / neuroevolution hybrid for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshj

    2017-01-01

    This paper presents a deep learning / neuroevolution hybrid approach called DLNE, which allows FPS bots to learn to aim & shoot based only on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. The results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks trained through evolution.
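
    The DLNE implementation is not included in the record. The Python sketch below is a toy illustration of the division of labour it describes, under strong simplifying assumptions: a frozen random projection stands in for the deep visual-recognition component, and a simple (mu, lambda) evolution strategy searches the weights of a small linear action network against a made-up fitness function rather than an FPS game.

    import numpy as np

    rng = np.random.default_rng(0)

    OBS_DIM, FEAT_DIM, N_ACTIONS = 64 * 64, 32, 4

    # Frozen "perception" stage: a random projection standing in for a
    # pretrained convolutional network that compresses raw pixels to features.
    W_features = rng.normal(0, 1.0 / np.sqrt(OBS_DIM), size=(OBS_DIM, FEAT_DIM))
    def extract_features(pixels):
        return np.tanh(pixels @ W_features)

    # Evolved "action" stage: a linear policy over the compact features.
    def act(theta, features):
        return int(np.argmax(features @ theta.reshape(FEAT_DIM, N_ACTIONS)))

    # Toy fitness: reward agreement with a hidden target policy on random frames.
    theta_target = rng.normal(size=FEAT_DIM * N_ACTIONS)
    frames = rng.random((200, OBS_DIM))
    feats = extract_features(frames)
    targets = np.array([act(theta_target, f) for f in feats])
    def fitness(theta):
        return float(np.mean([act(theta, f) == t for f, t in zip(feats, targets)]))

    # Simple (mu, lambda) evolution strategy over the policy weights.
    mu, lam, sigma = 8, 32, 0.1
    parents = [rng.normal(size=FEAT_DIM * N_ACTIONS) for _ in range(mu)]
    for gen in range(30):
        offspring = [p + sigma * rng.normal(size=p.shape)
                     for p in parents for _ in range(lam // mu)]
        offspring.sort(key=fitness, reverse=True)   # keep the fittest mutants
        parents = offspring[:mu]
        if gen % 10 == 0:
            print(f"gen {gen:2d}  best fitness {fitness(parents[0]):.2f}")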

  16. Changing viewer perspectives reveals constraints to implicit visual statistical learning.

    Science.gov (United States)

    Jiang, Yuhong V; Swallow, Khena M

    2014-10-07

    Statistical learning-learning environmental regularities to guide behavior-likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probable regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.

  17. Visual variability affects early verb learning.

    Science.gov (United States)

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  18. A real-time articulatory visual feedback approach with target presentation for second language pronunciation learning.

    Science.gov (United States)

    Suemitsu, Atsuo; Dang, Jianwu; Ito, Takayuki; Tiede, Mark

    2015-10-01

    Articulatory information can support learning or remediating pronunciation of a second language (L2). This paper describes an electromagnetic articulometer-based visual-feedback approach using an articulatory target presented in real-time to facilitate L2 pronunciation learning. This approach trains learners to adjust articulatory positions to match targets for a L2 vowel estimated from productions of vowels that overlap in both L1 and L2. Training of Japanese learners for the American English vowel /æ/ that included visual training improved its pronunciation regardless of whether audio training was also included. Articulatory visual feedback is shown to be an effective method for facilitating L2 pronunciation learning.

  19. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  20. The relevance of visual information on learning sounds in infancy

    NARCIS (Netherlands)

    ter Schure, S.M.M.

    2016-01-01

    Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This…

  1. Public Computer Assisted Learning Facilities for Children with Visual Impairment: Universal Design for Inclusive Learning

    Science.gov (United States)

    Siu, Kin Wai Michael; Lam, Mei Seung

    2012-01-01

    Although computer assisted learning (CAL) is becoming increasingly popular, people with visual impairment face greater difficulty in accessing computer-assisted learning facilities. This is primarily because most of the current CAL facilities are not visually impaired friendly. People with visual impairment also do not normally have access to…

  2. Tiger salamanders' (Ambystoma tigrinum) response learning and usage of visual cues.

    Science.gov (United States)

    Kundey, Shannon M A; Millar, Roberto; McPherson, Justin; Gonzalez, Maya; Fitz, Aleyna; Allen, Chadbourne

    2016-05-01

    We explored tiger salamanders' (Ambystoma tigrinum) learning to execute a response within a maze as proximal visual cue conditions varied. In Experiment 1, salamanders learned to turn consistently in a T-maze for reinforcement before the maze was rotated. All learned the initial task and executed the trained turn during test, suggesting that they learned to demonstrate the reinforced response during training and continued to perform it during test. In a second experiment utilizing a similar procedure, two visual cues were placed consistently at the maze junction. Salamanders were reinforced for turning towards one cue. Cue placement was reversed during test. All learned the initial task, but executed the trained turn rather than turning towards the visual cue during test, evidencing response learning. In Experiment 3, we investigated whether a compound visual cue could control salamanders' behaviour when it was the only cue predictive of reinforcement in a cross-maze by varying start position and cue placement. All learned to turn in the direction indicated by the compound visual cue, indicating that visual cues can come to control their behaviour. Following training, testing revealed that salamanders attended to stimuli foreground over background features. Overall, these results suggest that salamanders learn to execute responses over learning to use visual cues but can use visual cues if required. Our success with this paradigm offers the potential in future studies to explore salamanders' cognition further, as well as to shed light on how features of the tiger salamanders' life history (e.g. hibernation and metamorphosis) impact cognition.

  3. Discovery learning model with GeoGebra assistance for improving mathematical visual thinking ability

    Science.gov (United States)

    Juandi, D.; Priatna, N.

    2018-05-01

    The main goal of this study was to improve high school students' mathematical visual thinking ability through implementation of the Discovery Learning Model with GeoGebra assistance. This objective was pursued through a quasi-experimental study with a non-random pretest-posttest control design. The sample consisted of 62 grade XI senior high school students from one school in Bandung district. Data were collected through documentation, observation, written tests, interviews, daily journals, and student worksheets. The results of this study are: 1) the improvement in mathematical visual thinking ability of students who received learning with the GeoGebra-assisted Discovery Learning Model is significantly higher than that of students who received conventional learning; 2) there is a difference in the improvement of students' mathematical visual thinking ability between treatment groups based on prior mathematical knowledge (high, medium, and low); 3) the improvement of the high group is significantly higher than that of the medium and low groups; 4) the quality of improvement for students with high and low prior knowledge is in the moderate category, while the quality of improvement in the high category is achieved by students with medium prior knowledge.

  4. [Associative Learning between Orientation and Color in Early Visual Areas].

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2017-08-01

    Associative learning is an essential neural phenomenon in which the contingency between different items increases after training. Although associative learning has been found to occur in many brain regions, there is no clear evidence that associative learning of visual features occurs in early visual areas. Here, we developed an associative decoded functional magnetic resonance imaging (fMRI) neurofeedback method (A-DecNef) to determine whether associative learning of color and orientation can be induced in early visual areas. During the three days of training, A-DecNef induced fMRI signal patterns that corresponded to a specific target color (red), mostly in early visual areas, while a vertical achromatic grating was simultaneously physically presented to participants. Consequently, participants perceived "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after training. These results suggest that long-term associative learning of two different visual features, such as color and orientation, was induced most likely in early visual areas. This newly extended technique for inducing associative learning may be used as an important tool for understanding and modifying brain function, since associations are fundamental and ubiquitous with respect to brain function.

  5. Analysis and Visualization of Relations in eLearning

    Science.gov (United States)

    Dráždilová, Pavla; Obadi, Gamila; Slaninová, Kateřina; Martinovič, Jan; Snášel, Václav

    The popularity of eLearning systems is growing rapidly; this growth is enabled by the ongoing development of Internet and multimedia technologies. Web-based education has become widespread in the past few years, and various types of learning management systems facilitate the development of Web-based courses. Users of these courses form social networks through the different activities they perform. This chapter focuses on searching for the latent social networks in eLearning systems data. These data consist of students' activity records in which latent ties among actors are embedded. The social network studied in this chapter is represented by groups of students who have similar contacts and interact in similar social circles. Different methods of data clustering analysis can be applied to these groups, and the findings show the existence of latent ties among the group members. The second part of this chapter focuses on social network visualization. Graphical representation of a social network can describe its structure very efficiently; it can enable social network analysts to determine the network's degree of connectivity. Analysts can easily identify individuals with a small or large number of relationships, as well as the number of independent groups in a given network. When applied to the field of eLearning, data visualization simplifies the monitoring of the study activities of individuals or groups, as well as the planning of educational curricula, the evaluation of study processes, etc.
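
    The chapter does not specify tooling. As one possible illustration, the Python sketch below builds a weighted co-activity graph from a hypothetical eLearning activity log (which students took part in which activities), reports each student's degree of connectivity, and draws the network with the networkx and matplotlib libraries; the log contents and layout choices are invented for the example.

    from itertools import combinations
    from collections import Counter
    import networkx as nx
    import matplotlib.pyplot as plt

    # Hypothetical activity log: activity id -> students who took part in it.
    activity_log = {
        "forum_thread_1": ["ana", "ben", "eva"],
        "quiz_2":         ["ana", "ben"],
        "group_task_3":   ["eva", "tom", "ben"],
        "chat_session_4": ["tom", "lea"],
    }

    # Count how many activities each pair of students shares.
    pair_counts = Counter()
    for students in activity_log.values():
        for a, b in combinations(sorted(set(students)), 2):
            pair_counts[(a, b)] += 1

    # Build a weighted co-activity graph (the "latent" network).
    G = nx.Graph()
    for (a, b), w in pair_counts.items():
        G.add_edge(a, b, weight=w)

    print("degree of connectivity:", dict(G.degree()))

    # Visualize: spring layout, edge width proportional to shared activities.
    pos = nx.spring_layout(G, seed=42)
    nx.draw_networkx_nodes(G, pos, node_color="lightblue")
    nx.draw_networkx_labels(G, pos)
    nx.draw_networkx_edges(G, pos,
                           width=[2 * G[u][v]["weight"] for u, v in G.edges()])
    plt.axis("off")
    plt.savefig("coactivity_network.png")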

  6. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: identify types of and characterize non-spatial and spatial data; demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results; construct testable hypotheses that require inferential statistical analysis; process spatial data, extract explanatory variables, conduct statisti...

  7. Visual-Motor Learning Using Haptic Devices: How Best to Train Surgeons?

    Directory of Open Access Journals (Sweden)

    Oscar Giles

    2012-05-01

    Full Text Available Laparoscopic surgery has revolutionised medicine but requires surgeons to learn new visual-motor mappings. The optimal method for training surgeons is unknown. For instance, it may be easier to learn planar movements when training is constrained to a plane, since this forces the surgeon to develop an appropriate perceptual-motor map. In contrast, allowing the surgeon to move without constraints could improve performance because this provides greater experience of the control dynamics of the device. In order to test between these alternatives, we created an experimental tool that connected a commercially available robotic arm with specialised software that presents visual stimuli and objectively records kinematics. Participants were given the task of generating a series of aiming movements to move a visual cursor to a series of targets. The actions required movement along a horizontal plane, whereas the visual display was a screen positioned perpendicular to this plane (i.e., vertically). One group (n=8) received training where the force field constrained their movement to the correct plane of action, whilst a second group (n=8) trained without constraints. On test trials (after training), the unconstrained group showed better performance, as indexed by reduced movement duration and reduced path length. These results show that participants who explored the entire action space had an advantage, which highlights the importance of experiencing the full dynamics of a control device and the action space when learning a new visual-motor mapping.

  8. Audiovisual Association Learning in the Absence of Primary Visual Cortex.

    Science.gov (United States)

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J; de Gelder, Beatrice

    2015-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extrageniculate pathways, and especially the superior colliculus (SC), in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit audiovisual association learning task with two different colors, red and purple (the latter color known to minimally activate the extrageniculate pathway). Interestingly, the patient learned the association between an auditory cue and a visual stimulus only when the unseen visual stimulus was red, but not when it was purple. The current study presents the first evidence showing the possibility of audiovisual association learning in humans with a lesioned striate cortex. Furthermore, in line with animal studies, it supports an important role for the SC in audiovisual associative learning.

  9. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    Science.gov (United States)

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  10. Perceptual learning increases the strength of the earliest signals in visual cortex.

    Science.gov (United States)

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  11. Visual and verbal learning deficits in Veterans with alcohol and substance use disorders.

    Science.gov (United States)

    Bell, Morris D; Vissicchio, Nicholas A; Weinstein, Andrea J

    2016-02-01

    This study examined visual and verbal learning in the early phase of recovery for 48 Veterans with alcohol use (AUD) and substance use disorders (SUD, primarily cocaine and opiate abusers). Previous studies have demonstrated visual and verbal learning deficits in AUD, however little is known about the differences between AUD and SUD on these domains. Since the DSM-5 specifically identifies problems with learning in AUD and not in SUD, and problems with visual and verbal learning have been more prevalent in the literature for AUD than SUD, we predicted that people with AUD would be more impaired on measures of visual and verbal learning than people with SUD. Participants were enrolled in a comprehensive rehabilitation program and were assessed within the first 5 weeks of abstinence. Verbal learning was measured using the Hopkins Verbal Learning Test (HVLT) and visual learning was assessed using the Brief Visuospatial Memory Test (BVMT). Results indicated significantly greater decline in verbal learning on the HVLT across the three learning trials for AUD participants but not for SUD participants (F=4.653, df=48, p=0.036). Visual learning was less impaired than verbal learning across learning trials for both diagnostic groups (F=0.197, df=48, p=0.674); there was no significant difference between groups on visual learning (F=0.401, df=14, p=0.538). Older Veterans in the early phase of recovery from AUD may have difficulty learning new verbal information. Deficits in verbal learning may reduce the effectiveness of verbally-based interventions such as psycho-education. Published by Elsevier Ireland Ltd.

  12. Effects of regular aerobic exercise on visual perceptual learning.

    Science.gov (United States)

    Connell, Charlotte J W; Thompson, Benjamin; Green, Hayden; Sullivan, Rachel K; Gant, Nicholas

    2017-12-02

    This study investigated the influence of five days of moderate intensity aerobic exercise on the acquisition and consolidation of visual perceptual learning using a motion direction discrimination (MDD) task. The timing of exercise relative to learning was manipulated by administering exercise either before or after perceptual training. Within a matched-subjects design, twenty-seven healthy participants (n = 9 per group) completed five consecutive days of perceptual training on a MDD task under one of three interventions: no exercise, exercise before the MDD task, or exercise after the MDD task. MDD task accuracy improved in all groups over the five-day period, but there was a trend for impaired learning when exercise was performed before visual perceptual training. MDD task accuracy (mean ± SD) increased in exercise before by 4.5 ± 6.5%; exercise after by 11.8 ± 6.4%; and no exercise by 11.3 ± 7.2%. All intervention groups displayed similar MDD threshold reductions for the trained and untrained motion axes after training. These findings suggest that moderate daily exercise does not enhance the rate of visual perceptual learning for an MDD task or the transfer of learning to an untrained motion axis. Furthermore, exercise performed immediately prior to a visual perceptual learning task may impair learning. Further research with larger groups is required in order to better understand these effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Handwriting generates variable visual input to facilitate symbol learning

    Science.gov (United States)

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  14. Deep learning with convolutional neural networks for EEG decoding and visualization.

    Science.gov (United States)

    Schirrmeister, Robin Tibor; Springenberg, Jost Tobias; Fiederer, Lukas Dominique Josef; Glasstetter, Martin; Eggensperger, Katharina; Tangermann, Michael; Hutter, Frank; Burgard, Wolfram; Ball, Tonio

    2017-11-01

    Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
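
    A minimal PyTorch sketch of the general idea described here (a temporal convolution, a spatial convolution across electrodes, batch normalization, and exponential linear units) is given below. The layer sizes, channel counts, and input shape are assumptions for illustration; this is not the authors' published architecture or their cropped-training pipeline.

    import torch
    import torch.nn as nn

    class ShallowEEGNet(nn.Module):
        def __init__(self, n_channels=22, n_samples=500, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 40, kernel_size=(1, 25)),           # temporal convolution
                nn.Conv2d(40, 40, kernel_size=(n_channels, 1)),  # spatial filtering across electrodes
                nn.BatchNorm2d(40),
                nn.ELU(),
                nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
                nn.Dropout(0.5),
            )
            with torch.no_grad():  # infer flattened feature size for the classifier
                n_out = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
            self.classifier = nn.Linear(n_out, n_classes)

        def forward(self, x):
            z = self.features(x)
            return self.classifier(z.flatten(start_dim=1))

    model = ShallowEEGNet()
    logits = model(torch.randn(8, 1, 22, 500))  # 8 example trials: (batch, 1, electrodes, time samples)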

  16. Time course influences transfer of visual perceptual learning across spatial location.

    Science.gov (United States)

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Making Visual Arts Learning Visible in a Generalist Elementary School Classroom

    Science.gov (United States)

    Wright, Susan; Watkins, Marnee; Grant, Gina

    2017-01-01

    This article presents the story of one elementary school teacher's shift in art praxis through her involvement in a research project aimed at facilitating participatory arts-based communities of practice. Qualitative methods and social constructivism informed Professional Learning Interventions (PLIs) involving: (1) a visual arts workshop, (2)…

  18. Selective transfer of visual working memory training on Chinese character learning.

    Science.gov (United States)

    Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel

    2014-01-01

    Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and

  19. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
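
    One ingredient described in this record, grouping visually similar classes so that related recognition tasks can be learned together, can be approximated by clustering per-class mean feature vectors. The scikit-learn sketch below does this on random stand-in features; it is an illustrative assumption, not the paper's visual-tree construction.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    n_classes, feat_dim = 100, 512
    class_means = rng.normal(size=(n_classes, feat_dim))   # stand-in for mean deep-CNN features per class

    # assign classes with nearby feature means (i.e., visually similar classes) to the same group
    groups = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(class_means)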

  20. Mobile Visual Search Based on Histogram Matching and Zone Weight Learning

    Science.gov (United States)

    Zhu, Chuang; Tao, Li; Yang, Fan; Lu, Tao; Jia, Huizhu; Xie, Xiaodong

    2018-01-01

    In this paper, we propose a novel image retrieval algorithm for mobile visual search. First, a short visual codebook is generated based on the descriptor database to represent the statistical information of the dataset. Then, an accurate local descriptor similarity score is computed by merging the tf-idf weighted histogram matching and the weighting strategy in compact descriptors for visual search (CDVS). Finally, both the global descriptor matching score and the local descriptor similarity score are summed up to rerank the retrieval results according to the learned zone weights. The results show that the proposed approach outperforms the state-of-the-art image retrieval method in CDVS.
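
    The scoring idea, a tf-idf weighted comparison of bag-of-visual-word histograms combined with a global-descriptor score, can be sketched as below. The weighting form, the intersection kernel, and the mixing parameter alpha are assumptions for illustration and do not reproduce the CDVS weighting scheme or the learned zone weights.

    import numpy as np

    def tfidf_weights(db_histograms):
        # one weight per visual word, down-weighting words that appear in many database images
        n_images = db_histograms.shape[0]
        df = (db_histograms > 0).sum(axis=0)
        return np.log1p(n_images / (1.0 + df))

    def local_score(query_hist, db_hist, idf):
        return np.sum(idf * np.minimum(query_hist, db_hist))   # weighted histogram intersection

    def rerank(query_hist, db_hists, global_scores, alpha=0.5):
        idf = tfidf_weights(db_hists)
        local = np.array([local_score(query_hist, h, idf) for h in db_hists])
        return np.argsort(-(alpha * global_scores + (1 - alpha) * local))

    # toy usage: 3 database images, 5 visual words
    db = np.array([[2., 0., 1., 0., 3.],
                   [0., 1., 0., 4., 0.],
                   [1., 1., 1., 1., 1.]])
    order = rerank(np.array([1., 0., 2., 0., 1.]), db, global_scores=np.array([0.2, 0.9, 0.5]))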

  1. Usage of stereoscopic visualization in the learning contents of rotational motion.

    Science.gov (United States)

    Matsuura, Shu

    2013-01-01

    Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, which maintained the overall effect of instantaneous spatial recognition.

  2. A Method to Train Marmosets in Visual Working Memory Task and Their Performance.

    Science.gov (United States)

    Nakamura, Katsuki; Koba, Reiko; Miwa, Miki; Yamaguchi, Chieko; Suzuki, Hiromi; Takemoto, Atsushi

    2018-01-01

    Learning and memory processes are similarly organized in humans and monkeys; therefore, monkeys can be ideal models for analyzing human aging processes and neurodegenerative diseases such as Alzheimer's disease. With the development of novel gene modification methods, common marmosets (Callithrix jacchus) have been suggested as an animal model for neurodegenerative diseases. Furthermore, the common marmoset's lifespan is relatively short, which makes it a practical animal model for aging. Working memory deficits are a prominent symptom of both dementia and aging, but no data are currently available for visual working memory in common marmosets. The delayed matching-to-sample task is a powerful tool for evaluating visual working memory in humans and monkeys; therefore, we developed a novel procedure for training common marmosets in such a task. Using visual discrimination and reversal tasks to direct the marmosets' attention to the physical properties of visual stimuli, we successfully trained 11 out of 13 marmosets in the initial stage of the delayed matching-to-sample task and provided the first available data on visual working memory in common marmosets. We found that the marmosets required many trials to initially learn the task (median: 1316 trials), but once the task was learned, the animals needed fewer trials to learn the task with novel stimuli (476 trials or fewer, with the exception of one marmoset). The marmosets could retain visual information for up to 16 s. Our novel training procedure could enable us to use the common marmoset as a useful non-human primate model for studying visual working memory deficits in neurodegenerative diseases and aging.

  3. Visual Pretraining for Deep Q-Learning

    OpenAIRE

    Sandven, Torstein

    2016-01-01

    Recent advances in reinforcement learning enable computers to learn human-level policies for Atari 2600 games. This is done by training a convolutional neural network to play based on screenshots and in-game rewards. The network is referred to as a deep Q-network (DQN). The main disadvantage of this approach is a long training time. A computer will typically learn for approximately one week. In this time it processes 38 days of game play. This thesis explores the possibility of using visual pr...
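
    For readers unfamiliar with the deep Q-network mentioned here, the sketch below shows the standard one-step Q-learning target that such a network is trained toward; the tensors and values are made up for illustration and are not taken from the thesis.

    import torch

    gamma = 0.99
    q_next = torch.tensor([[1.2, 0.4, 0.9],
                           [0.1, 0.7, 0.3]])           # target-network Q(s', a) for two example transitions
    rewards = torch.tensor([0.0, 1.0])
    done = torch.tensor([0.0, 1.0])                    # 1.0 when the episode ended at s'

    # target = r + gamma * max_a' Q_target(s', a'), with no bootstrap on terminal states
    targets = rewards + gamma * (1 - done) * q_next.max(dim=1).values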

  4. Comparison of Auditory/Visual and Visual/Motor Practice on the Spelling Accuracy of Learning Disabled Children.

    Science.gov (United States)

    Aleman, Cheryl; And Others

    1990-01-01

    Compares auditory/visual practice to visual/motor practice in spelling with seven elementary school learning-disabled students enrolled in a resource room setting. Finds that the auditory/visual practice was superior to the visual/motor practice on the weekly spelling performance for all seven students. (MG)

  5. Learning and Prediction of Slip from Visual Information

    Science.gov (United States)

    Angelova, Anelia; Matthies, Larry; Helmick, Daniel; Perona, Pietro

    2007-01-01

    This paper presents an approach for slip prediction from a distance for wheeled ground robots using visual information as input. Large amounts of slippage which can occur on certain surfaces, such as sandy slopes, will negatively affect rover mobility. Therefore, obtaining information about slip before entering such terrain can be very useful for better planning and avoiding these areas. To address this problem, terrain appearance and geometry information about map cells are correlated to the slip measured by the rover while traversing each cell. This relationship is learned from previous experience, so slip can be predicted remotely from visual information only. The proposed method consists of terrain type recognition and nonlinear regression modeling. The method has been implemented and tested offline on several off-road terrains including: soil, sand, gravel, and woodchips. The final slip prediction error is about 20%. The system is intended for improved navigation on steep slopes and rough terrain for Mars rovers.
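
    The two-stage structure described in this record (terrain type recognition followed by a terrain-specific nonlinear regression from geometry to slip) can be sketched with scikit-learn as below. The features, models, and synthetic data are assumptions for illustration, not the rover pipeline itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    rng = np.random.default_rng(4)
    X_app = rng.normal(size=(300, 16))             # appearance features per map cell (synthetic)
    terrain = rng.integers(0, 3, size=300)         # 0=soil, 1=sand, 2=gravel
    slope = rng.uniform(0, 20, size=(300, 1))      # terrain geometry (slope angle, degrees)
    slip = 0.02 * slope[:, 0] * (terrain + 1) + rng.normal(0, 0.05, 300)   # synthetic measured slip

    clf = RandomForestClassifier(random_state=0).fit(X_app, terrain)       # stage 1: terrain recognition
    regs = {t: RandomForestRegressor(random_state=0).fit(slope[terrain == t], slip[terrain == t])
            for t in np.unique(terrain)}                                   # stage 2: per-terrain regression

    t_hat = clf.predict(X_app[:5])
    slip_hat = [regs[t].predict(slope[i:i + 1])[0] for i, t in enumerate(t_hat)]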

  6. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  7. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  8. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    Science.gov (United States)

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
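
    The reconstruction of orientation-selective response profiles from multivoxel activity is commonly done with a forward (inverted) encoding model; the sketch below shows that general recipe on synthetic data. The basis functions, dimensions, and noise level are assumptions for illustration and do not reproduce the authors' analysis, which would also use held-out test data.

    import numpy as np

    rng = np.random.default_rng(5)
    n_voxels, n_chans = 100, 9
    centers = np.arange(0, 180, 20)                         # 9 orientation channels (degrees)
    oris = np.repeat(centers, 20)                           # 180 trials, 20 per orientation

    def basis(ori):
        # half-rectified cosine tuning, 180-degree periodic, raised to a power
        return np.maximum(0.0, np.cos(2 * np.deg2rad(ori - centers))) ** 5

    C = np.array([basis(o) for o in oris])                  # trials x channels (design matrix)
    W = rng.normal(size=(n_chans, n_voxels))                # true channel-to-voxel weights (simulated)
    B = C @ W + rng.normal(scale=0.5, size=(len(oris), n_voxels))   # simulated voxel patterns

    W_hat, *_ = np.linalg.lstsq(C, B, rcond=None)           # "train": estimate channel-to-voxel weights
    C_hat = B @ np.linalg.pinv(W_hat)                       # "test": invert to channel response profiles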

  9. Handwriting generates variable visual output to facilitate symbol learning.

    Science.gov (United States)

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Results of a study assessing teaching methods of faculty after measuring student learning style preference.

    Science.gov (United States)

    Stirling, Bridget V

    2017-08-01

    Learning style preference impacts how well groups of students respond to their curricula. Faculty have many choices in the methods for delivering nursing content, as well as assessing students. The purpose was to develop knowledge around how faculty delivered curriculum content, and then to consider these findings in the context of the students' learning style preferences. Following an in-service on teaching and learning styles, faculty completed surveys on their methods of teaching and the proportion of time teaching using each learning style (visual, aural, read/write and kinesthetic). This study took place at the College of Nursing of a large all-female university in Saudi Arabia. Twenty-four female nursing faculty volunteered to participate in the project. A cross-sectional design was used. Faculty reported teaching using mostly methods that were kinesthetic and visual, although lecture was also popular (aural). Students preferred kinesthetic and aural learning methods. Read/write was the least preferred by students and the least used method of teaching by faculty. Faculty used visual methods about one third of the time, although they were not preferred by the students. Students' preferred learning style (kinesthetic) was the method most used by faculty. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Visual and olfactory associative learning in the malaria vector Anopheles gambiae sensu stricto

    Directory of Open Access Journals (Sweden)

    Chilaka Nora

    2012-01-01

    Full Text Available Abstract Background Memory and learning are critical aspects of the ecology of insect vectors of human pathogens because of their potential effects on contacts between vectors and their hosts. Despite this epidemiological importance, there have been only a limited number of studies investigating associative learning in insect vector species and none on Anopheline mosquitoes. Methods A simple behavioural assay was developed to study visual and olfactory associative learning in Anopheles gambiae, the main vector of malaria in Africa. Two contrasting membrane qualities or levels of blood palatability were used as reinforcing stimuli for bi-directional conditioning during blood feeding. Results Under such experimental conditions An. gambiae females learned very rapidly to associate visual (chequered and white patterns) and olfactory cues (presence and absence of cheese or Citronella smell) with the reinforcing stimuli (bloodmeal quality) and remembered the association for up to three days. Associative learning significantly increased with the strength of the conditioning stimuli used. Importantly, learning sometimes occurred faster when a positive reinforcing stimulus (palatable blood) was associated with an innately preferred cue (such as a darker visual pattern). However, the use of too attractive a cue (e.g. Shropshire cheese smell) was counter-productive and decreased learning success. Conclusions The results address an important knowledge gap in mosquito ecology and emphasize the role of associative memory for An. gambiae's host finding and blood-feeding behaviour with important potential implications for vector control.

  12. Spatial Visualization Learning in Engineering: Traditional Methods vs. a Web-Based Tool

    Science.gov (United States)

    Pedrosa, Carlos Melgosa; Barbero, Basilio Ramos; Miguel, Arturo Román

    2014-01-01

    This study compares ILMAGE_SV, an interactive learning manager for graphic engineering designed to develop spatial vision, with traditional methods. ILMAGE_SV is an asynchronous web-based learning tool that allows the manipulation of objects with a 3D viewer, self-evaluation, and continuous assessment. In addition, student learning may be monitored, which…

  13. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited from a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory gained via prior visual experience in LB-HVM enhanced learning of sound localization. Active visuospatial navigation processes could have occurred in LB-HVM compared to the retrieval of previously bound information from long-term memory for EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  14. Dynamic functional brain networks involved in simple visual discrimination learning.

    Science.gov (United States)

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was involved throughout the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Dictionary learning in visual computing

    CERN Document Server

    Zhang, Qiang

    2015-01-01

    The last few years have witnessed fast development on dictionary learning approaches for a set of visual computing tasks, largely due to their utilization in developing new techniques based on sparse representation. Compared with conventional techniques employing manually defined dictionaries, such as Fourier Transform and Wavelet Transform, dictionary learning aims at obtaining a dictionary adaptively from the data so as to support optimal sparse representation of the data. In contrast to conventional clustering algorithms like K-means, where a data point is associated with only one cluster c
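
    To make the contrast with K-means concrete, the short scikit-learn sketch below fits both a K-means codebook and a learned dictionary with sparse codes on the same random patches; the data, sizes, and sparsity level are assumptions for illustration.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import MiniBatchDictionaryLearning

    X = np.random.default_rng(6).normal(size=(200, 64))    # e.g. 200 flattened 8x8 image patches (synthetic)

    # K-means: each patch is assigned to exactly one codebook entry
    kmeans_labels = KMeans(n_clusters=16, n_init=10, random_state=0).fit_predict(X)

    # dictionary learning: each patch is a sparse combination of several learned atoms
    dico = MiniBatchDictionaryLearning(n_components=16, transform_algorithm="omp",
                                       transform_n_nonzero_coefs=3, random_state=0).fit(X)
    codes = dico.transform(X)   # sparse codes with up to 3 nonzero atoms per patch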

  16. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    Science.gov (United States)

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  17. Visual Thinking Strategies: Using Art to Deepen Learning across School Disciplines

    Science.gov (United States)

    Yenawine, Philip

    2013-01-01

    "What's going on in this picture?" With this one question and a carefully chosen work of art, teachers can start their students down a path toward deeper learning and other skills now encouraged by the Common Core State Standards. The Visual Thinking Strategies (VTS) teaching method has been successfully implemented in schools,…

  18. The effect of learning on the function of monkey extrastriate visual cortex.

    Directory of Open Access Journals (Sweden)

    Gregor Rainer

    2004-02-01

    Full Text Available One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.

  19. No evidence for visual context-dependency of olfactory learning in Drosophila

    Science.gov (United States)

    Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram

    2008-08-01

    How is behaviour organised across sensory modalities? Specifically, we ask, for the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall and whether information about visual context is integrated into olfactory memory. We find that changing visual context between training and test does not deteriorate olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors for reinforcement even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.

  20. Identification of Quality Visual-Based Learning Material for Technology Education

    Science.gov (United States)

    Katsioloudis, Petros

    2010-01-01

    It is widely known that the use of visual technology enhances learning by providing a better understanding of the topic as well as motivating students. If all visual-based learning materials (tables, figures, photos, etc.) were equally effective in facilitating student achievement of all kinds of educational objectives, there would virtually be no…

  1. Supporting Fieldwork Learning by Visual Documentation and Reflection

    DEFF Research Database (Denmark)

    Saltofte, Margit

    2017-01-01

    Photos can be used as supplements to written fieldnotes and as sources for mediating reflection during fieldwork and analysis. As part of a field diary, photos can support the recall of experiences and a reflective distance to the events. Photography, as visual representation, can also lead...... to reflection on learning and knowledge production in the process of learning how to conduct fieldwork. Pictures can open the way for abstractions and hidden knowledge, which might otherwise be difficult to formulate in words. However, writing and written field notes cannot be fully replaced by photos...... the role played by photos in their learning process. For students, photography is an everyday documentation form that can support their memory of field experience and serve as a vehicle for the analysis of data. The article discusses how photos and visual representations support fieldwork learning...

  2. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    Science.gov (United States)

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  3. Learning Programming Technique through Visual Programming Application as Learning Media with Fuzzy Rating

    Science.gov (United States)

    Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad

    2017-01-01

    Programming technique is one of the subjects at Vocational High School in Indonesia. This subject contains theory and application of programming utilizing Visual Programming. Students experience difficulties with purely textual learning. Therefore, it is necessary to develop media as a tool to transfer learning materials. The objectives of this…

  4. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  5. An analysis of mathematical connection ability based on student learning style on visualization auditory kinesthetic (VAK) learning model with self-assessment

    Science.gov (United States)

    Apipah, S.; Kartono; Isnarto

    2018-03-01

    This research aims to analyze the quality of VAK learning with self-assessment toward the mathematical connection ability of students and to analyze students' mathematical connection ability based on learning styles in the VAK learning model with self-assessment. This research applies a mixed-method approach with a concurrent embedded design. The subjects consist of VIII grade students from State Junior High School 9 Semarang with visual, auditory, and kinesthetic learning styles. The data on learning style are collected with questionnaires, the data on mathematical connection ability are collected with tests, and the data on self-assessment are collected with assessment sheets. The quality of learning is assessed qualitatively across the planning, realization, and evaluation stages. The results of the mathematical connection ability test are analyzed quantitatively with a mean test, a completeness test, a mean difference test, and a mean proportional difference test. The results show that the VAK learning model produces well-qualified learning from both the qualitative and the quantitative perspectives. Students with a visual learning style perform the highest mathematical connection ability, students with a kinesthetic learning style perform average mathematical connection ability, and students with an auditory learning style perform the lowest mathematical connection ability.

  6. Supramodal processing optimizes visual perceptual learning and plasticity.

    Science.gov (United States)

    Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie

    2014-06-01

    Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First and common to all three groups, vlPFC showed selectivity to the learned coherence levels whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+ possibly mediated by temporal cortices in AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004) in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory

  7. Short-term perceptual learning in visual conjunction search.

    Science.gov (United States)

    Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong

    2014-08-01

    Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.

  8. Cross-modal interaction between visual and olfactory learning in Apis cerana.

    Science.gov (United States)

    Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang

    2014-10-01

    The power of the small honeybee brain in carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for black/white grating (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) was obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana could exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.

  9. Learning by Designing Interview Methods in Special Education

    DEFF Research Database (Denmark)

    Jönsson, Lise Høgh

    2017-01-01

    , and people with learning disabilities worked together to develop five new visual and digital methods for interviewing in special education. Thereby not only enhancing the students’ competences, knowledge and proficiency in innovation and research, but also proposing a new teaching paradigm for university...

  10. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
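
    The cross-correlation at the heart of the perceptual-echo analysis can be sketched as below: correlate a random luminance sequence with an EEG trace over a range of lags. The sampling rate, the injected 100 ms echo, and the single-trial treatment are assumptions for illustration; the published analysis averages over many trials and electrodes.

    import numpy as np

    fs = 160                                    # sampling rate in Hz (assumed)
    rng = np.random.default_rng(7)
    lum = rng.uniform(-1, 1, 6 * fs)            # 6 s of random luminance values
    eeg = np.roll(lum, int(0.1 * fs)) * 0.3 + rng.normal(0, 1, lum.size)   # synthetic EEG with an echo near 100 ms

    lags = np.arange(0, fs)                     # lags from 0 to ~1 s
    xcorr = []
    for lag in lags:
        a = lum[: lum.size - lag]               # luminance leading the EEG by `lag` samples
        b = eeg[lag:]
        xcorr.append(np.corrcoef(a, b)[0, 1])
    xcorr = np.array(xcorr)                     # should peak around lag = 0.1 * fs in this toy example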

  11. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2017-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response.

  12. Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integrat...... a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of multisensory orientation response....

  13. You see what you have learned. Evidence for an interrelation of associative learning and visual selective attention.

    Science.gov (United States)

    Feldmann-Wüstefeld, Tobias; Uengoer, Metin; Schubö, Anna

    2015-11-01

    Besides visual salience and observers' current intention, prior learning experience may influence deployment of visual attention. Associative learning models postulate that observers pay more attention to stimuli previously experienced as reliable predictors of specific outcomes. To investigate the impact of learning experience on deployment of attention, we combined an associative learning task with a visual search task and measured event-related potentials of the EEG as neural markers of attention deployment. In the learning task, participants categorized stimuli varying in color/shape with only one dimension being predictive of category membership. In the search task, participants searched for a shape target while disregarding irrelevant color distractors. Behavioral results showed that color distractors impaired performance to a greater degree when color rather than shape was predictive in the learning task. Neurophysiological results showed that the amplified distraction was due to differential attention deployment (N2pc). Experiment 2 showed that when color was predictive for learning, color distractors captured more attention in the search task (ND component) and more suppression of the color distractor was required (PD component). The present results thus demonstrate that priority in visual attention is biased toward predictive stimuli, which allows learning experience to shape selection. We also show that learning experience can overrule strong top-down control (blocked tasks, Experiment 3) and that learning experience has a longer-term effect on attention deployment (tasks on two successive days, Experiment 4). © 2015 Society for Psychophysiological Research.

  14. Rapid learning in visual cortical networks.

    Science.gov (United States)

    Wang, Ye; Dragoi, Valentin

    2015-08-26

    Although changes in brain activity during learning have been extensively examined at the single neuron level, the coding strategies employed by cell populations remain mysterious. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization was correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, or in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.

  15. Assessing learning preferences of dental students using visual, auditory, reading-writing, and kinesthetic questionnaire

    Directory of Open Access Journals (Sweden)

    Darshana Bennadi

    2015-01-01

    Introduction: Educators of the health care professions (teachers) are committed to preparing future health care providers, but face many challenges in transmitting their ever-expanding knowledge to students. This study focused on different learning styles among dental students. Aim: To assess different learning preferences among dental students. Materials and Methods: This is a descriptive cross-sectional questionnaire study using a visual, auditory, reading-writing, and kinesthetic questionnaire among dental students. Results: The majority (75.8%) of the students preferred a multimodal learning style. Multimodal learning was common among clinical students. There was no statistically significant difference in learning styles in relation to gender (P > 0.05). Conclusion: In the present study, the majority of students preferred a multimodal learning style. Knowledge about the learning style preferences of different professions can help to enhance teaching methods for students.

  16. Students and Teachers as Developers of Visual Learning Designs with Augmented Reality for Visual Arts Education

    DEFF Research Database (Denmark)

    Buhl, Mie

    2017-01-01

    This paper reports on a project in which communication and digital media students collaborated with visual arts teacher students and their teacher trainer to develop visual digital designs for learning that involved Augmented Reality (AR) technology. The project exemplified a design...... upon which to discuss the potential for reengineering the traditional role of the teacher/learning designer as the only supplier and the students as the receivers of digital learning designs in higher education. The discussion applies the actor-network theory and socio-material perspectives...... on education in order to enhance the meta-perspective of traditional teacher and student roles....

  17. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  18. Anodal tDCS to V1 blocks visual perceptual learning consolidation.

    Science.gov (United States)

    Peters, Megan A K; Thompson, Benjamin; Merabet, Lotfi B; Wu, Allan D; Shams, Ladan

    2013-06-01

    This study examined the effects of visual cortex transcranial direct current stimulation (tDCS) on visual processing and learning. Participants performed a contrast detection task on two consecutive days. Each session consisted of a baseline measurement followed by measurements made during active or sham stimulation. On the first day, one group received anodal stimulation to primary visual cortex (V1), while another received cathodal stimulation. Stimulation polarity was reversed for these groups on the second day. The third (control) group of subjects received sham stimulation on both days. No improvements or decrements in contrast sensitivity relative to the same-day baseline were observed during real tDCS, nor was any within-session learning trend observed. However, task performance improved significantly from Day 1 to Day 2 for the participants who received cathodal tDCS on Day 1 and for the sham group. No such improvement was found for the participants who received anodal stimulation on Day 1, indicating that anodal tDCS blocked overnight consolidation of visual learning, perhaps through engagement of inhibitory homeostatic plasticity mechanisms or alteration of the signal-to-noise ratio within stimulated cortex. These results show that applying tDCS to the visual cortex can modify consolidation of visual learning. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Learning phacoemulsification. Results of different teaching methods.

    Directory of Open Access Journals (Sweden)

    Hennig Albrecht

    2004-01-01

    We report the learning curves of three eye surgeons converting from sutureless extracapsular cataract extraction to phacoemulsification using different teaching methods. Posterior capsule rupture (PCR) as a per-operative complication and the visual outcome of the first 100 operations were analysed. The PCR rate was 4% and 15% in supervised and unsupervised surgery, respectively. Likewise, an uncorrected visual acuity of ≥ 6/18 on the first postoperative day was seen in 62 patients (62%) and in 22 patients (22%) in supervised and unsupervised surgery, respectively.

  20. Supporting learning skills in visual art classes: The benefits of teacher awareness

    Directory of Open Access Journals (Sweden)

    Helen Arov

    2017-09-01

    This study focused on middle school art teachers' support for the development of students' learning skills, specifically their awareness of the framework of learning skills. It also looked at the relations between the teaching practices teachers use to support learning skills and students' learning motivation in art classes. The study combined qualitative and quantitative research methods. Class observations and interviews were conducted with ten Estonian middle school art teachers. One hundred and forty-eight students from the observed classes filled out a learning motivation questionnaire about their interest and achievement goals in the visual arts. The study draws attention to the importance of teachers being aware of and valuing learning skills alongside subject-specific knowledge, as this could enhance students' autonomous motivation and support adaptive goal setting.

  1. Joint learning and weighting of visual vocabulary for bag-of-feature based tissue classification

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2013-01-01

    their power in this field. Two important issues of bag-of-feature strategy for tissue classification are investigated in this paper: the visual vocabulary learning and weighting, which are always considered independently in traditional methods by neglecting
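
    As a point of reference, the "traditional" pipeline criticized above treats vocabulary learning and weighting as two independent steps. A minimal sketch of that baseline, assuming synthetic SIFT-like descriptors and a TF-IDF-style weighting (it is not the joint learning/weighting method proposed in the paper), is:

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        # Synthetic "local descriptors" for a handful of tissue images (128-D, SIFT-like).
        images = [rng.normal(size=(rng.integers(80, 120), 128)) for _ in range(10)]

        # Step 1: vocabulary learning (independent of any class labels).
        vocab_size = 32
        kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
        kmeans.fit(np.vstack(images))

        # Step 2: encode each image as a histogram of visual words, then weight bins.
        def bof_histogram(desc):
            words = kmeans.predict(desc)
            hist = np.bincount(words, minlength=vocab_size).astype(float)
            return hist / hist.sum()

        hists = np.array([bof_histogram(d) for d in images])
        doc_freq = (hists > 0).sum(axis=0)                   # images containing each word
        idf = np.log(len(images) / np.maximum(doc_freq, 1))  # TF-IDF-style weighting
        weighted = hists * idf
        print(weighted.shape)  # (n_images, vocab_size) features for a downstream classifier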

  2. Context generalization in Drosophila visual learning requires the mushroom bodies

    Science.gov (United States)

    Liu, Li; Wolf, Reinhard; Ernst, Roman; Heisenberg, Martin

    1999-08-01

    The world is permanently changing. Laboratory experiments on learning and memory normally minimize this feature of reality, keeping all conditions except the conditioned and unconditioned stimuli as constant as possible. In the real world, however, animals need to extract from the universe of sensory signals the actual predictors of salient events by separating them from non-predictive stimuli (context). In principle, this can be achieved if only those sensory inputs that resemble the reinforcer in their temporal structure are taken as predictors. Here we study visual learning in the fly Drosophila melanogaster, using a flight simulator, and show that memory retrieval is, indeed, partially context-independent. Moreover, we show that the mushroom bodies, which are required for olfactory but not visual or tactile learning, effectively support context generalization. In visual learning in Drosophila, it appears that a facilitating effect of context cues for memory retrieval is the default state, whereas making recall context-independent requires additional processing.

  3. Implementation of ICARE learning model using visualization animation on biotechnology course

    Science.gov (United States)

    Hidayat, Habibi

    2017-12-01

    ICARE is a learning model that ensures students actively participate in the learning process, here supported by animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection, and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students over a long period. This learning model was deemed capable of improving learning outcomes and interest in learning in the Biotechnology course when applied together with animated visualization. The model motivated students to participate in the learning process, and learning outcomes increased compared with before. Applying the ICARE learning model with animated visualization in the Biotechnology course improved student results: the average score rose from 70.98 (a percentage of 75%) on the midterm test to 71.57 (a percentage of 68.63%) on the final test. Interest in learning also increased, as seen in student activity across cycles: the first cycle obtained an average value of 33.5 (adequate category), the second cycle an average value of 36.5 (good category), and the third cycle an average value of 36.5 (good category).

  4. The Effect of Visual of a Courseware towards Pre-University Students' Learning in Literature

    Science.gov (United States)

    Masri, Mazyrah; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Sulaiman, Suziah

    This paper highlights the effect of the visuals in a multimedia courseware, the Black Cat Courseware (BC-C), developed for learning literature at a pre-university level at Universiti Teknologi PETRONAS (UTP). The contents of the courseware are based on the Black Cat story, which is covered in an English course at the university. The objective of this paper is to evaluate the usability and effectiveness of BC-C. A total of sixty foundation students were involved in the study. A quasi-experimental design was employed, forming two groups: an experimental group and a control group. The experimental group interacted with BC-C as part of the learning activities, while the control group used conventional learning methods. The results indicate that the experimental group achieved a statistically significantly better understanding of the Black Cat story than the control group. The results also show that the visuals increased the students' performance in literature learning at the pre-university level.

  5. Mirror reversal and visual rotation are learned and consolidated via separate mechanisms: recalibrating or learning de novo?

    Science.gov (United States)

    Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn

    2014-10-08

    Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: Mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing versus acquisition of a new control policy and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors 0270-6474/14/3413768-12$15.00/0.

  6. Age-related impairments in active learning and strategic visual exploration

    Directory of Open Access Journals (Sweden)

    Kelly L Brandstatt

    2014-02-01

    Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.

  7. Age-related impairments in active learning and strategic visual exploration.

    Science.gov (United States)

    Brandstatt, Kelly L; Voss, Joel L

    2014-01-01

    Old age could impair memory by disrupting learning strategies used by younger individuals. We tested this possibility by manipulating the ability to use visual-exploration strategies during learning. Subjects controlled visual exploration during active learning, thus permitting the use of strategies, whereas strategies were limited during passive learning via predetermined exploration patterns. Performance on tests of object recognition and object-location recall was matched for younger and older subjects for objects studied passively, when learning strategies were restricted. Active learning improved object recognition similarly for younger and older subjects. However, active learning improved object-location recall for younger subjects, but not older subjects. Exploration patterns were used to identify a learning strategy involving repeat viewing. Older subjects used this strategy less frequently and it provided less memory benefit compared to younger subjects. In previous experiments, we linked hippocampal-prefrontal co-activation to improvements in object-location recall from active learning and to the exploration strategy. Collectively, these findings suggest that age-related memory problems result partly from impaired strategies during learning, potentially due to reduced hippocampal-prefrontal co-engagement.

  8. Summarize to learn: summarization and visualization of text for ubiquitous learning

    DEFF Research Database (Denmark)

    Chongtay, Rocio; Last, Mark; Verbeke, Mathias

    2013-01-01

    Visualizations can stand in many relations to texts – and, as research into learning with pictures has shown, they can become particularly valuable when they transform the contents of the text (rather than just duplicate its message or structure it). But what kinds of transformations can...... be particularly helpful in the learning process? In this paper, we argue that interacting with, and creating, summaries of texts is a key transformation technique, and we investigate how textual and graphical summarization approaches, as well as automatic and manual summarization, can complement one another...... to support effective learning....

  9. Perceptual learning modifies the functional specializations of visual cortical areas.

    Science.gov (United States)

    Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang

    2016-05-17

    Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

  10. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    Science.gov (United States)

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  11. Visual Hybrid Development Learning System (VHDLS) framework for children with autism.

    Science.gov (United States)

    Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina

    2015-10-01

    Education can serve as a partial remedy for the deficits of children with autism. As a result, these children require special techniques to gain their attention and interest in learning, compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid Development Learning System (VHDLS) framework that is based on an instructional design model, multimedia cognitive learning theory, and learning style, in order to guide software developers in developing learning systems for children with autism. The results from this study showed that the attention of children with autism increased more with the proposed VHDLS framework.

  12. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    Science.gov (United States)

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed when both performing skilled movements, as well as understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor in core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  13. The Effect of Visual Variability on the Learning of Academic Concepts.

    Science.gov (United States)

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  14. Learning invariance from natural images inspired by observations in the primary visual cortex.

    Science.gov (United States)

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to largely recognize objects invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
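
    The rules proposed in the paper are built on calcium dynamics and homeostatic regulation; the sketch below shows only the much simpler textbook case (Oja's rule) to illustrate the shared principle that weights grow with correlated pre- and postsynaptic activity while staying bounded. The patch data and all parameters are stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)
        n_inputs, lr, n_steps = 64, 0.01, 5000
        w = rng.normal(scale=0.1, size=n_inputs)

        for _ in range(n_steps):
            x = rng.normal(size=n_inputs)     # stand-in for a whitened image patch
            y = np.dot(w, x)                  # postsynaptic response (linear unit)
            w += lr * y * (x - y * w)         # Oja's rule: Hebbian term plus decay

        print("weight norm after training:", np.linalg.norm(w))  # converges toward 1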

  15. Differential learning and memory performance in OEF/OIF veterans for verbal and visual material.

    Science.gov (United States)

    Sozda, Christopher N; Muir, James J; Springer, Utaka S; Partovi, Diana; Cole, Michael A

    2014-05-01

    Memory complaints are particularly salient among veterans who experience combat-related mild traumatic brain injuries and/or trauma exposure, and represent a primary barrier to successful societal reintegration and everyday functioning. Anecdotally within clinical practice, verbal learning and memory performance frequently appears differentially reduced versus visual learning and memory scores. We sought to empirically investigate the robustness of a verbal versus visual learning and memory discrepancy and to explore potential mechanisms for a verbal/visual performance split. Participants consisted of 103 veterans with reported history of mild traumatic brain injuries returning home from U.S. military Operations Enduring Freedom and Iraqi Freedom referred for outpatient neuropsychological evaluation. Findings indicate that visual learning and memory abilities were largely intact while verbal learning and memory performance was significantly reduced in comparison, residing at approximately 1.1 SD below the mean for verbal learning and approximately 1.4 SD below the mean for verbal memory. This difference was not observed in verbal versus visual fluency performance, nor was it associated with estimated premorbid verbal abilities or traumatic brain injury history. In our sample, symptoms of depression, but not posttraumatic stress disorder, were significantly associated with reduced composite verbal learning and memory performance. Verbal learning and memory performance may benefit from targeted treatment of depressive symptomatology. Also, because visual learning and memory functions may remain intact, these might be emphasized when applying neurocognitive rehabilitation interventions to compensate for observed verbal learning and memory difficulties.

  16. Learning a New Selection Rule in Visual and Frontal Cortex.

    Science.gov (United States)

    van der Togt, Chris; Stănişor, Liviu; Pooresmaeili, Arezoo; Albantakis, Larissa; Deco, Gustavo; Roelfsema, Pieter R

    2016-08-01

    How do you make a decision if you do not know the rules of the game? Models of sensory decision-making suggest that choices are slow if evidence is weak, but they may only apply if the subject knows the task rules. Here, we asked how the learning of a new rule influences neuronal activity in the visual (area V1) and frontal cortex (area FEF) of monkeys. We devised a new icon-selection task. On each day, the monkeys saw 2 new icons (small pictures) and learned which one was relevant. We rewarded eye movements to a saccade target connected to the relevant icon with a curve. Neurons in visual and frontal cortex coded the monkey's choice, because the representation of the selected curve was enhanced. Learning delayed the neuronal selection signals and we uncovered the cause of this delay in V1, where learning to select the relevant icon caused an early suppression of surrounding image elements. These results demonstrate that the learning of a new rule causes a transition from fast and random decisions to a more considerate strategy that takes additional time and they reveal the contribution of visual and frontal cortex to the learning process. © The Author 2016. Published by Oxford University Press.

  17. Visual memory and learning in extremely low-birth-weight/extremely preterm adolescents compared with controls: a geographic study.

    Science.gov (United States)

    Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J

    2014-04-01

    Contemporary data on visual memory and learning in adolescents born extremely preterm (EP) or with extremely low birth weight (ELBW) are limited. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and after excluding adolescents with neurosensory disability and/or low IQ. Overall, EP/ELBW adolescents showed poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.

  18. Using E-Learning Portfolio Technology To Support Visual Art Learning

    Directory of Open Access Journals (Sweden)

    Greer Jones-Woodham

    2009-08-01

    Inspired by self-directed learning (SDL) theories, this paper uses learning portfolios as a reflective practice to improve student learning and develop personal responsibility, growth and autonomy in learning in a Visual Arts course. Students use PowerPoint presentations to demonstrate their concepts by creating folders that are linked to e-portfolios on the University website. This paper establishes the role of learning e-portfolios in improving teaching and learning as a model of reflection, collaboration and documentation in the making of art as a self-directed process. These portfolios link students' creative thinking to their conceptual frameworks. They also establish a process of inquiry using journals to map students' processes through their reflections and peer feedback. This practice argues that learning e-portfolios in studio art depend not only on a set of objectives whose means are justified by an agreed end but also on a practice that engages students' reflection about their actions during their art-making practice. Using the principles of the maker as the intuitive and reflective practitioner, the making as the process in which the learning e-portfolios communicate the process and conceptual frameworks of learning, and the made as evidence of that learning in light of progress made, this paper demonstrates that learning-in-action and reflecting-in-and-on-action are driven by self-direction. With technology, students bring their learning context to bear with the use of SDL. Students' use of PowerPoint technology in making their portfolios is systematic and builds on students' competencies, as this process guides students' beliefs and actions about their work, based on theory and concepts, in response to a visual culture that is Trinidad and Tobago. Students' self-directed art-making process, as self-directed learning, models the process of articulated learning. Communicating about

  19. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    Science.gov (United States)

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, labeled data objects are often limited, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper, we propose a weakly supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
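
    For orientation, the purely reconstructive dictionary learning that this work takes as its starting point can be sketched with scikit-learn; the trace-lasso regularization and the attribute-driven weak supervision described above are not included, and the descriptors are random stand-ins.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 64))            # 300 descriptors, 64-D

        dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=50,
                                  transform_algorithm="lasso_lars", random_state=0)
        codes = dico.fit_transform(X)             # sparse codes used as features
        print(codes.shape, "nonzeros per sample:", (codes != 0).sum(1).mean())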

  20. Visual learning alters the spontaneous activity of the resting human brain: an fNIRS study.

    Science.gov (United States)

    Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan

    2014-01-01

    Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning.
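
    The core RSFC computation described here reduces to pairwise correlations between band-pass-filtered HbO time series. A minimal sketch, assuming synthetic channel data, a 10 Hz fNIRS sampling rate and a 0.01-0.08 Hz band (all illustrative choices, not the authors' exact preprocessing, which would also include motion correction and physiological regression), is:

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 10.0                                   # fNIRS sampling rate (Hz), assumed
        n_channels, n_samples = 24, 3000            # 5-minute resting run, assumed
        rng = np.random.default_rng(2)
        hbo = rng.normal(size=(n_channels, n_samples))   # stand-in HbO signals

        # Band-pass 0.01-0.08 Hz to isolate slow resting-state fluctuations.
        b, a = butter(3, [0.01 / (fs / 2), 0.08 / (fs / 2)], btype="band")
        hbo_filt = filtfilt(b, a, hbo, axis=1)

        rsfc = np.corrcoef(hbo_filt)                # channels x channels connectivity matrix
        print(rsfc.shape, rsfc[0, 1])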

  1. Interactions between attention, context and learning in primary visual cortex.

    Science.gov (United States)

    Gilbert, C; Ito, M; Kapadia, M; Westheimer, G

    2000-01-01

    Attention in early visual processing engages the higher order, context dependent properties of neurons. Even at the earliest stages of visual cortical processing neurons play a role in intermediate level vision - contour integration and surface segmentation. The contextual influences mediating this process may be derived from long range connections within primary visual cortex (V1). These influences are subject to perceptual learning, and are strongly modulated by visuospatial attention, which is itself a learning dependent process. The attentional influences may involve interactions between feedback and horizontal connections in V1. V1 is therefore a dynamic and active processor, subject to top-down influences.

  2. Lesion classification using clinical and visual data fusion by multiple kernel learning

    Science.gov (United States)

    Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf

    2014-03-01

    To overcome operator dependency and to increase diagnosis accuracy in breast ultrasound (US), a lot of effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited, since they rely on correct automatic lesion detection and localization and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images, and construct a textual descriptor of patients by extracting relevant keywords from patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data fusion methods, namely early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and confirmed diagnoses. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data is very sparse and noisy, its MKL-based fusion with visual features yields a significant improvement of the classification accuracy, as compared to the classifier based on image features only.
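
    To make the fusion idea concrete, the sketch below combines an RBF kernel on visual features with a linear kernel on sparse clinical keyword indicators and trains an SVM on the combined kernel. Genuine MKL learns the combination weight jointly with the classifier, whereas here it is fixed, and all data are synthetic stand-ins rather than the 408-case breast-ultrasound database.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        n = 200
        X_visual = rng.normal(size=(n, 40))                    # visual descriptors per case
        X_text = (rng.random((n, 100)) < 0.05).astype(float)   # sparse keyword indicators
        y = rng.integers(0, 2, size=n)                         # benign (0) vs malignant (1)

        K_visual = rbf_kernel(X_visual, gamma=0.05)
        K_text = linear_kernel(X_text)
        beta = 0.6                                       # fixed fusion weight (assumed)
        K = beta * K_visual + (1 - beta) * K_text        # convex kernel combination

        clf = SVC(kernel="precomputed", C=1.0)
        clf.fit(K, y)
        print("training accuracy:", clf.score(K, y))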

  3. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.

  4. Learning styles: The learning methods of air traffic control students

    Science.gov (United States)

    Jackson, Dontae L.

    In the world of aviation, air traffic controllers are an integral part in the overall level of safety that is provided. With a number of controllers reaching retirement age, the Air Traffic Collegiate Training Initiative (AT-CTI) was created to provide a stronger candidate pool. However, AT-CTI Instructors have found that a number of AT-CTI students are unable to memorize types of aircraft effectively. This study focused on the basic learning styles (auditory, visual, and kinesthetic) of students and created a teaching method to try to increase memorization in AT-CTI students. The participants were asked to take a questionnaire to determine their learning style. Upon knowing their learning styles, participants attended two classroom sessions. The participants were given a presentation in the first class, and divided into a control and experimental group for the second class. The control group was given the same presentation from the first classroom session while the experimental group had a group discussion and utilized Middle Tennessee State University's Air Traffic Control simulator to learn the aircraft types. Participants took a quiz and filled out a survey, which tested the new teaching method. An appropriate statistical analysis was applied to determine if there was a significant difference between the control and experimental groups. The results showed that even though the participants felt that the method increased their learning, there was no significant difference between the two groups.

  5. Independent Interactive Inquiry-Based Learning Modules Using Audio-Visual Instruction In Statistics

    OpenAIRE

    McDaniel, Scott N.; Green, Lisa

    2012-01-01

    Simulations can make complex ideas easier for students to visualize and understand. It has been shown that guidance in the use of these simulations enhances students’ learning. This paper describes the implementation and evaluation of the Independent Interactive Inquiry-based (I3) Learning Modules, which use existing open-source Java applets, combined with audio-visual instruction. Students are guided to discover and visualize important concepts in post-calculus and algebra-based courses in p...

  6. STUDIO LEARNING METHOD IN SCHOOL OF DESIGN IN INDONESIA: A CASE STUDY ON THE APPLICATION OF STUDIO LEARNING METHOD FOR THE VISUAL COMMUNICATION DESIGN DEPARTMENT OF PETRA CHRISTIAN UNIVERSITY

    Directory of Open Access Journals (Sweden)

    Freddy H. Istanto

    2002-01-01

    Initially, design education in Indonesia emphasized an engineering approach; only in the last 20 years has design education with a fine-arts orientation begun. One of the teaching methods in this design education is the studio teaching model, adopted from the Bauhaus apprenticeship system in Germany and the Beaux-Arts system in France. The Visual Communication Design Department of Petra Christian University applies this studio teaching-learning process not only to its core or main subjects, but also to subjects that require a skills-based approach. Physically, the connotation of a studio is a room or place for drawing, but in the design context a studio is not merely a drawing room: it is also a learning process, an enrichment and exploration of ideas through discussing, listening, seeing, feeling, touching and practising. The studio is a potent place for integrating skills, design values and design discourse. With the development of the information world, the use of communication technology and advances in information modes (the Internet) not only make information easier to access; more importantly, the studio becomes an ideal place for broadcasting the strength, wisdom and greatness of traditional aspects to the whole world. Abstract in English: In the early time of its appearance, the design education in Indonesia had a very technical sense. Then, it began to adopt the artistic sense about 20 years later. The method of studio learning in Indonesia is adopted from the development of the Bauhaus and Beaux-arts apprentice system. This apprentice system was once influenced by the design in the technical/engineering context. In Petra Christian University's Visual Communication Design Department, this studio learning method is not only applied to its core subject but also to its skill subject. Physically, the connotation of studio is a room

  7. Supervised Learning for Visual Pattern Classification

    Science.gov (United States)

    Zheng, Nanning; Xue, Jianru

    This chapter presents an overview of the topics and major ideas of supervised learning for visual pattern classification. Two prevalent algorithms, i.e., the support vector machine (SVM) and the boosting algorithm, are briefly introduced. SVMs and boosting algorithms are two hot topics of recent research in supervised learning. SVMs improve the generalization of the learning machine by implementing the rule of structural risk minimization (SRM). They exhibit good generalization even when little training data are available for machine training. The boosting algorithm can boost a weak classifier to a strong classifier by means of the so-called classifier combination. This algorithm provides a general way for producing a classifier with high generalization capability from a great number of weak classifiers.
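
    Both families of classifiers discussed in the chapter are available off the shelf; the toy comparison below, on the scikit-learn digits dataset, is only meant to show the two APIs side by side and is not tied to any experiment in the chapter.

        from sklearn.datasets import load_digits
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        X, y = load_digits(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Margin-based learner: RBF-kernel support vector machine.
        svm = SVC(kernel="rbf", gamma="scale", C=10.0).fit(X_train, y_train)

        # Ensemble learner: AdaBoost combining many weak (default stump) classifiers.
        boost = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

        print("SVM test accuracy:     ", svm.score(X_test, y_test))
        print("AdaBoost test accuracy:", boost.score(X_test, y_test))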

  8. Reinforcement learning for dpm of embedded visual sensor nodes

    International Nuclear Information System (INIS)

    Khani, U.; Sadhayo, I. H.

    2014-01-01

    This paper proposes an RL (Reinforcement Learning) based DPM (Dynamic Power Management) technique to learn timeout policies during the operation of a visual sensor node that has multiple power/performance states. As opposed to the widely used static timeout policies, our proposed DPM policy, also referred to as OLTP (Online Learning of Timeout Policies), learns to dynamically change the timeout decisions in the different node states, including the non-operational states. The selection of timeout values in different power/performance states of a visual sensing platform is based on workload estimates derived from an ML-ANN (Multi-Layer Artificial Neural Network) and an objective function given by weighted performance and power parameters. The DPM approach is also able to dynamically adjust the power-performance weights online to satisfy a given constraint on either power consumption or performance. Results show that the proposed learning algorithm explores the power-performance tradeoff under non-stationary workload and outperforms other DPM policies. It also performs online adjustment of the tradeoff parameters in order to meet a user-specified constraint. (author)
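
    The gist of learning a timeout policy online can be illustrated with a tabular Q-learning toy: states pair a power mode with a coarse workload level, actions are candidate timeout values, and the reward penalizes a weighted sum of energy and wake-up latency. Everything below (the crude simulator, the reward weights, the state discretization) is an assumption for illustration, not the OLTP algorithm or the ML-ANN workload estimator from the paper.

        import random

        POWER_STATES = ["active", "sleep"]
        WORKLOAD_LEVELS = ["low", "high"]
        TIMEOUTS = [0.1, 0.5, 2.0]          # candidate timeout values (seconds)

        Q = {(s, w): [0.0] * len(TIMEOUTS) for s in POWER_STATES for w in WORKLOAD_LEVELS}
        alpha, gamma, eps = 0.1, 0.9, 0.2

        def step(workload, timeout):
            """Crude stand-in environment: returns (reward, next_state, next_workload)."""
            idle_gap = random.expovariate(5.0 if workload == "high" else 0.5)
            slept = idle_gap > timeout
            energy = min(timeout, idle_gap) + (0.2 if slept else 0.0)   # idle power + sleep transition
            latency = 0.3 if slept else 0.0                             # wake-up delay penalty
            reward = -(energy + 2.0 * latency)                          # weighted power/performance cost
            return reward, ("sleep" if slept else "active"), random.choice(WORKLOAD_LEVELS)

        state, workload = "active", "low"
        for _ in range(20000):
            qs = Q[(state, workload)]
            a = random.randrange(len(TIMEOUTS)) if random.random() < eps else qs.index(max(qs))
            reward, next_state, next_workload = step(workload, TIMEOUTS[a])
            best_next = max(Q[(next_state, next_workload)])
            qs[a] += alpha * (reward + gamma * best_next - qs[a])
            state, workload = next_state, next_workload

        for key, vals in Q.items():
            print(key, "-> learned timeout:", TIMEOUTS[vals.index(max(vals))])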

  9. Learning STEM Through Integrative Visual Representations

    Science.gov (United States)

    Virk, Satyugjit Singh

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents where the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions, than students presented with isolated animations of these subcomponents. This is because the internal attentional cognitive load of chunking these concepts is greatly reduced and hence the overall cognitive load is less for the integrated visuals group than the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time on task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Results from the free response essay subcomponent subscores did demonstrate significant differences in favor of the integrated group as well as some evidence from the diagram labeling section. Results from free response, short answer and What-If (problem solving), and diagram labeling detailed interrelationship subscores demonstrated the integrated group did indeed learn the extra material they were presented with. This data demonstrating the integrated group learned the extra material they were presented with provides some initial

  10. Visual artificial grammar learning in dyslexia : A meta-analysis

    NARCIS (Netherlands)

    van Witteloostuijn, Merel; Boersma, Paul; Wijnen, Frank; Rispens, Judith

    2017-01-01

    Background Literacy impairments in dyslexia have been hypothesized to be (partly) due to an implicit learning deficit. However, studies of implicit visual artificial grammar learning (AGL) have often yielded null results. Aims The aim of this study is to weigh the evidence collected thus far by

  11. Introduction of computing in physics learning visual programing

    International Nuclear Information System (INIS)

    Kim, Cheung Seop

    1999-12-01

    This book introduces physics and programming, the foundations of Visual Basic, the grammar of Visual Basic, visual programming, the solution of equations, matrix calculations, the solution of simultaneous equations, differentiation, differential equations, simultaneous and second-order differential equations, integration, and the solution of partial differential equations. It also covers the Basic language, terms of Visual Basic, the usage of methods, graphical methods, the step-by-step method, the false-position method, the Gauss elimination method, the difference method and the Euler method.
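
    The numerical techniques listed are language-agnostic; the book works them out in Visual Basic, but the false-position (regula falsi) method, for example, fits in a few lines of Python:

        def false_position(f, a, b, tol=1e-10, max_iter=100):
            """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
            fa, fb = f(a), f(b)
            if fa * fb > 0:
                raise ValueError("f(a) and f(b) must bracket a root")
            for _ in range(max_iter):
                c = b - fb * (b - a) / (fb - fa)   # secant through (a, fa), (b, fb)
                fc = f(c)
                if abs(fc) < tol:
                    return c
                if fa * fc < 0:                    # root lies in [a, c]
                    b, fb = c, fc
                else:                              # root lies in [c, b]
                    a, fa = c, fc
            return c

        # Example: solve x**3 - x - 2 = 0 (root near 1.5214).
        print(false_position(lambda x: x**3 - x - 2, 1.0, 2.0))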

  12. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    Science.gov (United States)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.; Hetzler, Elizabeth G.; Cook, Kristin A.; Cowley, Wendy E.

    2015-06-30

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
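
    The patent abstract leaves the document-processing and association-identification steps open. One plausible, purely illustrative reading uses TF-IDF vectors and cosine similarity to relate an initial document set to documents arriving later; nothing in this sketch is prescribed by the patent itself.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        initial_docs = ["perceptual learning in visual cortex",
                        "visual attention and learning",
                        "air traffic control training methods"]
        additional_docs = ["attention shapes perceptual learning"]

        vec = TfidfVectorizer()
        X_initial = vec.fit_transform(initial_docs)            # first moment in time
        X_additional = vec.transform(additional_docs)          # later documents

        first_associations = cosine_similarity(X_initial)                  # initial vs initial
        second_associations = cosine_similarity(X_additional, X_initial)   # new vs initial
        print(first_associations.round(2))
        print(second_associations.round(2))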

  13. Redefining "Learning" in Statistical Learning: What Does an Online Measure Reveal About the Assimilation of Visual Regularities?

    Science.gov (United States)

    Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram

    2017-10-07

    From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL. © 2017 Cognitive Science Society, Inc.

  14. Concrete and abstract visualizations in history learning tasks

    NARCIS (Netherlands)

    Prangsma, Maaike; Van Boxtel, Carla; Kanselaar, Gellof; Kirschner, Paul A.

    2010-01-01

    Prangsma, M. E., Van Boxtel, C. A. M., Kanselaar, G., & Kirschner, P. A. (2009). Concrete and abstract visualizations in history learning tasks. British Journal of Educational Psychology, 79, 371-387.

  15. Supporting visual quality assessment with machine learning

    NARCIS (Netherlands)

    Gastaldo, P.; Zunino, R.; Redi, J.

    2013-01-01

    Objective metrics for visual quality assessment often base their reliability on the explicit modeling of the highly non-linear behavior of human perception; as a result, they may be complex and computationally expensive. Conversely, machine learning (ML) paradigms allow to tackle the quality

  16. Learning Convolutional Text Representations for Visual Question Answering

    OpenAIRE

    Wang, Zhengyang; Ji, Shuiwang

    2017-01-01

    Visual question answering is a recently proposed artificial intelligence task that requires a deep understanding of both images and texts. In deep learning, images are typically modeled through convolutional neural networks, and texts are typically modeled through recurrent neural networks. While the requirement for modeling images is similar to traditional computer vision tasks, such as object recognition and image classification, visual question answering raises a different need for textual...

  17. Enhanced visual statistical learning in adults with autism

    Science.gov (United States)

    Roser, Matthew E.; Aslin, Richard N.; McKenzie, Rebecca; Zahra, Daniel; Fiser, József

    2014-01-01

    Individuals with autism spectrum disorder (ASD) are often characterized as having social engagement and language deficiencies, but a sparing of visuo-spatial processing and short-term memory, with some evidence of supra-normal levels of performance in these domains. The present study expanded on this evidence by investigating the observational learning of visuospatial concepts from patterns of covariation across multiple exemplars. Child and adult participants with ASD, and age-matched control participants, viewed multi-shape arrays composed from a random combination of pairs of shapes that were each positioned in a fixed spatial arrangement. After this passive exposure phase, a post-test revealed that all participant groups could discriminate pairs of shapes with high covariation from randomly paired shapes with low covariation. Moreover, learning these shape-pairs with high covariation was superior in adults with ASD than in age-matched controls, while performance in children with ASD was no different than controls. These results extend previous observations of visuospatial enhancement in ASD into the domain of learning, and suggest that enhanced visual statistical learning may have arisen from a sustained bias to attend to local details in complex arrays of visual features. PMID:25151115

  18. Visual one-shot learning as an 'anti-camouflage device': a novel morphing paradigm.

    Science.gov (United States)

    Ishikawa, Tetsuo; Mogi, Ken

    2011-09-01

    Once people perceive what is in the hidden figure such as Dallenbach's cow and Dalmatian, they seldom seem to come back to the previous state when they were ignorant of the answer. This special type of learning process can be accomplished in a short time, with the effect of learning lasting for a long time (visual one-shot learning). Although it is an intriguing cognitive phenomenon, the lack of the control of difficulty of stimuli presented has been a problem in research. Here we propose a novel paradigm to create new hidden figures systematically by using a morphing technique. Through gradual changes from a blurred and binarized two-tone image to a blurred grayscale image of the original photograph including objects in a natural scene, spontaneous one-shot learning can occur at a certain stage of morphing when a sufficient amount of information is restored to the degraded image. A negative correlation between confidence levels and reaction times is observed, giving support to the fluency theory of one-shot learning. The correlation between confidence ratings and correct recognition rates indicates that participants had an accurate introspective ability (metacognition). The learning effect could be tested later by verifying whether or not the target object was recognized quicker in the second exposure. The present method opens a way for a systematic production of "good" hidden figures, which can be used to demystify the nature of visual one-shot learning.

  19. Socio-cognitive profiles for visual learning in young and older adults

    Directory of Open Access Journals (Sweden)

    Julie Christian

    2015-06-01

    It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals' cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modelling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual's age alone. Further, our results show that, independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan.

  20. Learning feedback and feedforward control in a mirror-reversed visual environment.

    Science.gov (United States)

    Kasuga, Shoko; Telgen, Sebastian; Ushiba, Junichi; Nozaki, Daichi; Diedrichsen, Jörn

    2015-10-01

    When we learn a novel task, the motor system needs to acquire both feedforward and feedback control. Currently, little is known about how the learning of these two mechanisms relate to each other. In the present study, we tested whether feedforward and feedback control need to be learned separately, or whether they are learned as common mechanism when a new control policy is acquired. Participants were trained to reach to two lateral and one central target in an environment with mirror (left-right)-reversed visual feedback. One group was allowed to make online movement corrections, whereas the other group only received visual information after the end of the movement. Learning of feedforward control was assessed by measuring the accuracy of the initial movement direction to lateral targets. Feedback control was measured in the responses to sudden visual perturbations of the cursor when reaching to the central target. Although feedforward control improved in both groups, it was significantly better when online corrections were not allowed. In contrast, feedback control only adaptively changed in participants who received online feedback and remained unchanged in the group without online corrections. Our findings suggest that when a new control policy is acquired, feedforward and feedback control are learned separately, and that there may be a trade-off in learning between feedback and feedforward controllers. Copyright © 2015 the American Physiological Society.

  1. Visual Literacy and Biochemistry Learning: The role of external representations

    Directory of Open Access Journals (Sweden)

    V.J.S.V. Santos

    2011-04-01

    Visual Literacy can be defined as people's ability to understand, use, think, learn and express themselves through external representations (ER) in a given subject. This research aims to investigate the development of abilities of ER reading and interpretation by students from a Biochemistry graduate course of the Federal University of São João Del-Rei. Visual Literacy level was assessed using a questionnaire validated in previous educational research. This diagnostic questionnaire was elaborated according to six visual abilities identified as essential for the study of metabolic pathways. The initial statistical analysis of the data collected in this study was carried out using the ANOVA method. The results showed that the questionnaire used is adequate for the research and indicated that the level of Visual Literacy related to metabolic processes increased significantly with the progress of the students through the course. There was also an indication of a possible interference in the students' performance determined by the cutoff score in the university selection process.

  2. Tracing Trajectories of Audio-Visual Learning in the Infant Brain

    Science.gov (United States)

    Kersey, Alyssa J.; Emberson, Lauren L.

    2017-01-01

    Although infants begin learning about their environment before they are born, little is known about how the infant brain changes during learning. Here, we take the initial steps in documenting how the neural responses in the brain change as infants learn to associate audio and visual stimuli. Using functional near-infrared spectroscopy (fNIRS) to…

  3. Development of an Android-based Learning Media Application for Visually Impaired Students

    Directory of Open Access Journals (Sweden)

    Nurul Azmi

    2017-06-01

    This research aims to develop the English for Disability (EFORD) application, an Android-based English learning medium for visually impaired students, and to evaluate it on the basis of assessments by a subject matter expert, a media expert, a special needs teacher, and students. The research method adopted is Research and Development (R&D). The application was developed in five phases: (1) problem analysis, through observation and interviews; (2) information gathering for product planning and analysis of the media needs of blind children; (3) product design, including the navigation map flow and storyboard; (4) design validation, in the form of expert assessment of the developed media; and (5) product testing, in which blind students assessed the application. The result of this research is the EFORD application, which is feasible as an English learning medium for visual impairment based on the following assessments: the media expert scored it 95% (very worthy category), the subject matter expert 75% (worthy category), and the special needs teacher 83% (very worthy category). Upon demonstration, students indicated a positive response of ≥ 70% on each indicator. Therefore, the Android-based English for Disability (EFORD) application is highly feasible as an English learning medium, particularly for grammar and spoken English content, for students with visual impairment.

  4. Visual texture perception via graph-based semi-supervised learning

    Science.gov (United States)

    Zhang, Qin; Dong, Junyu; Zhong, Guoqiang

    2018-04-01

    Perceptual features, for example direction, contrast and repetitiveness, are important visual factors in how humans perceive a texture. However, quantifying the scale of these perceptual features requires psychophysical experiments, which demand a large amount of human labor and time. This paper focuses on the task of obtaining the perceptual feature scales of textures from a small number of textures whose perceptual scales were obtained through a rating psychophysical experiment (labeled textures) together with a mass of unlabeled textures. This is a scenario for which semi-supervised learning is naturally suited. It is meaningful for texture perception research and helpful for expanding perceptual texture databases. A graph-based semi-supervised learning method called random multi-graphs, RMG for short, is proposed to deal with this task. We evaluate different kinds of features, including LBP, Gabor, and unsupervised deep features extracted by a PCA-based deep network. The experimental results show that our method achieves satisfactory results no matter what kind of texture features are used.
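    For readers who want a concrete starting point, the sketch below shows a generic graph-based semi-supervised baseline for the scenario described above: a few textures carry perceptual-scale labels from a rating experiment, and labels propagate to the unlabeled textures over a feature-similarity graph. It uses scikit-learn's LabelSpreading rather than the authors' random multi-graphs (RMG) method, and the feature vectors are random placeholders standing in for LBP, Gabor, or deep descriptors.

```python
# A minimal graph-based semi-supervised baseline (not the authors' RMG method):
# propagate discretized perceptual-scale labels over a feature-similarity graph.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)

n_labeled, n_unlabeled, n_features = 20, 200, 64
X = rng.normal(size=(n_labeled + n_unlabeled, n_features))   # texture features

# Perceptual scale discretized into 5 levels; -1 marks unlabeled textures.
y = np.full(n_labeled + n_unlabeled, -1)
y[:n_labeled] = rng.integers(0, 5, size=n_labeled)

model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2)
model.fit(X, y)

predicted_scales = model.transduction_   # propagated scale level for every texture
print(predicted_scales[:10])
```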

  5. Color image definition evaluation method based on deep learning method

    Science.gov (United States)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and labeled images are then used to train a BP neural network, which finally performs the color image definition evaluation. The method is evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images are divided into three categories, each representing a blur level. Of these, 300 out of every 400 high-dimensional feature samples are used to train the VGG16 and BP neural network pipeline, and the remaining 100 samples are used for testing. The experimental results show that the method takes full advantage of the learning and representational capability of deep learning. In contrast to major existing image clarity evaluation methods, which design and extract features manually, the proposed method extracts image features automatically and achieves excellent image quality classification accuracy on the test data set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.
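    A minimal sketch of the pipeline described above, under stated assumptions: VGG16 (with its fully connected top) serves as a fixed extractor of 4,096-dimensional 'fc2' activations, and a small fully connected ("BP") classifier is trained on those features to predict one of three blur levels. The images and labels below are random placeholders; swapping in CSIQ images and real blur labels is left to the reader.

```python
# Sketch: VGG16 'fc2' features (4,096-dim) + a small MLP classifier for blur level.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.neural_network import MLPClassifier

base = VGG16(weights="imagenet", include_top=True)
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)

def features(images):
    """images: float array of shape (n, 224, 224, 3), RGB, values in [0, 255]."""
    return extractor.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data: random images and blur-level labels (0, 1, 2).
X_imgs = np.random.rand(30, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 3, size=30)

X = features(X_imgs)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500).fit(X, y)
print(clf.predict(X[:5]))
```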

  6. THE POTENTIAL AND LIMITATIONS OF VISUALISATION AS A METHOD IN LEARNING SOCIAL SCIENCES AND HUMANITIES

    Directory of Open Access Journals (Sweden)

    Tatyana T. Sidelnikova

    2016-06-01

    Introduction: the paper is concerned with the potential of, and barriers to, the application of visualisation as a method in learning social sciences and humanities. The use of visual aids has become an important resource in modern pedagogical theory and in the learning process, owing to the improvement of traditional pedagogical tools and new interpretations of well-known methods. Materials and Methods: observation, analysis of test results, examination session results, and questionnaire data were used in preparing the paper. Results: a useful visual aid in teaching political science is the smiley, a simplified graphical representation expressing the emotions of a speaker or a writer. Observation, the survey and the examination results indicate that the above visual solutions not only improve students' knowledge of the subjects, but also enhance intellectual activity and contribute to the formation of a methodical approach to learning, associative thinking and creativity. Discussion and Conclusion: visualisation is a sign-based presentation of the content, functions, structures and stages of a process or phenomenon through schematisation and associative and illustrative arrays. At the same time it is a way of transforming knowledge into a real visual product bearing the author's personal touch. Initially, students learn to reflect by drawing the essence of rather abstract concepts such as "parity", "power", "freedom", etc. Assignments at higher levels involve the use of associative arrays and free images. By doing this, students do not just draw, but on their own initiative work with colours and seek to schematise information, sometimes dressing their comments in lyrical form.

  7. Learning to associate orientation with color in early visual areas by associative decoded fMRI neurofeedback

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-01-01

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded functional magnetic resonance imaging (fMRI) neurofeedback, termed "DecNef" [9], we tested whether associative learning of color and orientation can be created in early visual areas. During three days' training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3 to 5 months after the training. These results suggest that long-term associative learning of the two different visual features such as color and orientation was created most likely in early visual areas. This newly extended technique that induces associative learning is called "A(ssociative)-DecNef" and may be used as an important tool for understanding and modifying brain functions, since associations are fundamental and ubiquitous functions in the brain. PMID:27374335

  8. Learning to Associate Orientation with Color in Early Visual Areas by Associative Decoded fMRI Neurofeedback.

    Science.gov (United States)

    Amano, Kaoru; Shibata, Kazuhisa; Kawato, Mitsuo; Sasaki, Yuka; Watanabe, Takeo

    2016-07-25

    Associative learning is an essential brain process where the contingency of different items increases after training. Associative learning has been found to occur in many brain regions [1-4]. However, there is no clear evidence that associative learning of visual features occurs in early visual areas, although a number of studies have indicated that learning of a single visual feature (perceptual learning) involves early visual areas [5-8]. Here, via decoded fMRI neurofeedback termed "DecNef" [9], we tested whether associative learning of orientation and color can be created in early visual areas. During 3 days of training, DecNef induced fMRI signal patterns that corresponded to a specific target color (red) mostly in early visual areas while a vertical achromatic grating was physically presented to participants. As a result, participants came to perceive "red" significantly more frequently than "green" in an achromatic vertical grating. This effect was also observed 3-5 months after the training. These results suggest that long-term associative learning of two different visual features such as orientation and color was created, most likely in early visual areas. This newly extended technique that induces associative learning is called "A-DecNef," and it may be used as an important tool for understanding and modifying brain functions because associations are fundamental and ubiquitous functions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations

  10. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg Layher

    2014-12-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory
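    The recruitment mechanism sketched in the two records above (top-down expectations gate learning, and a sufficiently large mismatch recruits a new category node) can be illustrated with a toy prototype learner. The code below is not the authors' compartmental cortical model; it is a minimal resonance-style sketch in which each node stores a normalized template, a close match refines that template, and a poor match creates a new (sub)category node. The threshold and learning rate are illustrative.

```python
# Toy novelty-gated category recruitment (illustrative only, not the paper's model).
import numpy as np

def learn_categories(inputs, match_threshold=0.85, lr=0.2):
    templates = []            # one template vector per (sub)category node
    assignments = []
    for x in inputs:
        x = x / (np.linalg.norm(x) + 1e-12)
        if templates:
            sims = [float(t @ x) for t in templates]
            best = int(np.argmax(sims))
        if not templates or sims[best] < match_threshold:
            templates.append(x.copy())          # mismatch: recruit a new node
            assignments.append(len(templates) - 1)
        else:
            t = templates[best]
            t += lr * (x - t)                   # match: refine the template
            templates[best] = t / np.linalg.norm(t)
            assignments.append(best)
    return templates, assignments

rng = np.random.default_rng(1)
means = np.eye(3, 16)                           # three well-separated mean directions
data = np.vstack([rng.normal(loc=m, scale=0.05, size=(50, 16)) for m in means])
templates, labels = learn_categories(data)
print(len(templates), "category nodes recruited")
```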

  11. Efficacy of Simulation-Based Learning of Electronics Using Visualization and Manipulation

    Science.gov (United States)

    Chen, Yu-Lung; Hong, Yu-Ru; Sung, Yao-Ting; Chang, Kuo-En

    2011-01-01

    Software for simulation-based learning of electronics was implemented to help learners understand complex and abstract concepts through observing external representations and exploring concept models. The software comprises modules for visualization and simulative manipulation. Differences in learning performance of using the learning software…

  12. Right Hemisphere Dominance in Visual Statistical Learning

    Science.gov (United States)

    Roser, Matthew E.; Fiser, Jozsef; Aslin, Richard N.; Gazzaniga, Michael S.

    2011-01-01

    Several studies report a right hemisphere advantage for visuospatial integration and a left hemisphere advantage for inferring conceptual knowledge from patterns of covariation. The present study examined hemispheric asymmetry in the implicit learning of new visual feature combinations. A split-brain patient and normal control participants viewed…

  13. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    Science.gov (United States)

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of

  14. Lateralization of visual learning in the honeybee.

    Science.gov (United States)

    Letzkus, Pinar; Boeddeker, Norbert; Wood, Jeff T; Zhang, Shao-Wu; Srinivasan, Mandyam V

    2008-02-23

    Lateralization is a well-described phenomenon in humans and other vertebrates and there are interesting parallels across a variety of different vertebrate species. However, there are only a few studies of lateralization in invertebrates. In a recent report, we showed lateralization of olfactory learning in the honeybee (Apis mellifera). Here, we investigate lateralization of another sensory modality, vision. By training honeybees on a modified version of a visual proboscis extension reflex task, we find that bees learn a colour stimulus better with their right eye.

  15. A diagram retrieval method with multi-label learning

    Science.gov (United States)

    Fu, Songping; Lu, Xiaoqing; Liu, Lu; Qu, Jingwei; Tang, Zhi

    2015-01-01

    In recent years, the retrieval of plane geometry figures (PGFs) has attracted increasing attention in the fields of mathematics education and computer science. However, the high cost of matching complex PGF features leads to the low efficiency of most retrieval systems. This paper proposes an indirect classification method based on multi-label learning, which improves retrieval efficiency by reducing the scope of the comparison operation from the whole database to small candidate groups. Label correlations among PGFs are taken into account for the multi-label classification task. The primitive feature selection for multi-label learning and the feature description of visual geometric elements are conducted individually to match similar PGFs. The experimental results show the competitive performance of the proposed method compared with existing PGF retrieval methods in terms of both time consumption and retrieval quality.
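    To make the indirect-classification idea concrete, the hedged sketch below trains a multi-label classifier over coarse figure labels, uses the predicted labels of a query to narrow the database to a candidate group, and only then runs the (expensive) detailed comparison within that group. Features, labels, and the similarity measure are synthetic placeholders, not the paper's actual PGF descriptors.

```python
# Indirect classification: multi-label prediction narrows the candidate group,
# detailed matching runs only inside that group (illustrative sketch).
import numpy as np
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_db, n_feat, n_labels = 500, 32, 4        # e.g. labels: circle, triangle, ...

X_db = rng.normal(size=(n_db, n_feat))                 # figure descriptors
Y_db = rng.integers(0, 2, size=(n_db, n_labels))       # multi-label annotations

clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X_db, Y_db)

def retrieve(x_query, top_k=5):
    labels = clf.predict(x_query.reshape(1, -1))[0]
    # Candidate group: database figures sharing at least one predicted label.
    mask = (Y_db[:, labels == 1].sum(axis=1) > 0) if labels.any() else np.ones(n_db, bool)
    candidates = np.flatnonzero(mask)
    sims = cosine_similarity(x_query.reshape(1, -1), X_db[candidates])[0]
    return candidates[np.argsort(sims)[::-1][:top_k]]

print(retrieve(rng.normal(size=n_feat)))
```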

  16. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    Science.gov (United States)

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have been rarely investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity (estimated from certainty ratings by a bias-free signal detection theoretic approach), in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
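    The abstract refers to a bias-free signal detection theoretic estimate of metacognitive sensitivity from certainty ratings. One standard index of this kind, shown below purely for illustration (the study may use a different estimator, such as meta-d'), is the area under the type-2 ROC: the probability that a randomly chosen correct trial carries higher confidence than a randomly chosen incorrect trial.

```python
# Illustrative metacognitive sensitivity index: area under the type-2 ROC.
import numpy as np

def type2_roc_area(correct, confidence):
    """correct: boolean array per trial; confidence: ordinal ratings per trial."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    conf_correct = confidence[correct]      # confidence on correct trials
    conf_error = confidence[~correct]       # confidence on incorrect trials
    # Probability that a random correct trial carries higher confidence than a
    # random incorrect trial (ties count as 0.5) -- i.e. the type-2 AUC.
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
correct = rng.random(200) < 0.75
confidence = np.where(correct, rng.integers(2, 5, 200), rng.integers(1, 4, 200))
print(type2_roc_area(correct, confidence))   # 0.5 = no metacognitive sensitivity
```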

  17. The role of visualization in learning from computer-based images

    Science.gov (United States)

    Piburn, Michael D.; Reynolds, Stephen J.; McAuliffe, Carla; Leedy, Debra E.; Birk, James P.; Johnson, Julia K.

    2005-05-01

    Among the sciences, the practice of geology is especially visual. To assess the role of spatial ability in learning geology, we designed an experiment using: (1) web-based versions of spatial visualization tests, (2) a geospatial test, and (3) multimedia instructional modules built around QuickTime Virtual Reality movies. Students in control and experimental sections were administered measures of spatial orientation and visualization, as well as a content-based geospatial examination. All subjects improved significantly in their scores on spatial visualization and the geospatial examination. There was no change in their scores on spatial orientation. A three-way analysis of variance, with the geospatial examination as the dependent variable, revealed significant main effects favoring the experimental group and a significant interaction between treatment and gender. These results demonstrate that spatial ability can be improved through instruction, that learning of geological content will improve as a result, and that differences in performance between the genders can be eliminated.

  18. Asymmetrical learning between a tactile and visual serial RT task

    NARCIS (Netherlands)

    Abrahamse, E.L.; van der Lubbe, Robert Henricus Johannes; Verwey, Willem B.

    2007-01-01

    According to many researchers, implicit learning in the serial reaction-time task is predominantly motor based and therefore should be independent of stimulus modality. Previous research on the task, however, has focused almost completely on the visual domain. Here we investigated sequence learning

  19. Visual-motor association learning in undergraduate students as a function of the autism-spectrum quotient.

    Science.gov (United States)

    Parkington, Karisa B; Clements, Rebecca J; Landry, Oriane; Chouinard, Philippe A

    2015-10-01

    We examined how performance on an associative learning task changes in a sample of undergraduate students as a function of their autism-spectrum quotient (AQ) score. The participants, without any prior knowledge of the Japanese language, learned to associate hiragana characters with button responses. In the novel condition, 50 participants learned visual-motor associations without any prior exposure to the stimuli's visual attributes. In the familiar condition, a different set of 50 participants completed a session in which they first became familiar with the stimuli's visual appearance prior to completing the visual-motor association learning task. Participants with higher AQ scores had a clear advantage in the novel condition; the amount of training required to reach learning criterion correlated negatively with AQ. In contrast, participants with lower AQ scores had a clear advantage in the familiar condition; the amount of training required to reach learning criterion correlated positively with AQ. An examination of how each of the AQ subscales correlated with these learning patterns revealed that abilities in visual discrimination (which is known to depend on the visual ventral-stream system) may have afforded an advantage in the novel condition for the participants with the higher AQ scores, whereas abilities in attention switching (which are known to require mechanisms in the prefrontal cortex) may have afforded an advantage in the familiar condition for the participants with the lower AQ scores.

  20. Learning about “wicked” problems in the Global South. Creating a film-based learning environment with “Visual Problem Appraisal”

    NARCIS (Netherlands)

    Witteveen, L.M.; Lie, R.

    2012-01-01

    The current complexity of sustainable development in the Global South calls for the design of learning strategies that can deal with this complexity. One such innovative learning strategy, called Visual Problem Appraisal (VPA), is highlighted in this article. The strategy is termed visual as it

  1. Building effective learning experiences around visualizations: NASA Eyes on the Solar System and Infiniscope

    Science.gov (United States)

    Tamer, A. J. J.; Anbar, A. D.; Elkins-Tanton, L. T.; Klug Boonstra, S.; Mead, C.; Swann, J. L.; Hunsley, D.

    2017-12-01

    Advances in scientific visualization and public access to data have transformed science outreach and communication, but have yet to realize their potential impacts in the realm of education. Computer-based learning is a clear bridge between visualization and education, but creating high-quality learning experiences that leverage existing visualizations requires close partnerships among scientists, technologists, and educators. The Infiniscope project is working to foster such partnerships in order to produce exploration-driven learning experiences around NASA SMD data and images, leveraging the principles of ETX (Education Through eXploration). The visualizations inspire curiosity, while the learning design promotes improved reasoning skills and increases understanding of space science concepts. Infiniscope includes both a web portal to host these digital learning experiences and a teaching network of educators using and modifying these experiences. Our initial efforts to enable student discovery through active exploration of the concepts associated with Small Worlds, Kepler's Laws, and Exoplanets led us to develop our own visualizations at Arizona State University. Other projects focused on Astrobiology and Mars geology led us to incorporate an immersive Virtual Field Trip platform into the Infiniscope portal in support of virtual exploration of scientifically significant locations. Looking to apply ETX design practices to other visualizations, our team at Arizona State partnered with the Jet Propulsion Lab to integrate the web-based version of NASA Eyes on the Eclipse within Smart Sparrow's digital learning platform in a proof-of-concept focused on the 2017 Eclipse. This goes a step beyond the standard features of "Eyes" by wrapping guided exploration, focused on a specific learning goal, into a standards-aligned lesson built around the visualization, as well as distributing it through Infiniscope and its digital teaching network. Experience from this

  2. Increase in MST activity correlates with visual motion learning: A functional MRI study of perceptual learning.

    Science.gov (United States)

    Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly

    2018-01-01

    Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  3. Attentional Modulation in Visual Cortex Is Modified during Perceptual Learning

    Science.gov (United States)

    Bartolucci, Marco; Smith, Andrew T.

    2011-01-01

    Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…

  4. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to remote control of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot operated remotely and visualized in 3D. This 3D system approach provides students with a more realistic feel for the 3D robotic laboratory even though they are working remotely. The 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  5. Audiovisual Association Learning in the Absence of Primary Visual Cortex

    OpenAIRE

    Seirafi, Mehrdad; De Weerd, Peter; Pegna, Alan J.; de Gelder, Beatrice

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit ...

  6. Learning effects of dynamic postural control by auditory biofeedback versus visual biofeedback training.

    Science.gov (United States)

    Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-10-01

    Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
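    For reference, the equalization step mentions Stevens' power law; its usual textbook form is shown below with standard symbols (not taken from the paper), where perceived magnitude grows as a power function of physical stimulus intensity.

```latex
% Stevens' power law: \psi = perceived magnitude, \varphi = physical intensity,
% a = modality-specific exponent, k = scaling constant.
\psi = k\,\varphi^{\,a}
```

    Equalizing the perceptual magnitudes of the visual and auditory feedback then amounts to choosing physical magnitudes for the two modalities whose power-law transforms match.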

  7. Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers

    Directory of Open Access Journals (Sweden)

    Ryan Henderson

    2017-09-01

    Picasso is a free open-source (Eclipse Public License) web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend). Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.
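    As a generic illustration of one of the shipped visualizations (not Picasso's own API, which the record does not detail), an occlusion map can be computed by sliding a gray patch over the input image and recording how much the model's probability for a chosen class drops. The `predict_fn` below is an assumed stand-in for any image classifier returning class probabilities.

```python
# Generic occlusion-map sketch (not Picasso's API).
import numpy as np

def occlusion_map(image, predict_fn, class_idx, patch=16, stride=8, fill=0.5):
    h, w, _ = image.shape
    base = predict_fn(image[None])[0, class_idx]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill
            heat[i, j] = base - predict_fn(occluded[None])[0, class_idx]
    return heat   # large values = regions the classifier relies on

# Example with a dummy "classifier" that just likes bright top-left corners.
def predict_fn(batch):
    score = batch[:, :16, :16, :].mean(axis=(1, 2, 3))
    return np.stack([1 - score, score], axis=1)

img = np.random.rand(64, 64, 3).astype("float32")
print(occlusion_map(img, predict_fn, class_idx=1).shape)
```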

  8. Can Visual Illusions Be Used to Facilitate Sport Skill Learning?

    Science.gov (United States)

    Cañal-Bruland, Rouwen; van der Meer, Yor; Moerman, Jelle

    2016-01-01

    Recently it has been reported that practicing putting with visual illusions that make the hole appear larger than it actually is leads to longer-lasting performance improvements. Interestingly, from a motor control and learning perspective, it may be possible to actually predict the opposite to occur, as facing a smaller appearing target should enforce performers to be more precise. To test this idea the authors invited participants to practice an aiming task (i.e., a marble-shooting task) with either a visual illusion that made the target appear larger or a visual illusion that made the target appear smaller. They applied a pre-post test design, included a control group training without any illusory effects and increased the amount of practice to 450 trials. In contrast to earlier reports, the results revealed that the group that trained with the visual illusion that made the target look smaller improved performance from pre- to posttest, whereas the group practicing with visual illusions that made the target appear larger did not show any improvements. Notably, also the control group improved from pre- to posttest. The authors conclude that more research is needed to improve our understanding of whether and how visual illusions may be useful training tools for sport skill learning.

  9. Using Visualization to Motivate Student Participation in Collaborative Online Learning Environments

    Science.gov (United States)

    Jin, Sung-Hee

    2017-01-01

    Online participation in collaborative online learning environments is instrumental in motivating students to learn and promoting their learning satisfaction, but there has been little research on the technical supports for motivating students' online participation. The purpose of this study was to develop a visualization tool to motivate learners…

  10. Images in Language: Metaphors and Metamorphoses. Visual Learning. Volume 1

    Science.gov (United States)

    Benedek, Andras, Ed.; Nyiri, Kristof, Ed.

    2011-01-01

    Learning and teaching are faced with radically new challenges in today's rapidly changing world and its deeply transformed communicational environment. We are living in an era of images. Contemporary visual technology--film, video, interactive digital media--is promoting but also demanding a new approach to education: the age of visual learning…

  11. Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism

    Science.gov (United States)

    Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter

    2011-01-01

    The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…

  12. Learning visual balance from large-scale datasets of aesthetically highly rated images

    Science.gov (United States)

    Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.

    2015-03-01

    The concept of visual balance is innate for humans and influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and is taught in schools of design, it is barely quantified. On the other hand, with the emergence of automatic/semi-automatic visual design for self-publishing, learning visual balance and modeling it computationally may elevate the aesthetics of such designs. In this paper, we present how the quest to understand visual balance inspired us to revisit one of the well-known theories in visual arts, the so-called theory of "visual rightness", elucidated by Arnheim. We frame Arnheim's hypothesis as a design mining problem with the goal of learning visual balance from the work of professionals. We collected a dataset of 120K images that are aesthetically highly rated, from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images and obtained the hotspots of the images. Our inferred Gaussians align with Arnheim's hotspots and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, as well as the Rule of Thirds in our dataset.
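    A hedged sketch of the core analysis described above: treat a saliency map as a spatial density, sample pixel coordinates in proportion to saliency, and fit a mixture of Gaussians whose component means act as the image's hotspots. The saliency map below is synthetic; in the study's setting it would come from a saliency model applied to each photograph.

```python
# Hotspots from a saliency map via a Gaussian mixture (illustrative sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

h, w = 120, 180
yy, xx = np.mgrid[0:h, 0:w]
# Synthetic saliency with two bright regions (placeholder for a real map).
saliency = (np.exp(-(((yy - 40) ** 2 + (xx - 60) ** 2) / 300.0))
            + np.exp(-(((yy - 80) ** 2 + (xx - 130) ** 2) / 300.0)))
saliency /= saliency.sum()

# Sample coordinates with probability proportional to saliency.
coords = np.column_stack([yy.ravel(), xx.ravel()])
idx = rng.choice(len(coords), size=5000, p=saliency.ravel())
samples = coords[idx].astype(float)

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
print("hotspots (row, col):\n", gmm.means_)
```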

  13. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    Science.gov (United States)

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors 0270-6474/15/359786-13$15.00/0.
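    For orientation, a minimal multivariate pattern analysis of the kind referred to above can be sketched as a cross-validated linear decoder: train it to discriminate trained versus novel stimuli from task-evoked patterns, then apply the same decoder to resting-state patterns to ask how often rest resembles the trained-stimulus state. The data below are synthetic placeholders, not fMRI; the study's actual pipeline differs in detail.

```python
# MVPA sketch: cross-validated decoding of trained vs. novel stimulus patterns,
# then application of the decoder to resting-state patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200

# Task-evoked patterns: half "trained stimulus" (label 0), half "novel" (label 1).
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.3 * labels[:, None]

clf = LinearSVC(max_iter=5000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")

# Apply the decoder trained on task data to resting-state patterns to count
# how often rest "looks like" the trained-stimulus state.
rest = rng.normal(size=(120, n_voxels))
clf.fit(patterns, labels)
print("fraction of rest frames classified as 'trained':",
      float((clf.predict(rest) == 0).mean()))
```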

  14. The effect of haptic guidance and visual feedback on learning a complex tennis task.

    Science.gov (United States)

    Marchal-Crespo, Laura; van Raai, Mark; Rauter, Georg; Wolf, Peter; Riener, Robert

    2013-11-01

    While haptic guidance can improve ongoing performance of a motor task, several studies have found that it ultimately impairs motor learning. However, some recent studies suggest that the haptic demonstration of optimal timing, rather than movement magnitude, enhances learning in subjects trained with haptic guidance. Timing of an action plays a crucial role in the proper accomplishment of many motor skills, such as hitting a moving object (discrete timing task) or learning a velocity profile (time-critical tracking task). The aim of the present study is to evaluate which feedback condition, visual or haptic guidance, optimizes learning of the discrete and continuous elements of a timing task. The experiment consisted of performing a fast tennis forehand stroke in a virtual environment. A tendon-based parallel robot connected to the end of a racket was used to apply haptic guidance during training. In two different experiments, we evaluated which feedback condition was more adequate for learning: (1) a time-dependent discrete task, learning to start a tennis stroke, and (2) a tracking task, learning to follow a velocity profile. The effect that task difficulty and the subject's initial skill level have on the selection of the optimal training condition was further evaluated. Results showed that the training condition that maximizes learning of the discrete time-dependent motor task depends on the subjects' initial skill level. Haptic guidance was especially suitable for less-skilled subjects and in especially difficult discrete tasks, while visual feedback seems to benefit more skilled subjects. Additionally, haptic guidance seemed to promote learning in a time-critical tracking task, while visual feedback tended to deteriorate performance independently of task difficulty and the subjects' initial skill level. Haptic guidance outperformed visual feedback, although additional studies are needed to further analyze the effect of other types of feedback visualization on

  15. Geometric Hypergraph Learning for Visual Tracking

    OpenAIRE

    Du, Dawei; Qi, Honggang; Wen, Longyin; Tian, Qi; Huang, Qingming; Lyu, Siwei

    2016-01-01

    Graph based representation is widely used in visual tracking field by finding correct correspondences between target parts in consecutive frames. However, most graph based trackers consider pairwise geometric relations between local parts. They do not make full use of the target's intrinsic structure, thereby making the representation easily disturbed by errors in pairwise affinities when large deformation and occlusion occur. In this paper, we propose a geometric hypergraph learning based tr...

  16. A Simulator to Enhance Teaching and Learning of Mining Methods ...

    African Journals Online (AJOL)

    Audio visual education that incorporates devices and materials which involve sight, sound, or both has become a sine qua non in recent times in the teaching and learning process. An automated physical model of mining methods aided with video instructions was designed and constructed by harnessing locally available ...

  17. Learning about “wicked” problems in the Global South. Creating a film-based learning environment with “Visual Problem Appraisal”

    OpenAIRE

    Loes Witteveen; Rico Lie

    2012-01-01

    The current complexity of sustainable development in the Global South calls for the design of learning strategies that can deal with this complexity. One such innovative learning strategy, called Visual Problem Appraisal (VPA), is highlighted in this article. The strategy is termed visual as it creates a learning environment that is film-based. VPA enhances the analysis of complex issues, and facilitates stakeholder dialogue and action planning. The strategy is used in workshops dealing with ...

  18. Pre-Service Visual Art Teachers' Perceptions of Assessment in Online Learning

    Science.gov (United States)

    Allen, Jeanne Maree; Wright, Suzie; Innes, Maureen

    2014-01-01

    This paper reports on a study conducted into how one cohort of Master of Teaching pre-service visual art teachers perceived their learning in a fully online learning environment. Located in an Australian urban university, this qualitative study provided insights into a number of areas associated with higher education online learning, including…

  19. Associative learning in baboons (Papio papio) and humans (Homo sapiens): species differences in learned attention to visual features.

    Science.gov (United States)

    Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J

    1998-10-01

    We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories which shared a common feature. Results showed that humans encoded both features of the initially learned category, but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with the 1996 ADIT connectionist model of Kruschke. ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is not zero. These empirical and modeling results suggest species differences in learned attention to visual features.

  20. Perceptual learning improves visual performance in juvenile amblyopia.

    Science.gov (United States)

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.

  1. Acquiring skill at medical image inspection: learning localized in early visual processes

    Science.gov (United States)

    Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.

    1997-04-01

    Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher level 'conceptual learning.' Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies, twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite contrast images. In both experiments learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.

  2. Teaching numerical methods with IPython notebooks and inquiry-based learning

    KAUST Repository

    Ketcheson, David I.

    2014-01-01

    A course in numerical methods should teach both the mathematical theory of numerical analysis and the craft of implementing numerical algorithms. The IPython notebook provides a single medium in which mathematics, explanations, executable code, and visualizations can be combined, and with which the student can interact in order to learn both the theory and the craft of numerical methods. The use of notebooks also lends itself naturally to inquiry-based learning methods. I discuss the motivation and practice of teaching a course based on the use of IPython notebooks and inquiry-based learning, including some specific practical aspects. The discussion is based on my experience teaching a Masters-level course in numerical analysis at King Abdullah University of Science and Technology (KAUST), but is intended to be useful for those who teach at other levels or in industry.
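
    The sketch below gives a flavour of the kind of notebook cell such a course might contain: a short, executable experiment in which students compare the composite trapezoidal rule against a known integral and observe its second-order convergence. It is an illustrative example only, not material from the actual KAUST course.

```python
# Notebook-style cell (illustrative only): empirically "discover" the second-order
# accuracy of the composite trapezoidal rule on the integral of sin(x) over [0, pi].
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

exact = 2.0  # integral of sin(x) over [0, pi]
for n in [4, 8, 16, 32, 64]:
    err = abs(trapezoid(np.sin, 0.0, np.pi, n) - exact)
    print(f"n = {n:3d}   error = {err:.2e}")
# Halving h should reduce the error by roughly a factor of 4 (second-order convergence).
```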

  3. Analysis of Cine-Psychometric Visual Memory Data by the Tucker Generalized Learning Curve Method: Final Report.

    Science.gov (United States)

    Reid, J. C.; Seibert, Warren F.

    The analysis of previously obtained data concerning short-term visual memory and cognition by a method suggested by Tucker is proposed. Although interesting individual differences undoubtedly exist in people's ability and capacity to process short-term visual information, studies have not generally examined these differences. In fact, conventional…

  4. Multiple instance learning tracking method with local sparse representation

    KAUST Repository

    Xie, Chengjun

    2013-10-01

    When objects undergo large pose change, illumination variation or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and even fail in tracking them. To address this issue, in this study, the authors propose an online algorithm by combining multiple instance learning (MIL) and local sparse representation for tracking an object in a video system. The key idea in our method is to model the appearance of an object by local sparse codes that can be formed as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation can be helpful in overcoming partial occlusion in object tracking. Then MIL learns the sparse codes by a classifier to discriminate the target from the background. Finally, results from the trained classifier are input into a particle filter framework to sequentially estimate the target state over time in visual tracking. In addition, to decrease the visual drift because of the accumulative errors when updating the dictionary and classifier, a two-step object tracking method combining a static MIL classifier with a dynamical MIL classifier is proposed. Experiments on some publicly available benchmarks of video sequences show that our proposed tracker is more robust and effective than others. © The Institution of Engineering and Technology 2013.
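
    The hedged sketch below illustrates only the core idea of representing local patches by sparse codes over a learned overcomplete dictionary and feeding those codes to a classifier. It uses synthetic patches, scikit-learn's MiniBatchDictionaryLearning, and an ordinary logistic regression as a deliberate simplification of the paper's multiple-instance-learning classifier and particle-filter tracker.

```python
# Rough sketch of "local sparse codes as features for a classifier". Synthetic target and
# background patches stand in for video data; a plain logistic regression replaces the
# paper's MIL classifier (a deliberate simplification).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
patch_dim = 8 * 8
target_patches = rng.normal(0.5, 0.2, size=(200, patch_dim))      # placeholder "object" patches
background_patches = rng.normal(0.0, 0.2, size=(200, patch_dim))  # placeholder "background" patches
X = np.vstack([target_patches, background_patches])
y = np.array([1] * 200 + [0] * 200)

# Learn an overcomplete dictionary and encode every patch as a sparse code.
dico = MiniBatchDictionaryLearning(n_components=96, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)

clf = LogisticRegression(max_iter=1000).fit(codes, y)
print("training accuracy on sparse codes:", clf.score(codes, y))
# In a tracker, per-patch scores such as clf.predict_proba(...) would weight particles
# in a particle filter that estimates the target state frame by frame.
```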

  5. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    Science.gov (United States)

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline " consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.

  6. Lithium-Ion Battery Capacity Estimation: A Method Based on Visual Cognition

    Directory of Open Access Journals (Sweden)

    Yujie Cheng

    2017-01-01

    This study introduces visual cognition into Lithium-ion battery capacity estimation. The proposed method consists of four steps. First, the acquired charging current or discharge voltage data in each cycle are arranged to form a two-dimensional image. Second, the generated image is decomposed into multiple spatial-frequency channels with a set of orientation subbands by using non-subsampled contourlet transform (NSCT). NSCT imitates the multichannel characteristic of the human visual system (HVS) that provides multiresolution, localization, directionality, and shift invariance. Third, several time-domain indicators of the NSCT coefficients are extracted to form an initial high-dimensional feature vector. Similarly, inspired by the HVS manifold sensing characteristic, the Laplacian eigenmap manifold learning method, which is considered to reveal the evolutionary law of battery performance degradation within a low-dimensional intrinsic manifold, is used to further obtain a low-dimensional feature vector. Finally, battery capacity degradation is estimated using the geodesic distance on the manifold between the initial and the most recent features. Verification experiments were conducted using data obtained under different operating and aging conditions. Results suggest that the proposed visual cognition approach provides a highly accurate means of estimating battery capacity and thus offers a promising method derived from the emerging field of cognitive computing.
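
    Since the non-subsampled contourlet transform is not available in standard Python libraries, the sketch below assumes the per-cycle feature vectors have already been extracted (random data stand in for them) and illustrates only the manifold-learning step: scikit-learn's SpectralEmbedding plays the role of the Laplacian eigenmap, and straight-line distance in the embedding is a crude stand-in for the geodesic distance used by the authors.

```python
# Sketch of the manifold-learning step, under simplifying assumptions: NSCT-based feature
# vectors are assumed precomputed (random data here), SpectralEmbedding provides the
# Laplacian eigenmap, and Euclidean distance in the embedding approximates the geodesic
# distance on the manifold.
import numpy as np
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(2)
n_cycles, n_features = 120, 30
drift = np.linspace(0, 1, n_cycles)[:, None]               # fake degradation trend
X = rng.normal(size=(n_cycles, n_features)) * 0.1 + drift   # per-cycle feature vectors

embedding = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)

# Distance of each cycle from the first (fresh-cell) cycle in the low-dimensional space,
# used as a relative indicator of capacity degradation.
dist = np.linalg.norm(embedding - embedding[0], axis=1)
print("relative degradation indicator for the last 5 cycles:", np.round(dist[-5:], 3))
```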

  7. Development of a visual tool to analyze interactions in forums in an e-learning environment

    Directory of Open Access Journals (Sweden)

    Cláudio Filipe Tereso

    2016-12-01

    This article presents VAFAE – Forum Access Visualization on a Distance Learning Environment, a web tool that visually maps Universidade Aberta’s (UAb) students’ interaction with a course available on the e-learning platform. Raw data is extracted from the log files that are then transformed to obtain the necessary format. Next, different visualization techniques are applied with the aim of improving and streamlining the underlying information. In a more specific way, VAFAE aims at helping teachers to better understand the level and quality of the interaction of the students with the modules of the learning units in UAb’s distance learning environment.
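
    A minimal sketch of the extract-transform-visualize pipeline a tool such as VAFAE performs is shown below; the log format, field names and counts are invented for illustration and do not reflect UAb's actual platform logs.

```python
# Minimal extract-transform-visualize sketch for forum access logs. The log format below
# is invented for illustration; real e-learning platform logs differ.
import io
import pandas as pd
import matplotlib.pyplot as plt

raw_log = io.StringIO(
    "2016-10-01 09:12,student01,forum_view,unit1\n"
    "2016-10-01 09:15,student02,forum_post,unit1\n"
    "2016-10-02 10:03,student01,forum_view,unit2\n"
    "2016-10-02 11:40,student03,forum_view,unit1\n"
)
df = pd.read_csv(raw_log, names=["timestamp", "student", "action", "unit"],
                 parse_dates=["timestamp"])

# Transform: count forum interactions per learning unit and per student.
per_unit = df.groupby("unit").size()
per_student = df.groupby("student").size()

# Visualize: a simple bar chart of accesses per learning unit.
per_unit.plot(kind="bar", title="Forum accesses per learning unit")
plt.tight_layout()
plt.savefig("forum_accesses.png")
print(per_student)
```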

  8. A mixed-methods exploration of an environment for learning computer programming

    Directory of Open Access Journals (Sweden)

    Richard Mather

    2015-08-01

    A mixed-methods approach is evaluated for exploring collaborative behaviour, acceptance and progress surrounding an interactive technology for learning computer programming. A review of literature reveals a compelling case for using mixed-methods approaches when evaluating technology-enhanced-learning environments. Here, ethnographic approaches used for the requirements engineering of computing systems are combined with questionnaire-based feedback and skill tests. These are applied to the ‘Ceebot’ animated 3D learning environment. Video analysis with workplace observation allowed detailed inspection of problem solving and tacit behaviours. Questionnaires and knowledge tests provided broad sample coverage with insights into subject understanding and overall response to the learning environment. Although relatively low scores in programming tests seemingly contradicted the perception that Ceebot had enhanced understanding of programming, this perception was nevertheless found to be correlated with greater test performance. Video analysis corroborated findings that the learning environment and Ceebot animations were engaging and encouraged constructive collaborative behaviours. Ethnographic observations clearly captured Ceebot's value in providing visual cues for problem-solving discussions and for progress through sharing discoveries. Notably, performance in tests was most highly correlated with greater programming practice (p≤0.01). It was apparent that although students had appropriated technology for collaborative working and benefitted from visual and tacit cues provided by Ceebot, they had not necessarily deeply learned the lessons intended. The key value of the ‘mixed-methods’ approach was that ethnographic observations captured the authenticity of learning behaviours, and thereby strengthened confidence in the interpretation of questionnaire and test findings.

  9. Auditory and Visual Working Memory Functioning in College Students with Attention-Deficit/Hyperactivity Disorder and/or Learning Disabilities.

    Science.gov (United States)

    Liebel, Spencer W; Nelson, Jason M

    2017-12-01

    We investigated the auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprise four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprise a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity or clinical control groups. © The Author 2017. Published by Oxford University

  10. Visual Supports for the Learning Disabled: A Handbook for Educators

    Science.gov (United States)

    Sells, Leighan

    2013-01-01

    A large percent of the population is affected by learning disabilities, which significantly impacts individuals and families. Much research has been done to identify effective ways to best help the students with learning disabilities. One of the more promising strategies is the use of visual supports to enhance these students' understanding…

  11. A qualitative inquiry into the effects of visualization on high school chemistry students' learning process of molecular structure

    Science.gov (United States)

    Deratzou, Susan

    This research studies the process of high school chemistry students visualizing chemical structures and its role in learning chemical bonding and molecular structure. Minimal research exists with high school chemistry students and more research is necessary (Gabel & Sherwood, 1980; Seddon & Moore, 1986; Seddon, Tariq, & Dos Santos Veiga, 1984). Using visualization tests (Ekstrom, French, Harman, & Dermen, 1990a), a learning style inventory (Brown & Cooper, 1999), and observations through a case study design, this study found visual learners performed better, but needed more practice and training. Statistically, all five pre- and post-test visualization test comparisons were highly significant in the two-tailed t-test (p > .01). The research findings are: (1) Students who tested high in the Visual (Language and/or Numerical) and Tactile Learning Styles (and Social Learning) had an advantage. Students who learned the chemistry concepts more effectively were better at visualizing structures and using molecular models to enhance their knowledge. (2) Students showed improvement in learning after visualization practice. Training in visualization would improve students' visualization abilities and provide them with a way to think about these concepts. (3) Conceptualization of concepts indicated that visualizing ability was critical and that it could be acquired. Support for this finding was provided by pre- and post-Visualization Test data with a highly significant t-test. (4) Various molecular animation programs and websites were found to be effective. (5) Visualization and modeling of structures encompassed both two- and three-dimensional space. The Visualization Test findings suggested that the students performed better with basic rotation of structures as compared to two- and three-dimensional objects. (6) Data from observations suggest that teaching style was an important factor in student learning of molecular structure. (7) Students did learn the chemistry concepts

  12. Communicating Science Concepts to Individuals with Visual Impairments Using Short Learning Modules

    Science.gov (United States)

    Stender, Anthony S.; Newell, Ryan; Villarreal, Eduardo; Swearer, Dayne F.; Bianco, Elisabeth; Ringe, Emilie

    2016-01-01

    Of the 6.7 million individuals in the United States who are visually impaired, 63% are unemployed, and 59% have not attained an education beyond a high school diploma. Providing a basic science education to children and adults with visual disabilities can be challenging because most scientific learning relies on visual demonstrations. Creating…

  13. Learning Using Dynamic and Static Visualizations: Students' Comprehension, Prior Knowledge and Conceptual Status of a Biotechnological Method

    Science.gov (United States)

    Yarden, Hagit; Yarden, Anat

    2010-05-01

    The importance of biotechnology education at the high-school level has been recognized in a number of international curriculum frameworks around the world. One of the most problematic issues in learning biotechnology has been found to be the biotechnological methods involved. Here, we examine the unique contribution of an animation of the polymerase chain reaction (PCR) in promoting conceptual learning of the biotechnological method among 12th-grade biology majors. All of the students learned about the PCR using still images ( n = 83) or the animation ( n = 90). A significant advantage to the animation treatment was identified following learning. Students’ prior content knowledge was found to be an important factor for students who learned PCR using still images, serving as an obstacle to learning the PCR method in the case of low prior knowledge. Through analysing students’ discourse, using the framework of the conceptual status analysis, we found that students who learned about PCR using still images faced difficulties in understanding some mechanistic aspects of the method. On the other hand, using the animation gave the students an advantage in understanding those aspects.

  14. Accurate or assumed: visual learning in children with ASD.

    Science.gov (United States)

    Trembath, David; Vivanti, Giacomo; Iacono, Teresa; Dissanayake, Cheryl

    2015-10-01

    Children with autism spectrum disorder (ASD) are often described as visual learners. We tested this assumption in an experiment in which 25 children with ASD, 19 children with global developmental delay (GDD), and 17 typically developing (TD) children were presented a series of videos via an eye tracker in which an actor instructed them to manipulate objects in speech-only and speech + pictures conditions. We found no group differences in visual attention to the stimuli. The GDD and TD groups performed better when pictures were available, whereas the ASD group did not. Performance of children with ASD and GDD was positively correlated with visual attention and receptive language. We found no evidence of a prominent visual learning style in the ASD group.

  15. Improvement of The Ability of Junior High School Students Thinking Through Visual Learning Assisted GeoGebra Tutorial

    Science.gov (United States)

    Elvi, M.; Nurjanah

    2017-02-01

    This research addresses the lack of visual thinking ability, a basic ability that students must have when learning geometry. The purpose of this research is to investigate and elucidate: 1) the enhancement of the visual thinking ability of students who receive learning assisted by a GeoGebra tutorial; and 2) the differences in visual thinking gains between students who receive GeoGebra-assisted learning and students who receive regular learning, across KAM levels (high, medium, and low). The population of this research is grade VII students in a Bandung junior high school. The instruments used to collect data in this study consisted of a test and an observation sheet. The data obtained were analyzed using tests of mean differences, namely the t-test and one-way and two-way ANOVA. The results showed that: 1) the attainment and enhancement of the visual thinking ability of students who received learning assisted by the GeoGebra tutorial were better than those of students who received regular learning; and 2) there were differences in the visual thinking gains of students who received the GeoGebra tutorial-assisted learning model and students who received regular learning across KAM levels (high, medium, and low).

  16. Looking to Learn: The Effects of Visual Guidance on Observational Learning of the Golf Swing.

    Science.gov (United States)

    D'Innocenzo, Giorgia; Gonzalez, Claudia C; Williams, A Mark; Bishop, Daniel T

    2016-01-01

    Skilled performers exhibit more efficient gaze patterns than less-skilled counterparts do and they look more frequently at task-relevant regions than at superfluous ones. We examine whether we may guide novices' gaze towards relevant regions during action observation in order to facilitate their learning of a complex motor skill. In a Pre-test-Post-test examination of changes in their execution of the full golf swing, 21 novices viewed one of three videos at intervention: i) a skilled golfer performing 10 swings (Free Viewing, FV); ii) the same video with transient colour cues superimposed to highlight key features of the setup (Visual Guidance; VG); iii) or a History of Golf video (Control). Participants in the visual guidance group spent significantly more time looking at cued areas than did the other two groups, a phenomenon that persisted after the cues had been removed. Moreover, the visual guidance group improved their swing execution at Post-test and on a Retention test one week later. Our results suggest that visual guidance to cued areas during observational learning of complex motor skills may accelerate acquisition of the skill.

  17. Brain signal complexity rises with repetition suppression in visual learning.

    Science.gov (United States)

    Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah

    2016-06-21

    Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity have yet to be demonstrated within the same measurements. We hypothesized that RS and brain signal complexity increase occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual
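
    The sketch below shows a textbook-style approximation of the multiscale entropy (MSE) measure mentioned above: the signal is coarse-grained at several time scales and sample entropy is computed at each scale. It operates on a synthetic signal and is not the authors' EEG pipeline.

```python
# Simplified multiscale entropy (MSE) sketch: coarse-grain the signal at several scales and
# compute an approximate sample entropy at each scale. Illustrative only.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Approximate sample entropy of a 1-D signal."""
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def match_count(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length)])
        count = 0
        for t in templates:
            d = np.max(np.abs(templates - t), axis=1)
            count += np.sum(d <= r) - 1          # exclude the self-match
        return count

    b, a = match_count(m), match_count(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    mse = []
    for tau in scales:
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n], float).reshape(-1, tau).mean(axis=1)  # coarse-graining
        mse.append(sample_entropy(coarse))
    return mse

rng = np.random.default_rng(3)
signal = rng.normal(size=1000)   # placeholder for a single-electrode EEG epoch
print([round(v, 2) for v in multiscale_entropy(signal)])
```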

  18. Audiovisual Blindsight: Audiovisual learning in the absence of primary visual cortex

    OpenAIRE

    Mehrdad Seirafi; Peter De Weerd; Alan J Pegna; Beatrice de Gelder

    2016-01-01

    Learning audiovisual associations is mediated by the primary cortical areas; however, recent animal studies suggest that such learning can take place even in the absence of the primary visual cortex. Other studies have demonstrated the involvement of extra-geniculate pathways and especially the superior colliculus (SC) in audiovisual association learning. Here, we investigated such learning in a rare human patient with complete loss of the bilateral striate cortex. We carried out an implicit...

  19. Perceptual Learning in Children With Infantile Nystagmus: Effects on Visual Performance.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen

    2016-08-01

    To evaluate whether computerized training with a crowded or uncrowded letter-discrimination task reduces visual impairment (VI) in 6- to 11-year-old children with infantile nystagmus (IN) who suffer from increased foveal crowding, reduced visual acuity, and reduced stereopsis. Thirty-six children with IN were included. Eighteen had idiopathic IN and 18 had oculocutaneous albinism. These children were divided in two training groups matched on age and diagnosis: a crowded training group (n = 18) and an uncrowded training group (n = 18). Training occurred two times per week during 5 weeks (3500 trials per training). Eleven age-matched children with normal vision were included to assess baseline differences in task performance and test-retest learning. Main outcome measures were task-specific performance, distance and near visual acuity (DVA and NVA), intensity and extent of (foveal) crowding at 5 m and 40 cm, and stereopsis. Training resulted in task-specific improvements. Both training groups also showed uncrowded and crowded DVA improvements (0.10 ± 0.02 and 0.11 ± 0.02 logMAR) and improved stereopsis (670 ± 249″). Crowded NVA improved only in the crowded training group (0.15 ± 0.02 logMAR), which was also the only group showing a reduction in near crowding intensity (0.08 ± 0.03 logMAR). Effects were not due to test-retest learning. Perceptual learning with or without distractors reduces the extent of crowding and improves visual acuity in children with IN. Training with distractors improves near vision more than training with single optotypes. Perceptual learning also transfers to DVA and NVA under uncrowded and crowded conditions and even stereopsis. Learning curves indicated that improvements may be larger after longer training.

  20. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    Science.gov (United States)

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  1. A new data-mining method to search for behavioral properties that induce alignment and their involvement in social learning in medaka fish (Oryzias latipes).

    Directory of Open Access Journals (Sweden)

    Takashi Ochiai

    BACKGROUND: Coordinated movement in social animal groups via social learning facilitates foraging activity. Few studies have examined the behavioral cause-and-effect between group members that mediates this social learning. METHODOLOGY/PRINCIPAL FINDINGS: We first established a behavioral paradigm for visual food learning using medaka fish and demonstrated that a single fish can learn to associate a visual cue with a food reward. Grouped medaka fish (6 fish) learn to respond to the visual cue more rapidly than a single fish, indicating that medaka fish undergo social learning. We then established a data-mining method based on Kullback-Leibler divergence (KLD) to search for candidate behaviors that induce alignment and found that high-speed movement of a focal fish tended to induce alignment of the other members locally and transiently under free-swimming conditions without presentation of a visual cue. The high-speed movement of the informed and trained fish during visual cue presentation appeared to facilitate the alignment of naïve fish in response to some visual cues, thereby mediating social learning. Compared with naïve fish, the informed fish had a higher tendency to induce alignment of other naïve fish under free-swimming conditions without visual cue presentation, suggesting the involvement of individual recognition in social learning. CONCLUSIONS/SIGNIFICANCE: Behavioral cause-and-effect studies of the high-speed movement between fish group members will contribute to our understanding of the dynamics of social behaviors. The data-mining method used in the present study is a powerful method to search for candidate factors associated with inter-individual interactions using a dataset for time-series coordinate data of individuals.
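
    The toy sketch below illustrates the kind of Kullback-Leibler divergence (KLD) screening described above: the distribution of a candidate behavioural feature (here, swimming speed) in the moments preceding alignment events is compared against its overall distribution, with larger divergences flagging candidate alignment-inducing behaviours. The data, bins and the coupling between speed and alignment are all invented.

```python
# Toy KLD screening of a candidate behavioural feature. Data are synthetic: alignment is
# made slightly more likely after high-speed movement, so the divergence should be nonzero.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(4)
speed = rng.gamma(shape=2.0, scale=1.0, size=5000)              # focal fish speed over time
alignment = rng.random(5000) < 0.02 * speed                     # alignment more likely after bursts

bins = np.linspace(0, speed.max(), 30)
p, _ = np.histogram(speed[alignment], bins=bins, density=True)  # speed just before alignment
q, _ = np.histogram(speed, bins=bins, density=True)             # baseline speed distribution
p, q = p + 1e-9, q + 1e-9                                       # avoid zero bins

kld = entropy(p, q)   # KL(p || q); larger values flag candidate alignment-inducing behaviours
print(f"KLD between pre-alignment and baseline speed distributions: {kld:.3f}")
```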

  2. Sparse representation, modeling and learning in visual recognition theory, algorithms and applications

    CERN Document Server

    Cheng, Hong

    2015-01-01

    This unique text/reference presents a comprehensive review of the state of the art in sparse representations, modeling and learning. The book examines both the theoretical foundations and details of algorithm implementation, highlighting the practical application of compressed sensing research in visual recognition and computer vision. Topics and features: provides a thorough introduction to the fundamentals of sparse representation, modeling and learning, and the application of these techniques in visual recognition; describes sparse recovery approaches, robust and efficient sparse represen

  3. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    DEFF Research Database (Denmark)

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna

    2013-01-01

    machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods
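
    As a hedged illustration of the simplest of the four approaches compared above, the sketch below runs a local maxima search (approximated with scipy.signal.find_peaks) on a synthetic spectrum and scores it against a hand-made gold standard using precision and recall, in the spirit of the paper's evaluation; the signal, parameters and tolerances are invented.

```python
# Local maxima search on a synthetic spectrum, scored against a hand-labelled gold standard.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 2000)
gold_positions = np.array([2.0, 4.5, 7.3])                      # "expert-annotated" peak locations
signal = sum(np.exp(-((x - p) ** 2) / 0.01) for p in gold_positions)
signal += 0.05 * rng.normal(size=x.size)                        # measurement noise

peak_idx, _ = find_peaks(signal, height=0.3, prominence=0.2)
detected = x[peak_idx]

tolerance = 0.1
tp = sum(np.any(np.abs(detected - g) <= tolerance) for g in gold_positions)
precision = tp / max(len(detected), 1)
recall = tp / len(gold_positions)
print(f"detected peaks at {np.round(detected, 2)}; precision={precision:.2f}, recall={recall:.2f}")
```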

  4. Behind Mathematical Learning Disabilities: What about Visual Perception and Motor Skills?

    Science.gov (United States)

    Pieters, Stefanie; Desoete, Annemie; Roeyers, Herbert; Vanderswalmen, Ruth; Van Waelvelde, Hilde

    2012-01-01

    In a sample of 39 children with mathematical learning disabilities (MLD) and 106 typically developing controls belonging to three control groups of three different ages, we found that visual perception, motor skills and visual-motor integration explained a substantial proportion of the variance in either number fact retrieval or procedural…

  5. Methodological Strategies for Studying the Process of Learning, Memory and Visual Literacy.

    Science.gov (United States)

    Randhawa, Bikkar S.; Hunt, Dennis

    An attempt is made to discuss current models of information processing, learning, and development, thereby suggesting adequate methodological strategies for research in visual literacy. It is maintained that development is a cumulative process of learning, and that learning and memory are the result of new knowledge, sensations, etc. over a short…

  6. The Role of Visual Learning in Improving Students' High-Order Thinking Skills

    Science.gov (United States)

    Raiyn, Jamal

    2016-01-01

    Various concepts have been introduced to improve students' analytical thinking skills based on problem based learning (PBL). This paper introduces a new concept to increase student's analytical thinking skills based on a visual learning strategy. Such a strategy has three fundamental components: a teacher, a student, and a learning process. The…

  7. Learning about “wicked” problems in the Global South. Creating a film-based learning environment with “Visual Problem Appraisal”

    Directory of Open Access Journals (Sweden)

    Loes Witteveen

    2012-03-01

    The current complexity of sustainable development in the Global South calls for the design of learning strategies that can deal with this complexity. One such innovative learning strategy, called Visual Problem Appraisal (VPA), is highlighted in this article. The strategy is termed visual as it creates a learning environment that is film-based. VPA enhances the analysis of complex issues, and facilitates stakeholder dialogue and action planning. The strategy is used in workshops dealing with problem analysis and policy design, and involves the participants “meeting” stakeholders through filmed narratives. The article demonstrates the value of using film in multi stakeholder learning environments addressing issues concerning sustainable development.

  8. Exploring Middle School Students' Representational Competence in Science: Development and Verification of a Framework for Learning with Visual Representations

    Science.gov (United States)

    Tippett, Christine Diane

    Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence---an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations---is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning from visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy because it emphasizes distinct aspects of learning with representations that can be addressed though explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas. The third project was a mixed-methods

  9. The 50s cliff: a decline in perceptuo-motor learning, not a deficit in visual motion perception.

    Science.gov (United States)

    Ren, Jie; Huang, Shaochen; Zhang, Jiancheng; Zhu, Qin; Wilson, Andrew D; Snapp-Childs, Winona; Bingham, Geoffrey P

    2015-01-01

    Previously, we measured perceptuo-motor learning rates across the lifespan and found a sudden drop in learning rates between ages 50 and 60, called the "50s cliff." The task was a unimanual visual rhythmic coordination task in which participants used a joystick to oscillate one dot in a display in coordination with another dot oscillated by a computer. Participants learned to produce a coordination with a 90° relative phase relation between the dots. Learning rates for participants over 60 were half those of younger participants. Given existing evidence for visual motion perception deficits in people over 60 and the role of visual motion perception in the coordination task, it remained unclear whether the 50s cliff reflected onset of this deficit or a genuine decline in perceptuo-motor learning. The current work addressed this question. Two groups of 12 participants in each of four age ranges (20s, 50s, 60s, 70s) learned to perform a bimanual coordination of 90° relative phase. One group trained with only haptic information and the other group with both haptic and visual information about relative phase. Both groups were tested in both information conditions at baseline and post-test. If the 50s cliff was caused by an age dependent deficit in visual motion perception, then older participants in the visual group should have exhibited less learning than those in the haptic group, which should not exhibit the 50s cliff, and older participants in both groups should have performed less well when tested with visual information. Neither of these expectations was confirmed by the results, so we concluded that the 50s cliff reflects a genuine decline in perceptuo-motor learning with aging, not the onset of a deficit in visual motion perception.
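
    The sketch below illustrates, with simulated signals rather than study data, how a 90° relative phase between two oscillating movements can be quantified: each signal's instantaneous phase is estimated with the Hilbert transform, and the wrapped phase difference is compared with the target.

```python
# Quantifying relative phase between two oscillations with the Hilbert transform.
# Signals are simulated, not data from the study.
import numpy as np
from scipy.signal import hilbert

fs, f = 100.0, 1.0                      # sample rate (Hz), movement frequency (Hz)
t = np.arange(0, 20, 1 / fs)
stimulus = np.sin(2 * np.pi * f * t)                                   # computer-driven dot
jitter = 0.2 * np.random.default_rng(6).normal(size=t.size)
response = np.sin(2 * np.pi * f * t - np.pi / 2 + 0.1) + jitter        # participant near 90 deg lag

phase_diff = np.angle(hilbert(stimulus)) - np.angle(hilbert(response))
phase_diff = np.rad2deg((phase_diff + np.pi) % (2 * np.pi) - np.pi)    # wrap to (-180, 180]

target = 90.0
error = np.abs(phase_diff - target)
print(f"mean relative phase: {phase_diff.mean():.1f} deg, "
      f"mean absolute error from 90 deg: {error.mean():.1f} deg")
```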

  10. A Review on Different Virtual Learning Methods in Pharmacy Education

    Directory of Open Access Journals (Sweden)

    Amin Noori

    2015-10-01

    Virtual learning is a type of electronic learning system based on the web. It models traditional in-person learning by providing virtual access to classes, tests, homework, feedback, etc. Students and teachers can interact through chat rooms or other virtual environments. Web 2.0 services are usually used for this method. Internet audio-visual tools, multimedia systems, CD-ROMs, videotapes, animation, video conferencing, and interactive phones can all be used to deliver data to the students. E-learning can occur in or out of the classroom. It is time saving with lower costs compared to traditional methods. It can be self-paced, it is suitable for distance learning and it is flexible. It is a great learning style for continuing education and students can independently solve their problems, but it has its disadvantages too. Therefore, blended learning (a combination of conventional and virtual education) is being used worldwide and has improved the knowledge, skills and confidence of pharmacy students. The aim of this study is to review, discuss and introduce different methods of virtual learning for pharmacy students. Google Scholar, PubMed and Scopus databases were searched for topics related to virtual, electronic and blended learning, and different styles like computer simulators, virtual practice environment technology, virtual mentor, virtual patient, 3D simulators, etc. are discussed in this article. Our review of different studies in these areas shows that students are highly satisfied with virtual and blended types of learning.

  11. Instructional Television: Visual Production Techniques and Learning Comprehension.

    Science.gov (United States)

    Silbergleid, Michael Ian

    The purpose of this study was to determine if increasing levels of complexity in visual production techniques would increase the viewer's learning comprehension and the degree of likeness expressed for a college level instructional television program. A total of 119 mass communications students at the University of Alabama participated in the…

  12. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    Science.gov (United States)

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Methods and tools for big data visualization

    OpenAIRE

    Zubova, Jelena; Kurasova, Olga

    2015-01-01

    In this paper, methods and tools for big data visualization have been investigated. Challenges faced by the big data analysis and visualization have been identified. Technologies for big data analysis have been discussed. A review of methods and tools for big data visualization has been done. Functionalities of the tools have been demonstrated by examples in order to highlight their advantages and disadvantages.

  14. Supporting Multimedia Learning with Visual Signalling and Animated Pedagogical Agent: Moderating Effects of Prior Knowledge

    Science.gov (United States)

    Johnson, A. M.; Ozogul, G.; Reisslein, M.

    2015-01-01

    An experiment examined the effects of visual signalling to relevant information in multiple external representations and the visual presence of an animated pedagogical agent (APA). Students learned electric circuit analysis using a computer-based learning environment that included Cartesian graphs, equations and electric circuit diagrams. The…

  15. Perceptual category learning and visual processing: An exercise in computational cognitive neuroscience.

    Science.gov (United States)

    Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory

    2017-05-01

    The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
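
    The toy sketch below illustrates only the plug-and-play interfacing idea, not the published model: a fixed random feature layer stands in for the HMAX-style visual front end, and a simple feedback-driven delta-rule unit stands in for the COVIS-style category learner, trained on synthetic two-category bitmaps.

```python
# Toy "plug-and-play" illustration: a fixed visual front end feeding a trainable category
# learner. Neither HMAX nor COVIS is implemented here; both modules are crude stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n_pixels, n_features = 16 * 16, 64

# Two category prototypes plus pixel noise.
prototypes = rng.normal(size=(2, n_pixels))
def make_stimulus(label):
    return prototypes[label] + 0.5 * rng.normal(size=n_pixels)

# "Front end": fixed random features (stand-in for a pretrained visual hierarchy).
W_front = rng.normal(size=(n_features, n_pixels)) / np.sqrt(n_pixels)
def features(img):
    return np.maximum(0.0, W_front @ img)

# "Category learner": one linear unit trained with a delta rule on trial-by-trial feedback.
w, lr = np.zeros(n_features), 0.05
correct = []
for trial in range(2000):
    label = rng.integers(2)
    f = features(make_stimulus(label))
    response = int(w @ f > 0)
    correct.append(response == label)
    w += lr * ((1 if label == 1 else -1) - np.tanh(w @ f)) * f   # feedback-driven update

print("accuracy, first 200 trials:", np.mean(correct[:200]))
print("accuracy, last 200 trials: ", np.mean(correct[-200:]))
```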

  16. From a Gloss to a Learning Tool: Does Visual Aids Enhance Better Sentence Comprehension?

    Science.gov (United States)

    Sato, Takeshi; Suzuki, Akio

    2012-01-01

    The aim of this study is to optimize CALL environments as a learning tool rather than a gloss, focusing on the learning of polysemous words which refer to spatial relationship between objects. A lot of research has already been conducted to examine the efficacy of visual glosses while reading L2 texts and has reported that visual glosses can be…

  17. Contextual Cueing: Implicit Learning and Memory of Visual Context Guides Spatial Attention.

    Science.gov (United States)

    Chun, Marvin M.; Jiang, Yuhong

    1998-01-01

    Six experiments involving a total of 112 college students demonstrate that a robust memory for visual context exists to guide spatial attention. Results show how implicit learning and memory of visual context can guide spatial attention toward task-relevant aspects of a scene. (SLD)

  18. Analysing the physics learning environment of visually impaired students in high schools

    NARCIS (Netherlands)

    Toenders, F.G.C.; de Putter - Smits, L.G.A.; Sanders, W.T.M.; den Brok, P.J.

    2017-01-01

    Although visually impaired students attend regular high school, their enrolment in advanced science classes is dramatically low. In our research we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. For visually impaired students to grasp

  19. DLNE: A hybridization of deep learning and neuroevolution for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshøj

    2017-01-01

    This paper investigates the potential of combining deep learning and neuroevolution to create a bot for a simple first person shooter (FPS) game capable of aiming and shooting based on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. Two types of feature representations are evaluated in terms of (1) how precisely they allow the deep network to recognize the position of the enemy, (2) their effect on evolution, and (3) how well they allow the deep network and evolved network to interface with each other. Overall, the results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks...
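
    The heavily simplified sketch below conveys the hybrid idea without any game engine: a one-dimensional "screen" with a blurred enemy blob stands in for raw pixels, a fixed random projection stands in for the trained deep feature extractor, and a (1+λ) evolution strategy evolves a linear controller whose output is the aiming action. Dimensions, noise levels and the fitness function are invented.

```python
# Simplified deep-features-plus-neuroevolution sketch (no game engine): evolve a linear
# controller, reading fixed "deep" features of a 1-D screen, to output the enemy position.
import numpy as np

rng = np.random.default_rng(8)
width, n_features, n_offspring = 64, 16, 20

def render(pos):
    """Blurred blob at horizontal position pos in [0, 1], plus pixel noise."""
    x = np.linspace(0, 1, width)
    return np.exp(-((x - pos) ** 2) / 0.005) + 0.05 * rng.normal(size=width)

W_deep = rng.normal(size=(n_features, width)) / np.sqrt(width)    # stand-in "deep" features

def fitness(controller, n_episodes=50):
    err = 0.0
    for _ in range(n_episodes):
        pos = rng.random()
        aim = float(controller @ (W_deep @ render(pos)))          # controller's aiming output
        err += (aim - pos) ** 2
    return -err / n_episodes                                      # higher is better

parent = np.zeros(n_features)
best = fitness(parent)
for generation in range(100):                                     # (1 + lambda) evolution strategy
    offspring = parent + 0.1 * rng.normal(size=(n_offspring, n_features))
    scores = [fitness(o) for o in offspring]
    if max(scores) > best:
        best, parent = max(scores), offspring[int(np.argmax(scores))]
    if generation % 20 == 0:
        print(f"generation {generation:3d}  fitness {best:.4f}")
```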

  20. Visual Hybrid Development Learning System (VHDLS) Framework for Children with Autism

    Science.gov (United States)

    Banire, Bilikis; Jomhari, Nazean; Ahmad, Rodina

    2015-01-01

    The effect of education on children with autism serves as a relative cure for their deficits. As a result of this, they require special techniques to gain their attention and interest in learning as compared to typical children. Several studies have shown that these children are visual learners. In this study, we proposed a Visual Hybrid…

  1. From phonemes to images : levels of representation in a recurrent neural model of visually-grounded language learning

    NARCIS (Netherlands)

    Gelderloos, L.J.; Chrupala, Grzegorz

    2016-01-01

    We present a model of visually-grounded language learning based on stacked gated recurrent neural networks which learns to predict visual features given an image description in the form of a sequence of phonemes. The learning task resembles that faced by human language learners who need to discover
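
    A miniature PyTorch sketch of this kind of architecture is given below: a stacked GRU reads a phoneme sequence and is trained with a mean-squared-error loss to predict a vector of visual features for the paired image. The vocabulary size, layer sizes and data are invented, and this is not the authors' model or training setup.

```python
# Miniature stacked-GRU model mapping phoneme sequences to visual feature vectors.
# Dimensions and data are placeholders, not the authors' setup.
import torch
import torch.nn as nn

class PhonemeToVisual(nn.Module):
    def __init__(self, n_phonemes=50, emb=32, hidden=128, n_layers=2, visual_dim=512):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb)
        self.gru = nn.GRU(emb, hidden, num_layers=n_layers, batch_first=True)  # stacked GRU
        self.readout = nn.Linear(hidden, visual_dim)

    def forward(self, phoneme_ids):
        _, h_n = self.gru(self.embed(phoneme_ids))
        return self.readout(h_n[-1])          # top-layer final state -> predicted visual features

model = PhonemeToVisual()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on random stand-in data (batch of 8 descriptions, 40 phonemes each).
phonemes = torch.randint(0, 50, (8, 40))
visual_targets = torch.randn(8, 512)          # e.g., CNN features of the paired images
loss = loss_fn(model(phonemes), visual_targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", float(loss))
```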

  2. Collaborative Visualization Project: shared-technology learning environments for science learning

    Science.gov (United States)

    Pea, Roy D.; Gomez, Louis M.

    1993-01-01

    Project-enhanced science learning (PESL) provides students with opportunities for `cognitive apprenticeships' in authentic scientific inquiry using computers for data-collection and analysis. Student teams work on projects with teacher guidance to develop and apply their understanding of science concepts and skills. We are applying advanced computing and communications technologies to augment and transform PESL at-a-distance (beyond the boundaries of the individual school), which is limited today to asynchronous, text-only networking and unsuitable for collaborative science learning involving shared access to multimedia resources such as data, graphs, tables, pictures, and audio-video communication. Our work creates user technology (a Collaborative Science Workbench providing PESL design support and shared synchronous document views, program, and data access; a Science Learning Resource Directory for easy access to resources including two-way video links to collaborators, mentors, museum exhibits, media-rich resources such as scientific visualization graphics), and refine enabling technologies (audiovisual and shared-data telephony, networking) for this PESL niche. We characterize participation scenarios for using these resources and we discuss national networked access to science education expertise.

  3. The Film as Visual Aided Learning Tool in Classroom Management Course

    Science.gov (United States)

    Altinay Gazi, Zehra; Altinay Aksal, Fahriye

    2011-01-01

    This research aims to investigate the impact of the visual aided learning on pre-service teachers' co-construction of subject matter knowledge in teaching practice. The study revealed the examination of film as an active cognizing and learning tool in classroom management course within teacher education programme. Within the framework of action…

  4. The Mediator Role of Perceived Stress in the Relationship between Academic Stress and Depressive Symptoms among E-Learning Students with Visual Impairments

    Science.gov (United States)

    Lee, Soon Min; Oh, Yunjin

    2017-01-01

    Introduction: This study examined a mediator role of perceived stress on the prediction of the effects of academic stress on depressive symptoms among e-learning students with visual impairments. Methods: A convenience sample for this study was collected for three weeks from November to December in 2012 among students with visual impairments…

  5. A mouse model of visual perceptual learning reveals alterations in neuronal coding and dendritic spine density in the visual cortex

    Directory of Open Access Journals (Sweden)

    Yan Wang

    2016-03-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  6. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    Science.gov (United States)

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  7. How learning might strengthen existing visual object representations in human object-selective cortex.

    Science.gov (United States)

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

    Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Gains following perceptual learning are closely linked to the initial visual acuity.

    Science.gov (United States)

    Yehezkel, Oren; Sterkin, Anna; Lev, Maria; Levi, Dennis M; Polat, Uri

    2016-04-28

    The goal of the present study was to evaluate the dependence of perceptual learning gains on initial visual acuity (VA) in a large sample of subjects with a wide range of VAs. Normally sighted and presbyopic subjects (N = 119; aged 40 to 63) with a wide range of uncorrected near VAs (-0.12 to 0.8 LogMAR) underwent perceptual learning. Training consisted of detecting briefly presented Gabor stimuli under spatial and temporal masking conditions. Consistent with previous findings, perceptual learning induced a significant improvement in near VA and reading speed under conditions of limited exposure duration. Our results show that the improvements in VA and reading speed observed following perceptual learning are closely linked to the initial VA, with only a minor fraction of the observed improvement attributable to the additional sessions performed by those with worse VA.

  9. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    Science.gov (United States)

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  10. Infants' statistical learning: 2- and 5-month-olds' segmentation of continuous visual sequences.

    Science.gov (United States)

    Slone, Lauren Krogh; Johnson, Scott P

    2015-05-01

    Past research suggests that infants have powerful statistical learning abilities; however, studies of infants' visual statistical learning offer differing accounts of the developmental trajectory of and constraints on this learning. To elucidate this issue, the current study tested the hypothesis that young infants' segmentation of visual sequences depends on redundant statistical cues to segmentation. A sample of 20 2-month-olds and 20 5-month-olds observed a continuous sequence of looming shapes in which unit boundaries were defined by both transitional probability and co-occurrence frequency. Following habituation, only 5-month-olds showed evidence of statistically segmenting the sequence, looking longer to a statistically improbable shape pair than to a probable pair. These results reaffirm the power of statistical learning in infants as young as 5 months but also suggest considerable development of statistical segmentation ability between 2 and 5 months of age. Moreover, the results do not support the idea that infants' ability to segment visual sequences based on transitional probabilities and/or co-occurrence frequencies is functional at the onset of visual experience, as has been suggested previously. Rather, this type of statistical segmentation appears to be constrained by the developmental state of the learner. Factors contributing to the development of statistical segmentation ability during early infancy, including memory and attention, are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
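
    To make the two segmentation cues named above concrete, the sketch below (not the authors' analysis code) computes co-occurrence frequencies and transitional probabilities over a toy sequence of shape labels; the sequence and its "units" are hypothetical.

    ```python
    from collections import Counter

    # Hypothetical stream built by concatenating two "units", (A, B) and (C, D).
    sequence = list("ABCDABABCDCDABCD")
    adjacent_pairs = list(zip(sequence, sequence[1:]))

    # Co-occurrence frequency: how often each ordered pair of adjacent shapes appears.
    pair_counts = Counter(adjacent_pairs)

    # Transitional probability P(Y | X) = count(X -> Y) / count(X -> anything).
    first_counts = Counter(x for x, _ in adjacent_pairs)
    transitional_prob = {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

    # Within-unit pairs (A -> B, C -> D) come out with higher transitional probabilities
    # than pairs spanning a unit boundary, which is the kind of cue that defines boundaries.
    for pair in sorted(transitional_prob):
        print(pair, f"count={pair_counts[pair]}", f"TP={transitional_prob[pair]:.2f}")
    ```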

  11. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    Science.gov (United States)

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  12. Real-Time Strategy Video Game Experience and Visual Perceptual Learning.

    Science.gov (United States)

    Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo

    2015-07-22

    Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL does not arise from changes in visual areas alone. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience

  13. Lateralization of visual learning in the honeybee

    OpenAIRE

    Letzkus, Pinar; Boeddeker, Norbert; Wood, Jeff T; Zhang, Shao-Wu; Srinivasan, Mandyam V

    2007-01-01

    Lateralization is a well-described phenomenon in humans and other vertebrates and there are interesting parallels across a variety of different vertebrate species. However, there are only a few studies of lateralization in invertebrates. In a recent report, we showed lateralization of olfactory learning in the honeybee (Apis mellifera). Here, we investigate lateralization of another sensory modality, vision. By training honeybees on a modified version of a visual proboscis extension reflex ta...

  14. Changes of the Prefrontal EEG (Electroencephalogram) Activities According to the Repetition of Audio-Visual Learning.

    Science.gov (United States)

    Kim, Yong-Jin; Chang, Nam-Kee

    2001-01-01

    Investigates the changes in neuronal response across four repetitions of audio-visual learning. Obtains EEG data from the prefrontal lobe (Fp1, Fp2) of 20 eighth-grade subjects. Concludes that habituation of the neuronal response appears in repetitive audio-visual learning and that brain hemisphericity can be changed by…

  15. Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels.

    Science.gov (United States)

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Xiong, Jiechao; Gong, Shaogang; Wang, Yizhou; Yao, Yuan

    2016-03-01

    The problem of estimating subjective visual properties from image and video has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors. They thus require a large amount of pairwise labels to be collected. More importantly as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling both the outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels together to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
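
    The abstract above describes a unified robust learning-to-rank formulation; the sketch below is only a simplified illustration of the underlying idea (recovering global scores from pairwise labels and flagging comparisons that conflict with the fitted global ranking), not the authors' algorithm. The comparison data and the outlier threshold are hypothetical.

    ```python
    import numpy as np

    # Hypothetical crowdsourced comparisons over 4 items: (winner, loser) index pairs.
    # The final pair contradicts the others and plays the role of an annotation outlier.
    comparisons = [(0, 1), (0, 1), (1, 2), (1, 2), (2, 3), (2, 3), (0, 3), (3, 0)]

    n_items = 4
    A = np.zeros((len(comparisons), n_items))
    y = np.ones(len(comparisons))
    for row, (winner, loser) in enumerate(comparisons):
        A[row, winner], A[row, loser] = 1.0, -1.0

    # Least-squares global scores (identifiable only up to an additive constant).
    scores, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Comparisons whose observed outcome disagrees most with the fitted global ranking
    # are candidate annotation outliers (a global-consistency criterion, unlike majority voting).
    residuals = np.abs(A @ scores - y)
    outlier_flags = residuals > residuals.mean() + residuals.std()

    print("scores:", np.round(scores - scores.min(), 2))
    print("flagged comparisons:", [c for c, f in zip(comparisons, outlier_flags) if f])
    ```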

  16. Multiple Learning Strategies Project. Small Engine Repair. Visually Impaired.

    Science.gov (United States)

    Foster, Don; And Others

    This instructional package designed for visually impaired students, focuses on the vocational area of small engine repair. Contained in this document are forty learning modules organized into fourteen units: engine block; starters; fuel tank, lines, filters and pumps; carburetors; electrical; test equipment; motorcycle; machining; tune-ups; short…

  17. Realistic versus Schematic Interactive Visualizations for Learning Surveying Practices: A Comparative Study

    Science.gov (United States)

    Dib, Hazar; Adamo-Villani, Nicoletta; Garver, Stephen

    2014-01-01

    Many benefits have been claimed for visualizations, a general assumption being that learning is facilitated. However, several researchers argue that little is known about the cognitive value of graphical representations, be they schematic visualizations, such as diagrams or more realistic, such as virtual reality. The study reported in the paper…

  18. Electrophysiological Evidence of Heterogeneity in Visual Statistical Learning in Young Children with ASD

    Science.gov (United States)

    Jeste, Shafali S.; Kirkham, Natasha; Senturk, Damla; Hasenstab, Kyle; Sugar, Catherine; Kupelian, Chloe; Baker, Elizabeth; Sanders, Andrew J.; Shimizu, Christina; Norona, Amanda; Paparella, Tanya; Freeman, Stephanny F. N.; Johnson, Scott P.

    2015-01-01

    Statistical learning is characterized by detection of regularities in one's environment without an awareness or intention to learn, and it may play a critical role in language and social behavior. Accordingly, in this study we investigated the electrophysiological correlates of visual statistical learning in young children with autism…

  19. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    Science.gov (United States)

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  20. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    Science.gov (United States)

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed on the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (a trend in the visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.

  1. Impact of audio-visual storytelling in simulation learning experiences of undergraduate nursing students.

    Science.gov (United States)

    Johnston, Sandra; Parker, Christina N; Fox, Amanda

    2017-09-01

    Use of high fidelity simulation has become increasingly popular in nursing education, to the extent that it is now an integral component of most nursing programs. Anecdotal evidence suggests that students have difficulty engaging with simulation manikins due to their unrealistic appearance. Introduction of the manikin as a 'real patient' with the use of an audio-visual narrative may engage students in the simulated learning experience and impact on their learning. A paucity of literature currently exists on the use of audio-visual narratives to enhance simulated learning experiences. This study aimed to determine if viewing an audio-visual narrative during a simulation pre-brief altered undergraduate nursing student perceptions of the learning experience. A quasi-experimental post-test design was utilised with a convenience sample of final year baccalaureate nursing students at a large metropolitan university. Participants completed a modified version of the Student Satisfaction with Simulation Experiences survey. This 12-item questionnaire contained questions relating to the ability to transfer skills learned in simulation to the real clinical world, the realism of the simulation and the overall value of the learning experience. Descriptive statistics were used to summarise demographic information. Two-tailed, independent group t-tests were used to determine statistical differences within the categories. Findings indicated that students reported high levels of value, realism and transferability in relation to the viewing of an audio-visual narrative. A statistically significant result (t = 2.38) was found in relation to the transferability of learning from simulation to clinical practice. The subgroups of age and gender, although not significant, indicated some interesting results. High satisfaction with simulation was indicated by all students in relation to value and realism. There was a significant finding in relation to transferability of knowledge, and this is vital to quality educational outcomes. Copyright © 2017.

  2. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-09

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
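
    As a rough illustration of the reduced-form proxy idea described above (not NREL's implementation or API), the sketch below trains a regression model on an ensemble of completed runs of a stand-in "simulation" and then answers interactive queries from the cheap proxy; the toy simulation function and model choice are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in for an expensive energy simulation mapping input parameters to an output metric.
    # (Hypothetical toy function; the real system would call the full simulation code.)
    def expensive_simulation(params: np.ndarray) -> np.ndarray:
        x1, x2 = params[:, 0], params[:, 1]
        return np.sin(3 * x1) + 0.5 * x2 ** 2

    rng = np.random.default_rng(0)

    # An ensemble of completed runs covering a subset of the input parameter space.
    X_train = rng.uniform(-1, 1, size=(200, 2))
    y_train = expensive_simulation(X_train)

    # Reduced-form proxy model trained on the ensemble.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Interactive exploration queries the cheap surrogate instead of the full simulation;
    # input regions where the proxy looks unreliable can be queued for new full-fidelity runs.
    X_query = rng.uniform(-1, 1, size=(5, 2))
    print(np.column_stack([surrogate.predict(X_query), expensive_simulation(X_query)]))
    ```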

  3. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    Science.gov (United States)

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
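
    As a schematic of the multi-voxel decoding approach mentioned above (not the authors' analysis pipeline), the sketch below trains a cross-validated linear classifier to decode a binary orientation label from synthetic voxel patterns; the data, signal strength, and classifier choice are all hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-in for multi-voxel patterns from early visual cortex:
    # 120 trials x 200 voxels, with a weak orientation-dependent signal added to noise.
    n_trials, n_voxels = 120, 200
    orientation = rng.integers(0, 2, size=n_trials)        # 0 = leftward, 1 = rightward
    signal = np.outer(orientation - 0.5, rng.normal(size=n_voxels)) * 0.4
    patterns = rng.normal(size=(n_trials, n_voxels)) + signal

    # Linear decoder evaluated with cross-validation.
    decoder = LogisticRegression(max_iter=1000)
    accuracy = cross_val_score(decoder, patterns, orientation, cv=5)
    print(f"mean decoding accuracy: {accuracy.mean():.2f}")
    ```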

  4. Reduction in the retinotopic early visual cortex with normal aging and magnitude of perceptual learning.

    Science.gov (United States)

    Chang, Li-Hung; Yotsumoto, Yuko; Salat, David H; Andersen, George J; Watanabe, Takeo; Sasaki, Yuka

    2015-01-01

    Although normal aging is known to reduce cortical structures globally, the effects of aging on the local structures and functions of early visual cortex are less understood. Here, using standard retinotopic mapping and magnetic resonance imaging morphologic analyses, we investigated whether aging affects the areal sizes of retinotopically localized early visual areas, and whether those morphologic measures were associated with individual performance on visual perceptual learning. First, significant age-associated reduction was found in the areal size of V1, V2, and V3. Second, individual visual perceptual learning ability was significantly correlated with the areal size of V3 in older adults. These results demonstrate that aging changes local structures of the early visual cortex, and the degree of change may be associated with individual visual plasticity. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Developing Visualization Support System for Teaching/Learning Database Normalization

    Science.gov (United States)

    Folorunso, Olusegun; Akinwale, AdioTaofeek

    2010-01-01

    Purpose: In tertiary institutions, some students find it hard to learn database design theory, in particular database normalization. The purpose of this paper is to develop a visualization tool that gives students an interactive, hands-on experience of the database normalization process. Design/methodology/approach: The model-view-controller architecture…

  6. Multiple Learning Strategies Project. Building Maintenance & Engineering. Visually Impaired.

    Science.gov (United States)

    Smith, Dwight; And Others

    This instructional package is designed for visually impaired students in the vocational area of building maintenance and engineering. The twenty-eight learning modules are organized into six units: floor care, general maintenance tasks; restrooms; carpet care; power and hand tools; and cabinet construction. Each module, printed in large block…

  7. In Light of Visual Arts – A knowledge transfer partnership project as experiential learning

    Directory of Open Access Journals (Sweden)

    Ming-hoi Lai

    2013-10-01

    Knowledge transfer between universities and the commercial sector is becoming more prevalent, and different processes have been adopted to facilitate the transfer of knowledge. The ‘In Light of Visual Arts’ project aimed to facilitate knowledge exchange in relation to an innovative concept, the ‘eco-philosophy of light’, between the lighting industry and the arts and cultural sector through an Informal Learning approach. Young visual artists, light designers and lighting technicians were encouraged to explore and exchange experiences in the areas of visual communication, art appreciation and art archiving to create practical lighting solutions. This project offers a feasible framework for the enhancement of artistic training through knowledge sharing, for the benefit of the participants themselves and, in turn, academia, industry and the community. Keywords: informal learning, experiential learning, knowledge transfer, art education, interdisciplinary study

  8. Explaining neural signals in human visual cortex with an associative learning model.

    Science.gov (United States)

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

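    A minimal, generic sketch of the prediction-plus-surprise idea described above follows. It uses a simple Rescorla-Wagner-style update rather than the Schmajuk, Gray, and Lam network, and the contingency, learning rate, and response weights are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Generic associative-learning sketch (Rescorla-Wagner style), not the Schmajuk et al.
    # network: a cue is followed by a face (1) or a house (0), and a face-selective response
    # is modelled as a weighted sum of the learned face expectation and the face "surprise".
    alpha = 0.2              # learning rate (assumed)
    p_face_given_cue = 0.75  # true cue-face contingency (assumed)
    v_face = 0.0             # learned expectation that the cue is followed by a face

    responses = []
    for trial in range(200):
        face_shown = float(rng.random() < p_face_given_cue)
        expectation = v_face
        surprise = max(face_shown - expectation, 0.0)          # unsigned face prediction error
        responses.append(1.0 * expectation + 1.5 * surprise)   # hypothetical response weights
        v_face += alpha * (face_shown - v_face)                # Rescorla-Wagner update

    print(f"learned expectation ~ {v_face:.2f} (true contingency {p_face_given_cue})")
    print(f"mean modelled response: first 20 trials {np.mean(responses[:20]):.2f}, "
          f"last 20 trials {np.mean(responses[-20:]):.2f}")
    ```
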
  9. Internal attention to features in visual short-term memory guides object learning.

    Science.gov (United States)

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Using ICT at an Open Distance Learning (ODL) Institution in South Africa: The Learning Experiences of Students with Visual Impairments

    Science.gov (United States)

    Mokiwa, S. A.; Phasha, T. N.

    2012-01-01

    For students with visual impairments, Information and Communication Technology (ICT) has become an important means through which they can learn and access learning materials at various levels of education. However, their learning experiences in using such forms of technology have rarely been documented, which suggests society's lack of…

  11. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  12. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    Science.gov (United States)

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat. PMID:25076874

  13. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    Science.gov (United States)

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  14. Interacting Effects of Instructions and Presentation Rate on Visual Statistical Learning

    Directory of Open Access Journals (Sweden)

    Julie eBertels

    2015-11-01

    The statistical regularities of a sequence of visual shapes can be learned incidentally. Arciuli et al. (2014) recently argued that intentional instructions only improve learning at slow presentation rates as they favor the use of explicit strategies. The aim of the present study was (1) to test this assumption directly by investigating how instructions (incidental vs. intentional) and presentation rate (fast vs. slow) affect the acquisition of knowledge and (2) to examine how these factors influence the conscious vs. unconscious nature of the knowledge acquired. To this aim, we exposed participants to four triplets of shapes, presented sequentially in a pseudo-random order, and assessed their degree of learning in a subsequent completion task that integrated confidence judgments. Supporting Arciuli et al.'s claim, participant performance only benefited from intentional instructions at slow presentation rates. Moreover, informing participants beforehand about the existence of statistical regularities increased their explicit knowledge of the sequences, an effect that was not modulated by presentation speed. These results support that, although visual statistical learning can take place incidentally and, to some extent, outside conscious awareness, factors such as presentation rate and prior knowledge can boost learning of these regularities, presumably by favoring the acquisition of explicit knowledge.

  15. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool to treat LD children by positively reinforcing decreases of the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups had signs consistent with EEG maturation, but only the Auditory Group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved the cognitive abilities more than the visual reinforcer.

  16. MATLAB-aided teaching and learning in optics and photonics using the methods of computational photonics

    Science.gov (United States)

    Lin, Zhili; Li, Xiaoyan; Zhu, Daqing; Pu, Jixiong

    2017-08-01

    Because the light fields of laser waves and pulses are vector quantities with complex spatial distribution and temporal dependence, optics and photonics courses have always been difficult to teach and learn without the support of graphical visualization, numerical simulations, and hands-on experiments. One of the state-of-the-art methods of computational photonics, the finite-difference time-domain (FDTD) method, is applied with MATLAB simulations to model typical teaching cases in optics and photonics courses. The results, visualized graphically as animated pictures, allow students to understand more deeply the dynamic process of light interacting with classical optical structures. The teaching methodology discussed aims to enhance the teaching effectiveness of optics and photonics courses and to arouse students' interest in learning.
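
    The course materials described above use MATLAB; purely as an illustration of the same FDTD idea, the sketch below implements a standard one-dimensional Yee update for a pulse propagating in free space (written in Python here; the grid size, source, and simple reflective boundaries are arbitrary choices).

    ```python
    import numpy as np

    # Minimal 1-D FDTD (Yee) sketch: a Gaussian pulse propagating in free space.
    # Illustrative only; boundaries are left untouched (reflective) and units are normalised.
    n_cells, n_steps = 200, 300
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field
    courant = 0.5            # normalised Courant factor for stability

    for step in range(n_steps):
        # Update E from the spatial difference of H.
        ez[1:] += courant * (hy[:-1] - hy[1:])
        # Soft Gaussian source injected near the left boundary.
        ez[5] += np.exp(-((step - 40) / 12.0) ** 2)
        # Update H from the spatial difference of E.
        hy[:-1] += courant * (ez[:-1] - ez[1:])

    print(f"peak |Ez| after {n_steps} steps: {np.abs(ez).max():.3f}")
    ```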

  17. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms perform poorly on indicators such as detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. First, most of the non-vehicle pixels are removed with a visual saliency computation. Then, vehicle candidates are generated using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the previous step. The proposed algorithm is tested on around 6000 images and achieves a detection rate of 92.3% and a processing rate of 25 Hz, which is better than existing methods.
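
    The three-stage pipeline described above (saliency filtering, prior-based candidate generation, classifier verification) can be sketched as follows. This is a skeleton with stand-ins, not the authors' system: the saliency rule, the size prior, and the threshold-based "classifier" are placeholders for the far-infrared saliency computation and the trained deep belief network.

    ```python
    import numpy as np
    from scipy import ndimage

    def saliency_mask(frame: np.ndarray) -> np.ndarray:
        """Crude stand-in for the saliency step: keep only unusually bright pixels
        (vehicles tend to be hot, hence bright, in far-infrared imagery)."""
        return frame > frame.mean() + 2 * frame.std()

    def candidate_boxes(mask: np.ndarray, min_area: int = 50) -> list[tuple[slice, slice]]:
        """Group salient pixels into connected regions and keep plausibly vehicle-sized ones.
        A real system would also use camera parameters and expected vehicle size as priors."""
        labels, _ = ndimage.label(mask)
        boxes = ndimage.find_objects(labels)
        return [b for b in boxes if (b[0].stop - b[0].start) * (b[1].stop - b[1].start) >= min_area]

    def classify(patch: np.ndarray) -> bool:
        """Placeholder for the trained classifier (a deep belief network in the paper);
        here any candidate with high mean intensity is accepted."""
        return patch.mean() > 0.6

    def detect(frame: np.ndarray) -> list[tuple[slice, slice]]:
        mask = saliency_mask(frame)
        return [box for box in candidate_boxes(mask) if classify(frame[box])]

    # Synthetic far-infrared-like frame with one bright blob standing in for a vehicle.
    rng = np.random.default_rng(0)
    frame = rng.random((240, 320)) * 0.2
    frame[100:130, 150:200] = 0.9
    print(detect(frame))
    ```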

  18. Enhancing performance expectancies through visual illusions facilitates motor learning in children.

    Science.gov (United States)

    Bahmani, Moslem; Wulf, Gabriele; Ghadiri, Farhad; Karimi, Saeed; Lewthwaite, Rebecca

    2017-10-01

    In a recent study by Chauvel, Wulf, and Maquestiaux (2015), golf putting performance was found to be affected by the Ebbinghaus illusion. Specifically, adult participants demonstrated more effective learning when they practiced with a hole that was surrounded by small circles, making it look larger, than when the hole was surrounded by large circles, making it look smaller. The present study examined whether this learning advantage would generalize to children who are assumed to be less sensitive to the visual illusion. Two groups of 10-year olds practiced putting golf balls from a distance of 2m, with perceived larger or smaller holes resulting from the visual illusion. Self-efficacy was increased in the group with the perceived larger hole. The latter group also demonstrated more accurate putting performance during practice. Importantly, learning (i.e., delayed retention performance without the illusion) was enhanced in the group that practiced with the perceived larger hole. The findings replicate previous results with adult learners and are in line with the notion that enhanced performance expectancies are key to optimal motor learning (Wulf & Lewthwaite, 2016). Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Visual Interactive Syntax Learning: A Case of Blended Learning

    Directory of Open Access Journals (Sweden)

    Jane Vinther

    2008-11-01

    The integration of the computer as a tool in language learning at the tertiary level brings several opportunities for adapting to individual student needs, but lack of appropriate material suited for the level of student proficiency in Scandinavia has meant that university teachers have found it difficult to blend the traditional approach with computer tools. This article will present one programme (VISL) which has been developed with the purpose of supporting and enhancing traditional instruction. Visual Interactive Syntax Learning (VISL) is a programme which is basically a parser put to pedagogical use. The pedagogical purpose is to teach English syntax to university students at an advanced level. The programme allows the students to build sophisticated tree diagrams of English sentences with provisions for both functions and forms (simple or complex, incl. subclauses). VISL was initiated as an attempt to facilitate the metalinguistic learning process. This article will present VISL as a pedagogical tool and tries to argue the case for the benefits of blending traditional lecturing with modern technology while pointing out some of the issues involved.

  20. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

    With the explosive growth of visual data, it is particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. One of the core aspects of this research line is how to learn robust representations that better describe the data. In this thesis we study the problem of visual image and video understanding and specifi...

  1. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    Science.gov (United States)

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy was slightly improved, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  2. Visual attention to features by associative learning.

    Science.gov (United States)

    Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay

    2014-11-01

    Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned association on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can be changed by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting with the color associated with the alternative target shape. This bias seemed to be driven by learned association between shapes and colors, and not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning - likely mediated by mechanisms underlying visual object representation - can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Visual working memory gives up attentional control early in learning: ruling out interhemispheric cancellation.

    Science.gov (United States)

    Reinhart, Robert M G; Carlisle, Nancy B; Woodman, Geoffrey F

    2014-08-01

    Current research suggests that we can watch visual working memory surrender the control of attention early in the process of learning to search for a specific object. This inference is based on the observation that the contralateral delay activity (CDA) rapidly decreases in amplitude across trials when subjects search for the same target object. Here, we tested the alternative explanation that the role of visual working memory does not actually decline across learning, but instead lateralized representations accumulate in both hemispheres across trials and wash out the lateralized CDA. We show that the decline in CDA amplitude occurred even when the target objects were consistently lateralized to a single visual hemifield. Our findings demonstrate that reductions in the amplitude of the CDA during learning are not simply due to the dilution of the CDA from interhemispheric cancellation. Copyright © 2014 Society for Psychophysiological Research.

  4. Roles of NO signaling in long-term memory formation in visual learning in an insect.

    Directory of Open Access Journals (Sweden)

    Yukihisa Matsumoto

    Many insects exhibit an excellent capability for visual learning, but the molecular and neural mechanisms are poorly understood. This is in contrast to the accumulation of information on the molecular and neural mechanisms of olfactory learning in insects. In olfactory learning in insects, it has been shown that cyclic AMP (cAMP) signaling critically participates in the formation of protein synthesis-dependent long-term memory (LTM) and, in some insects, nitric oxide (NO)-cyclic GMP (cGMP) signaling also plays roles in LTM formation. In this study, we examined the possible contribution of NO-cGMP signaling and cAMP signaling to LTM formation in visual pattern learning in crickets. Crickets that had been subjected to 8-trial conditioning to associate a visual pattern with water reward exhibited memory retention 1 day after conditioning, whereas those subjected to 4-trial conditioning exhibited 30-min memory retention but not 1-day retention. Injection of cycloheximide, a protein synthesis inhibitor, into the hemolymph prior to 8-trial conditioning blocked formation of 1-day memory, whereas it had no effect on 30-min memory formation, indicating that 1-day memory can be characterized as protein synthesis-dependent long-term memory (LTM). Injection of an inhibitor of the enzyme producing NO or cAMP prior to 8-trial visual conditioning blocked LTM formation, whereas it had no effect on 30-min memory formation. Moreover, injection of an NO donor, cGMP analogue or cAMP analogue prior to 4-trial conditioning induced LTM. Induction of LTM by an NO donor was blocked by DDA, an inhibitor of adenylyl cyclase, an enzyme producing cAMP, but LTM induction by a cAMP analogue was not impaired by L-NAME, an inhibitor of NO synthase. The results indicate that cAMP signaling is downstream of NO signaling for visual LTM formation. We conclude that visual learning and olfactory learning share common biochemical cascades for LTM formation.

  5. Cueing and Anxiety in a Visual Concept Learning Task.

    Science.gov (United States)

    Turner, Philip M.

    This study investigated the relationship of two anxiety measures (the State-Trait Anxiety Inventory-Trait Form and the S-R Inventory of Anxiousness-Exam Form) to performance on a visual concept-learning task with embedded criterial information. The effect on anxiety reduction of cueing criterial information was also examined, and two levels of…

  6. Learning Program for Enhancing Visual Literacy for Non-Design Students Using a CMS to Share Outcomes

    Science.gov (United States)

    Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu

    2016-01-01

    This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as a blog post. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on how to address…

  7. Visual artificial grammar learning in dyslexia: A meta-analysis.

    Science.gov (United States)

    van Witteloostuijn, Merel; Boersma, Paul; Wijnen, Frank; Rispens, Judith

    2017-11-01

    Literacy impairments in dyslexia have been hypothesized to be (partly) due to an implicit learning deficit. However, studies of implicit visual artificial grammar learning (AGL) have often yielded null results. The aim of this study is to weigh the evidence collected thus far by performing a meta-analysis of studies on implicit visual AGL in dyslexia. Thirteen studies were selected through a systematic literature search, representing data from 255 participants with dyslexia and 292 control participants (mean age range: 8.5-36.8 years old). If the 13 selected studies constitute a random sample, individuals with dyslexia perform worse on average than non-dyslexic individuals (average weighted effect size=0.46, 95% CI [0.14 … 0.77], p=0.008), with a larger effect in children than in adults (p=0.041; average weighted effect sizes 0.71 [sig.] versus 0.16 [non-sig.]). However, the presence of a publication bias indicates the existence of missing studies that may well null the effect. While the studies under investigation demonstrate that implicit visual AGL is impaired in dyslexia (more so in children than in adults, if in adults at all), the detected publication bias suggests that the effect might in fact be zero. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
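
    For readers unfamiliar with the weighted effect sizes reported above, the sketch below shows a standard inverse-variance (fixed-effect) pooling of per-study effect sizes with a 95% confidence interval; the effect sizes and variances in it are made up and are not the thirteen studies analysed in the paper.

    ```python
    import math

    # Illustrative fixed-effect meta-analysis of standardised effect sizes:
    # each tuple is (effect size d, its variance). All values are hypothetical.
    studies = [(0.71, 0.08), (0.16, 0.05), (0.52, 0.10), (0.33, 0.06)]

    weights = [1.0 / var for _, var in studies]          # inverse-variance weights
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    print(f"weighted effect size = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
    ```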

  8. A Visual Encapsulation of Adlerian Theory: A Tool for Teaching and Learning.

    Science.gov (United States)

    Osborn, Cynthia J.

    2001-01-01

    A visual diagram is presented in this article to illustrate 6 key concepts of Adlerian theory discussed in corresponding narrative format. It is proposed that in an age of multimedia learning, a pictorial reference can enhance the teaching and learning of Adlerian theory, representing a commitment to humanistic education. (Contains 18 references.)…

  9. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    Science.gov (United States)

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

    In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
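
    The two learning rules named above can be written compactly: continuous transformation (CT) learning is a plain Hebbian update driven by current activity, while trace learning replaces the postsynaptic term with an exponentially decaying trace of recent activity. The sketch below is a schematic single-neuron illustration, not the hierarchical model itself; the toy shifting stimulus and all parameter values are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_inputs, n_steps = 50, 200
    learning_rate, eta = 0.05, 0.8         # eta: trace decay parameter (assumed)

    w_ct = rng.random(n_inputs) * 0.1      # weights trained with plain Hebbian (CT) learning
    w_trace = rng.random(n_inputs) * 0.1   # weights trained with temporal trace learning
    trace = 0.0

    def smoothly_shifting_input(t: int) -> np.ndarray:
        """Toy stimulus that drifts gradually across the input array, so that consecutive
        presentations overlap spatially (the situation CT learning exploits)."""
        x = np.zeros(n_inputs)
        centre = (t // 4) % n_inputs
        x[[centre - 1, centre, (centre + 1) % n_inputs]] = 1.0
        return x

    for t in range(n_steps):
        x = smoothly_shifting_input(t)

        # Continuous transformation learning: purely Hebbian, driven by current activity only.
        y_ct = w_ct @ x
        w_ct += learning_rate * y_ct * x

        # Trace learning: the postsynaptic term is a decaying memory of recent activity.
        y_trace = w_trace @ x
        trace = (1 - eta) * y_trace + eta * trace
        w_trace += learning_rate * trace * x

        # Keep weight vectors bounded (a simple stand-in for weight normalisation).
        w_ct /= np.linalg.norm(w_ct)
        w_trace /= np.linalg.norm(w_trace)

    print("CT-trained weights peak at inputs:", np.argsort(w_ct)[-3:])
    print("trace-trained weights peak at inputs:", np.argsort(w_trace)[-3:])
    ```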

  10. Methods of choosing the best methods of building a dynamic visualization environment

    Directory of Open Access Journals (Sweden)

    В.А. Бородін

    2009-02-01

    This work proposes a method for choosing the optimal combination of techniques for building the visual image of dynamic scenes on the displays of real-time ANGS. The method determines, for each of the m software programs in the complex, the optimal share of use of each of the n techniques, so as to maximize the speed of rendering the visual image. The required ratios are calculated by reducing the problem to a linear programming problem. The work also presents the calculation of the optimal techniques for building the visual image of a dynamic scene for a specific task.
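
    The reduction to linear programming described above can be illustrated with a small hypothetical instance: choose usage fractions for three rendering methods to maximize expected speed subject to a resource budget. The speeds, memory costs, and budget below are invented for illustration only.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical reduction of the method-selection problem to linear programming:
    # x[i] is the fraction of frames rendered with method i; maximise expected speed
    # subject to the fractions summing to 1 and a memory budget.
    speed = np.array([40.0, 55.0, 30.0])     # frames/s of each method (illustrative)
    memory = np.array([3.0, 6.0, 1.0])       # memory cost per method (illustrative units)
    memory_budget = 4.0

    result = linprog(
        c=-speed,                             # linprog minimises, so negate to maximise speed
        A_ub=[memory], b_ub=[memory_budget],  # weighted memory use must stay within budget
        A_eq=[np.ones(3)], b_eq=[1.0],        # usage fractions sum to one
        bounds=[(0, 1)] * 3,
        method="highs",
    )
    print("optimal usage fractions:", np.round(result.x, 3))
    print("expected speed:", round(-result.fun, 1), "frames/s")
    ```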

  11. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system

    Science.gov (United States)

    Born, Jannis; Stringer, Simon M.

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning

  12. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning

  13. Learning style preferences and their influence on students' problem solving in kinematics observed by eye-tracking method

    Science.gov (United States)

    Kekule, Martina

    2017-01-01

    The article presents the eye-tracking method and its use for observing students while they solve problems in kinematics, specifically multiple-choice items from the TUG-K test by Robert Beichner. Moreover, students' preference for a visual way of learning is examined and discussed as a possible influencing factor. The Learning Style Inventory by Dunn, Dunn & Price was administered to students in order to find out their preferences. More than 20 high school and college students, about 20 years old, took part in the research. A preferred visual way of learning, in contrast to the other ways of learning (auditory, tactile, kinesthetic), shows a very slight correlation with the total score of the test, no correlation with the average fixation duration, and a slight correlation with the average fixation count per task and the average total visit duration per task.

  14. Visual Statistical Learning Works after Binding the Temporal Sequences of Shapes and Spatial Positions

    Directory of Open Access Journals (Sweden)

    Osamu Watanabe

    2011-05-01

    Full Text Available The human visual system can acquire the statistical structures in temporal sequences of object feature changes, such as changes in shape, color, and its combination. Here we investigate whether the statistical learning for spatial position and shape changes operates separately or not. It is known that the visual system processes these two types of information separately; the spatial information is processed in the parietal cortex, whereas object shapes and colors are detected in the temporal pathway, and, after that, we perceive bound information in the two streams. We examined whether the statistical learning operates before or after binding the shape and the spatial information by using the “re-paired triplet” paradigm proposed by Turk-Browne, Isola, Scholl, and Treat (2008). The result showed that observers acquired combined sequences of shape and position changes, but no statistical information in individual sequence was obtained. This finding suggests that the visual statistical learning works after binding the temporal sequences of shapes and spatial structures and would operate in the higher-order visual system; this is consistent with recent ERP (Abla & Okanoya, 2009) and fMRI (Turk-Browne, Scholl, Chun, & Johnson, 2009) studies.

  15. Enhance students’ motivation to learn programming by using direct visual feed-back

    DEFF Research Database (Denmark)

    Kofoed, Lise B.; Reng, Lars

    2011-01-01

    The technical subjects chosen are within programming, using image-processing algorithms as a means to provide direct visual feedback for learning basic C/C++. The pedagogical approach is within a PBL framework and is based on dialogue and collaborative learning. At the same time the intention… was to establish a community of practice among the students and the teachers. A direct visual feedback and a higher level of merging between the artistic, creative, and technical lectures have been the focus of motivation, as well as a complete restructuring of the elements of the technical lectures. The paper… abilities and enhanced balance between the interdisciplinary disciplines of the study are analyzed. The conclusion is that the technical courses have got a higher status for the students. The students now see it as a very important basis for their further study, and their learning results have improved…

  16. Literature Review of Applying Visual Method to Understand Mathematics

    Directory of Open Access Journals (Sweden)

    Yu Xiaojuan

    2015-01-01

    Full Text Available As a new method to understand mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method in the understanding of mathematics. It also reviews the existing research, gives a visual demonstration of Euler’s formula, introduces the application of this method in solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention for its application.

  17. Can visual illusions be used to facilitate sport skill learning?

    NARCIS (Netherlands)

    Canal Bruland, R.; van der Meer, Y.; Moerman, J.

    2016-01-01

    Recently it has been reported that practicing putting with visual illusions that make the hole appear larger than it actually is leads to longer-lasting performance improvements. Interestingly, from a motor control and learning perspective, it may be possible to actually predict the opposite to

  18. Advances and limitations of visual conditioning protocols in harnessed bees.

    Science.gov (United States)

    Avarguès-Weber, Aurore; Mota, Theo

    2016-10-01

    Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has been traditionally studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters such as harnessing method, nature and duration of visual stimulation, number of trials, and inter-trial intervals, among others. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    Science.gov (United States)

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole cell screens, individual proteins, physicochemical properties as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics or data sets, Deep Neural Networks (DNN) ranked higher than SVM, which in turn was ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar type plots indicates when models are inferior or perhaps over trained. These results also suggest the need for assessing deep learning further
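
    As a hedged illustration of this kind of comparison (not the paper's data sets, FCFP6 descriptors, or models), the sketch below trains a small multilayer perceptron, an SVM, and a naive Bayes classifier on synthetic binary fingerprint-like data and reports AUC, F1 score, Cohen's kappa, and the Matthews correlation coefficient with scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import roc_auc_score, f1_score, cohen_kappa_score, matthews_corrcoef

# Synthetic binary data standing in for molecular fingerprints (not FCFP6).
X, y = make_classification(n_samples=1000, n_features=256, n_informative=40, random_state=0)
X = (X > 0).astype(float)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DNN (MLP)": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "Bernoulli NB": BernoulliNB(),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    print(name,
          f"AUC={roc_auc_score(y_te, prob):.3f}",
          f"F1={f1_score(y_te, pred):.3f}",
          f"kappa={cohen_kappa_score(y_te, pred):.3f}",
          f"MCC={matthews_corrcoef(y_te, pred):.3f}")
```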

  1. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    Science.gov (United States)

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
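
    One of the transformations described, building a clustered view from the hierarchy annotated in node names, can be sketched in a few lines. The op names below are hypothetical, and the single-level grouping is a deliberate simplification of what the actual visualizer does.

```python
from collections import defaultdict

# Hypothetical flat list of op names, as they might appear in a dataflow graph.
ops = [
    "input/Placeholder", "conv1/weights", "conv1/Conv2D", "conv1/Relu",
    "conv2/weights", "conv2/Conv2D", "conv2/Relu", "fc/MatMul", "fc/Softmax",
]

# Build a one-level clustered view from the name-scope prefixes,
# mirroring the idea of grouping nodes by their annotated hierarchy.
clusters = defaultdict(list)
for name in ops:
    scope, _, op = name.partition("/")
    clusters[scope].append(op or scope)

for scope, members in clusters.items():
    print(f"{scope}: {members}")
```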

  2. Linguistic labels, dynamic visual features, and attention in infant category learning.

    Science.gov (United States)

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

    How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Weakly supervised visual dictionary learning by harnessing image attributes.

    Science.gov (United States)

    Gao, Yue; Ji, Rongrong; Liu, Wei; Dai, Qionghai; Hua, Gang

    2014-12-01

    Bag-of-features (BoFs) representation has been extensively applied to deal with various computer vision applications. To extract discriminative and descriptive BoF, one important step is to learn a good dictionary to minimize the quantization loss between local features and codewords. While most existing visual dictionary learning approaches are engaged with unsupervised feature quantization, the latest trend has turned to supervised learning by harnessing the semantic labels of images or regions. However, such labels are typically too expensive to acquire, which restricts the scalability of supervised dictionary learning approaches. In this paper, we propose to leverage image attributes to weakly supervise the dictionary learning procedure without requiring any actual labels. As a key contribution, our approach establishes a generative hidden Markov random field (HMRF), which models the quantized codewords as the observed states and the image attributes as the hidden states, respectively. Dictionary learning is then performed by supervised grouping of the observed states, where the supervised information stems from the hidden states of the HMRF. In such a way, the proposed dictionary learning approach incorporates the image attributes to learn a semantic-preserving BoF representation without any genuine supervision. Experiments in large-scale image retrieval and classification tasks corroborate that our approach significantly outperforms the state-of-the-art unsupervised dictionary learning approaches.
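
    For context, the sketch below shows a plain unsupervised bag-of-features codebook learned with k-means, i.e., the kind of baseline the HMRF-based approach improves upon; it is not the weakly supervised method itself, and the local descriptors are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy local descriptors (e.g., SIFT-like vectors); purely synthetic.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 64))

# Plain unsupervised codebook learning (not the HMRF method above):
# quantize local features into K codewords, then build a bag-of-features histogram.
K = 32
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(descriptors)

def bof_histogram(image_descriptors):
    """Normalized codeword histogram for one image's local descriptors."""
    words = kmeans.predict(image_descriptors)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

print(bof_histogram(rng.normal(size=(150, 64))))
```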

  4. Robustness and prediction accuracy of machine learning for objective visual quality assessment

    OpenAIRE

    HINES, ANDREW

    2014-01-01

    PUBLISHED Lisbon, Portugal Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specific...

  5. Blended Learning in the Visual Communications Classroom: Student Reflections on a Multimedia Course

    Science.gov (United States)

    George-Palilonis, Jennifer; Filak, Vincent

    2009-01-01

    Advances in digital technology and a rapidly evolving media landscape continue to dramatically change teaching and learning. Among these changes is the emergence of multimedia teaching and learning tools, online degree programs, and hybrid classes that blend traditional and digital content delivery. At the same time, visual communication programs…

  6. Magnifying visual target information and the role of eye movements in motor sequence learning.

    Science.gov (United States)

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing on eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes in the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Figure analysis: A teaching technique to promote visual literacy and active learning.

    Science.gov (United States)

    Wiles, Amy M

    2016-07-08

    Learning often improves when active learning techniques are used in place of traditional lectures. For many of these techniques, however, students are expected to apply concepts that they have already grasped. A challenge, therefore, is how to incorporate active learning into the classroom of courses with heavy content, such as molecular-based biology courses. An additional challenge is that visual literacy is often overlooked in undergraduate science education. To address both of these challenges, a technique called figure analysis was developed and implemented in three different levels of undergraduate biology courses. Here, students learn content while gaining practice in interpreting visual information by discussing figures with their peers. Student groups also make connections between new and previously learned concepts on their own while in class. The instructor summarizes the material for the class only after students grapple with it in small groups. Students reported a preference for learning by figure analysis over traditional lecture, and female students in particular reported increased confidence in their analytical abilities. There is not a technology requirement for this technique; therefore, it may be utilized both in classrooms and in nontraditional spaces. Additionally, the amount of preparation required is comparable to that of a traditional lecture. © 2016 by The International Union of Biochemistry and Molecular Biology, 44(4):336-344, 2016. © 2016 The International Union of Biochemistry and Molecular Biology.

  8. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (James) Kwon; Tzu-Liang (Bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  9. Machine learning methods for planning

    CERN Document Server

    Minton, Steven

    1993-01-01

    Machine Learning Methods for Planning provides information pertinent to learning methods for planning and scheduling. This book covers a wide variety of learning methods and learning architectures, including analogical, case-based, decision-tree, explanation-based, and reinforcement learning. Organized into 15 chapters, this book begins with an overview of planning and scheduling and describes some representative learning systems that have been developed for these tasks. This text then describes a learning apprentice for calendar management. Other chapters consider the problem of temporal credi

  10. Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.

    Science.gov (United States)

    Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara

    2017-01-01

    Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.

  11. The Effects of Online Interactions on the Relationship between Learning-Related Anxiety and Intention to Persist among E-Learning Students with Visual Impairment

    Science.gov (United States)

    Oh, Yunjin; Lee, Soon Min

    2016-01-01

    This study explored whether learning-related anxiety would negatively affect intention to persist with e-learning among students with visual impairment, and examined the roles of three online interactions in the relationship between learning-related anxiety and intention to persist with e-learning. For this study, a convenience sample of…

  12. Enhancing Nuclear Newcomer Training with 3D Visualization Learning Tools

    International Nuclear Information System (INIS)

    Gagnon, V.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools to enhance these training programmes focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author)

  13. Effects of Anodal Transcranial Direct Current Stimulation on Visually Guided Learning of Grip Force Control

    Directory of Open Access Journals (Sweden)

    Tamas Minarik

    2015-03-01

    Full Text Available Anodal transcranial Direct Current Stimulation (tDCS) has been shown to be an effective non-invasive brain stimulation method for improving cognitive and motor functioning in patients with neurological deficits. tDCS over motor cortex (M1), for instance, facilitates motor learning in stroke patients. However, the literature on anodal tDCS effects on motor learning in healthy participants is inconclusive, and the effects of tDCS on visuo-motor integration are not well understood. In the present study we examined whether tDCS over the contralateral motor cortex enhances learning of grip-force output in a visually guided feedback task in young and neurologically healthy volunteers. Twenty minutes of 1 mA anodal tDCS were applied over the primary motor cortex (M1) contralateral to the dominant (right) hand, during the first half of a 40 min power-grip task. This task required the control of a visual signal by modulating the strength of the power-grip for six seconds per trial. Each participant completed a two-session sham-controlled crossover protocol. The stimulation conditions were counterbalanced across participants and the sessions were one week apart. Performance measures comprised time-on-target and target-deviation, and were calculated for the periods of stimulation (or sham) and during the afterphase, respectively. Statistical analyses revealed significant performance improvements over the stimulation and the afterphase, but this learning effect was not modulated by tDCS condition. This suggests that the form of visuomotor learning taking place in the present task was not sensitive to neurostimulation. These null effects, together with similar reports for other types of motor tasks, lead to the proposition that tDCS facilitation of motor learning might be restricted to cases or situations where the motor system is challenged, such as motor deficits, advanced age, or very high task demand.

  14. Optimizing ChIP-seq peak detectors using visual labels and supervised machine learning.

    Science.gov (United States)

    Hocking, Toby Dylan; Goerner-Potvin, Patricia; Morin, Andreanne; Shao, Xiaojian; Pastinen, Tomi; Bourque, Guillaume

    2017-02-15

    Many peak detection algorithms have been proposed for ChIP-seq data analysis, but it is not obvious which algorithm and what parameters are optimal for any given dataset. In contrast, regions with and without obvious peaks can be easily labeled by visual inspection of aligned read counts in a genome browser. We propose a supervised machine learning approach for ChIP-seq data analysis, using labels that encode qualitative judgments about which genomic regions contain or do not contain peaks. The main idea is to manually label a small subset of the genome, and then learn a model that makes consistent peak predictions on the rest of the genome. We created 7 new histone mark datasets with 12 826 visually determined labels, and analyzed 3 existing transcription factor datasets. We observed that default peak detection parameters yield high false positive rates, which can be reduced by learning parameters using a relatively small training set of labeled data from the same experiment type. We also observed that labels from different people are highly consistent. Overall, these data indicate that our supervised labeling method is useful for quantitatively training and testing peak detection algorithms. Labeled histone mark data http://cbio.ensmp.fr/~thocking/chip-seq-chunk-db/ , R package to compute the label error of predicted peaks https://github.com/tdhock/PeakError. toby.hocking@mail.mcgill.ca or guil.bourque@mcgill.ca. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
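
    The idea of scoring peak predictions against visually determined labels can be sketched as follows. This is a simplified illustration with made-up coordinates and two label types, not the PeakError package referenced in the abstract.

```python
def label_errors(labels, peaks):
    """Count label errors for predicted peaks.

    labels: list of (start, end, kind) with kind in {"peaks", "noPeaks"}.
    peaks:  list of (start, end) predicted peak intervals.
    A 'noPeaks' region is an error if any peak overlaps it (false positive);
    a 'peaks' region is an error if no peak overlaps it (false negative).
    """
    def overlaps(region, peak):
        return peak[0] < region[1] and region[0] < peak[1]

    fp = fn = 0
    for start, end, kind in labels:
        hit = any(overlaps((start, end), p) for p in peaks)
        if kind == "noPeaks" and hit:
            fp += 1
        elif kind == "peaks" and not hit:
            fn += 1
    return fp, fn

# Example with made-up genomic coordinates.
labels = [(100, 200, "peaks"), (300, 400, "noPeaks"), (500, 600, "peaks")]
peaks = [(150, 180), (320, 340)]
print(label_errors(labels, peaks))   # (1, 1): one false positive, one false negative
```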

  15. It's all connected: Pathways in visual object recognition and early noun learning.

    Science.gov (United States)

    Smith, Linda B

    2013-11-01

    A developmental pathway may be defined as the route, or chain of events, through which a new structure or function forms. For many human behaviors, including object name learning and visual object recognition, these pathways are often complex and multicausal and include unexpected dependencies. This article presents three principles of development that suggest the value of a developmental psychology that explicitly seeks to trace these pathways and uses empirical evidence on developmental dependencies among motor development, action on objects, visual object recognition, and object name learning in 12- to 24-month-old infants to make the case. The article concludes with a consideration of the theoretical implications of this approach. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  16. Multivariate nonparametric regression and visualization with R and applications to finance

    CERN Document Server

    Klemelä, Jussi

    2014-01-01

    A modern approach to statistical learning and its applications through visualization methods With a unique and innovative presentation, Multivariate Nonparametric Regression and Visualization provides readers with the core statistical concepts to obtain complete and accurate predictions when given a set of data. Focusing on nonparametric methods to adapt to the multiple types of data-generating mechanisms, the book begins with an overview of classification and regression. The book then introduces and examines various tested and proven visualization techniques for learning samples and functio
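
    As a small example of the kind of estimator the book covers (one classic nonparametric regressor, not code from the book), the sketch below implements one-dimensional Nadaraya-Watson kernel regression with a Gaussian kernel.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth=0.3):
    """Nadaraya-Watson kernel regression with a Gaussian kernel (1-D sketch)."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # kernel weights per query point
    return (w @ y_train) / w.sum(axis=1)             # locally weighted average

# Toy noisy sine data and a few query points.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 100))
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)
x_new = np.linspace(0, 2 * np.pi, 5)
print(nadaraya_watson(x, y, x_new))
```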

  17. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained by fusing infrared and low-light-level images, so that they contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is applied blindly, the perception of the scene information will be seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are incorporated into traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also preserve rich natural information of the scenes.
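
    A naive false-color fusion of the two bands is sketched below to make the starting point concrete; the channel mapping is a common baseline chosen for illustration, not the visual-perception-based method proposed in the paper.

```python
import numpy as np

def simple_color_fusion(lll, ir):
    """Naive false-color fusion of low-light-level (lll) and infrared (ir) images.

    Both inputs are 2-D float arrays scaled to [0, 1]. The infrared band drives
    the red channel, the low-light band the green channel, and their difference
    the blue channel -- a common starting point, not the method in the paper.
    """
    rgb = np.stack([ir, lll, np.clip(lll - ir, 0.0, 1.0)], axis=-1)
    return np.clip(rgb, 0.0, 1.0)

# Toy example with random "sensor" images.
rng = np.random.default_rng(0)
lll = rng.random((4, 4))
ir = rng.random((4, 4))
print(simple_color_fusion(lll, ir).shape)   # (4, 4, 3)
```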

  18. ROBUSTNESS AND PREDICTION ACCURACY OF MACHINE LEARNING FOR OBJECTIVE VISUAL QUALITY ASSESSMENT

    OpenAIRE

    Hines, Andrew; Kendrick, Paul; Barri, Adriaan; Narwaria, Manish; Redi, Judith A.

    2014-01-01

    Machine Learning (ML) is a powerful tool to support the development of objective visual quality assessment metrics, serving as a substitute model for the perceptual mechanisms acting in visual quality appreciation. Nevertheless, the reliability of ML-based techniques within objective quality assessment metrics is often questioned. In this study, the robustness of ML in supporting objective quality assessment is investigated, specifically when the feature set adopted for prediction is suboptim...

  19. Visual Input Enhancement and Grammar Learning: A Meta-Analytic Review

    Science.gov (United States)

    Lee, Sang-Ki; Huang, Hung-Tzu

    2008-01-01

    Effects of pedagogical interventions with visual input enhancement on grammar learning have been investigated by a number of researchers during the past decade and a half. The present review delineates this research domain via a systematic synthesis of 16 primary studies (comprising 20 unique study samples) retrieved through an exhaustive…

  20. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Directory of Open Access Journals (Sweden)

    Jannis Born

    Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior

  1. A trajectory-preserving synchronization method for collaborative visualization.

    Science.gov (United States)

    Li, Lewis W F; Li, Frederick W B; Lau, Rynson W H

    2006-01-01

    In the past decade, a lot of research work has been conducted to support collaborative visualization among remote users over the networks, allowing them to visualize and manipulate shared data for problem solving. There are many applications of collaborative visualization, such as oceanography, meteorology and medical science. To facilitate user interaction, a critical system requirement for collaborative visualization is to ensure that remote users will perceive a synchronized view of the shared data. Failing this requirement, the user's ability in performing the desirable collaborative tasks will be affected. In this paper, we propose a synchronization method to support collaborative visualization. It considers how interaction with dynamic objects is perceived by application participants under the existence of network latency, and remedies the motion trajectory of the dynamic objects. It also handles the false positive and false negative collision detection problems. The new method is particularly well designed for handling content changes due to unpredictable user interventions or object collisions. We demonstrate the effectiveness of our method through a number of experiments.

  2. The Inductive Method of Teaching Visual Art Criticism.

    Science.gov (United States)

    Clements, Robert D.

    1979-01-01

    The author describes how the true principles of the scientific inductive method are not opposed to the principles of teaching visual art criticism, and suggests that the inductive method of teaching visual art criticism strips it of its mystique in order to make clear its vital role in intellectual development. (KC)

  3. Exploring Antecedents of Performance Differences on Visual and Verbal Test Items: Learning Styles versus Aptitude

    Science.gov (United States)

    Bacon, Donald R.; Hartley, Steven W.

    2015-01-01

    Many educators and researchers have suggested that some students learn more effectively with visual stimuli (e.g., pictures, graphs), whereas others learn more effectively with verbal information (e.g., text) (Felder & Brent, 2005). In two studies, the present research seeks to improve popular self-reported (indirect) learning style measures…

  4. Spatial discrimination and visual discrimination

    DEFF Research Database (Denmark)

    Haagensen, Annika M. J.; Grand, Nanna; Klastrup, Signe

    2013-01-01

    Two methods investigating learning and memory in juvenile Gottingen minipigs were evaluated for potential use in preclinical toxicity testing. Twelve minipigs were tested using a spatial hole-board discrimination test including a learning phase and two memory phases. Five minipigs were tested… in a visual discrimination test. The juvenile minipigs were able to learn the spatial hole-board discrimination test and showed improved working and reference memory during the learning phase. Performance in the memory phases was affected by the retention intervals, but the minipigs were able to remember… the concept of the test in both memory phases. Working memory and reference memory were significantly improved in the last trials of the memory phases. In the visual discrimination test, the minipigs learned to discriminate between the three figures presented to them within 9-14 sessions. For the memory test…

  5. Reliability and validity of the rey visual design learning test in primary school children

    NARCIS (Netherlands)

    Wilhelm, P.

    2004-01-01

    The Rey Visual Design Learning Test (Rey, 1964, in Spreen & Strauss, 1991) assesses immediate memory span, new learning and recognition for non-verbal material. Three studies are presented that focused on the reliability and validity of the RVDLT in primary school children. Test-retest reliability

  6. Small Private Online Research: A Proposal for A Numerical Methods Course Based on Technology Use and Blended Learning

    Science.gov (United States)

    Cepeda, Francisco Javier Delgado

    2017-01-01

    This work presents a proposed blended learning model for a numerical methods course that evolved from traditional teaching into a research lab in scientific visualization. The blended learning approach sets a differentiated and flexible scheme based on a mobile setup and face-to-face sessions centered on a net of research challenges. Model is…

  7. Creating Engaging Online Learning Material with the JSAV JavaScript Algorithm Visualization Library

    Science.gov (United States)

    Karavirta, Ville; Shaffer, Clifford A.

    2016-01-01

    Data Structures and Algorithms are a central part of Computer Science. Due to their abstract and dynamic nature, they are a difficult topic to learn for many students. To alleviate these learning difficulties, instructors have turned to algorithm visualizations (AV) and AV systems. Research has shown that especially engaging AVs can have an impact…

  8. Pretraining Cortical Thickness Predicts Subsequent Perceptual Learning Rate in a Visual Search Task.

    Science.gov (United States)

    Frank, Sebastian M; Reavis, Eric A; Greenlee, Mark W; Tse, Peter U

    2016-03-01

    We report that preexisting individual differences in the cortical thickness of brain areas involved in a perceptual learning task predict the subsequent perceptual learning rate. Participants trained in a motion-discrimination task involving visual search for a "V"-shaped target motion trajectory among inverted "V"-shaped distractor trajectories. Motion-sensitive area MT+ (V5) was functionally identified as critical to the task: after 3 weeks of training, activity increased in MT+ during task performance, as measured by functional magnetic resonance imaging. We computed the cortical thickness of MT+ from anatomical magnetic resonance imaging volumes collected before training started, and found that it significantly predicted subsequent perceptual learning rates in the visual search task. Participants with thicker neocortex in MT+ before training learned faster than those with thinner neocortex in that area. A similar association between cortical thickness and training success was also found in posterior parietal cortex (PPC). © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. The Effects of Single and Dual Coded Multimedia Instructional Methods on Chinese Character Learning

    Science.gov (United States)

    Wang, Ling

    2013-01-01

    Learning Chinese characters is a difficult task for adult English native speakers due to the significant differences between the Chinese and English writing system. The visuospatial properties of Chinese characters have inspired the development of instructional methods using both verbal and visual information based on the Dual Coding Theory. This…

  10. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex

    OpenAIRE

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and p...

  11. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
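
    The general idea of deriving plot components from successive levels of a dimension hierarchy can be illustrated with a toy pandas aggregation; the column names and data are invented, and this is not the patented method itself.

```python
import pandas as pd

# Toy dataset: a measure ("sales") and a two-level dimension hierarchy (region > city).
df = pd.DataFrame({
    "region": ["East", "East", "West", "West"],
    "city":   ["Boston", "NYC", "LA", "Seattle"],
    "sales":  [120, 340, 260, 180],
})

# First plot component: aggregate the measure at the top level of the hierarchy.
by_region = df.groupby("region")["sales"].sum()

# Second plot component: drill down to the next level of the hierarchy.
by_city = df.groupby(["region", "city"])["sales"].sum()

print(by_region)
print(by_city)
```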

  12. Mobile Learning and the Visual Web, Oh My! Nutrition Education in the 21st Century

    Science.gov (United States)

    Schuster, Ellen

    2012-01-01

    Technology is rapidly changing how our program participants learn in school and for their personal improvement. Extension educators who deliver nutrition program will want to be aware of the technology trends that are driving these changes. Blended learning, mobile learning, the visual Web, and the gamification of health are approaches to consider…

  13. Concurrent Unimodal Learning Enhances Multisensory Responses of Bi-Directional Crossmodal Learning in Robotic Audio-Visual Tracking

    DEFF Research Database (Denmark)

    Shaikh, Danish; Bodenhagen, Leon; Manoonpong, Poramate

    2018-01-01

    modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied as a non-holonomic robotic agent that must orient towards a moving audio-visual target. The circuit continuously learns the best...

  14. Learning about Complex Multi-Stakeholder Issues: Assessing the Visual Problem Appraisal

    NARCIS (Netherlands)

    Witteveen, L.M.; Put, M.; Leeuwis, C.

    2010-01-01

    This paper presents an evaluation of the visual problem appraisal (VPA) learning environment in higher education. The VPA has been designed for the training of competences that are required in complex stakeholder settings in relation to sustainability issues. The design of VPA incorporates a

  15. A systematic review on ‘Foveal Crowding’ in visually impaired children and perceptual learning as a method to reduce Crowding

    Directory of Open Access Journals (Sweden)

    Huurneman Bianca

    2012-07-01

    Full Text Available Abstract Background: This systematic review gives an overview of foveal crowding (the inability to recognize objects due to surrounding nearby contours in foveal vision) and possible interventions. Foveal crowding can have a major effect on reading rate and on deciphering small pieces of information from busy visual scenes. Three specific groups experience more foveal crowding than adults with normal vision (NV): 1) children with NV, 2) visually impaired (VI) children and adults, and 3) children with cerebral visual impairment (CVI). The extent and magnitude of foveal crowding as well as interventions aimed at reducing crowding were investigated in this review. The twofold goal of this review is: [A] to compare foveal crowding in children with NV, VI children and adults, and CVI children; and [B] to compare interventions to reduce crowding. Methods: Three electronic databases were used to conduct the literature search: PubMed, PsycINFO (Ovid), and Cochrane. Additional studies were identified by contacting experts. Search terms included visual perception, contour interaction, crowding, crowded, and contour interactions. Results: Children with normal vision show an extent of contour interaction over an area 1.5–3× as large as that seen in adults with NV. The magnitude of contour interaction normally ranges between 1–2 lines on an acuity chart, and this magnitude is even larger when stimuli are arranged in a circular configuration. Adults with congenital nystagmus (CN) show interaction areas that are 2× larger than those seen in adults with NV. The magnitude of the crowding effect is also 2× as large in individuals with CN as in individuals with NV. Finally, children with CVI experience a magnitude of the crowding effect that is 3× the size of that experienced by adults with NV. Conclusions: The methodological heterogeneity and the diversity in paradigms used to measure crowding made it impossible to conduct a meta-analysis. This is the first systematic review to

  16. The Effect of Teaching Methods and Learning Styles on Students’ English Achievement (An Experimental Study at Junior High School 1 Pasangkayu)

    Directory of Open Access Journals (Sweden)

    Syahrul Munir

    2019-10-01

    Full Text Available The objectives of the research are to determine the effects of teaching methods (STAD and jigsaw) and learning styles (visual, auditory, and kinesthetic) on students’ English achievement. This research is an experimental study conducted at Junior High School Pasangkayu in 2014 with 213 students, from whom a sample was selected by stratified random sampling (n = 68). The results of the research are as follows: (1) the English achievement of students taught with STAD is better than that of students taught with jigsaw; (2) there is no significant difference in English achievement among visual, auditory, and kinesthetic students; (3) there is a significant interaction effect between teaching method and learning style on students’ English achievement. The research also finds that for visual students, the English achievement of those taught with STAD is better than that of those taught with jigsaw; for auditory students, the English achievement of those taught with jigsaw is better than that of those taught with STAD; and for kinesthetic students, the English achievement of those taught with STAD is better than that of those taught with jigsaw. To sum up, STAD is more effective than jigsaw in improving students’ English achievement. STAD is suitable for improving the English achievement of visual and kinesthetic students, and jigsaw is suitable for improving the English achievement of auditory students.

  17. Cooperative Learning as a Democratic Learning Method

    Science.gov (United States)

    Erbil, Deniz Gökçe; Kocabas, Ayfer

    2018-01-01

    In this study, the effects of applying the cooperative learning method on the students' attitude toward democracy in an elementary 3rd-grade life studies course was examined. Over the course of 8 weeks, the cooperative learning method was applied with an experimental group, and traditional methods of teaching life studies in 2009, which was still…

  18. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    Science.gov (United States)

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  19. Matching Learning Style to Instructional Method: Effects on Comprehension

    Science.gov (United States)

    Rogowsky, Beth A.; Calhoun, Barbara M.; Tallal, Paula

    2015-01-01

    While it is hypothesized that providing instruction based on individuals' preferred learning styles improves learning (i.e., reading for visual learners and listening for auditory learners, also referred to as the "meshing hypothesis"), after a critical review of the literature Pashler, McDaniel, Rohrer, and Bjork (2008) concluded that…

  20. Qualitative methods in workplace learning

    OpenAIRE

    Fabritius, Hannele

    2015-01-01

    Methods of learning in the workplace are introduced. The methods are connected to competence development and to the process of conducting development discussions in a dialogical way. The tools developed and applied are a fourfold table, a cycle of work identity, a plan of personal development targets, a learning meeting, and a learning map. The methods introduced aim to improve learning at work.

  1. Influence of visual observational conditions on tongue motor learning

    DEFF Research Database (Denmark)

    Kothari, Mohit; Liu, Xuimei; Baad-Hansen, Lene

    2016-01-01

    To investigate the impact of visual observational conditions on performance during a standardized tongue-protrusion training (TPT) task and to evaluate subject-based reports of helpfulness, disturbance, pain, and fatigue due to the observational conditions on 0-10 numerical rating scales. Forty...... regarding the level of disturbance, pain or fatigue. Self-observation of tongue-training facilitated behavioral aspects of tongue motor learning compared with model-observation but not compared with control....

  2. State of the art/science: Visual methods and information behavior research

    DEFF Research Database (Denmark)

    Hartel, Jenna; Sonnenwald, Diane H.; Lundh, Anna

    2012-01-01

    This panel reports on methodological innovation now underway as information behavior scholars begin to experiment with visual methods. The session launches with a succinct introduction to visual methods by Jenna Hartel and then showcases three exemplar visual research designs. First, Dianne Sonne...... will have gained: knowledge of the state of the art/science of visual methods in information behavior research; an appreciation for the richness the approach brings to the specialty; and a platform to take new visual research designs forward....

  3. Design of a Braille Learning Application for Visually Impaired Students in Bangladesh.

    Science.gov (United States)

    Nahar, Lutfun; Jaafar, Azizah; Ahamed, Eistiak; Kaish, A B M A

    2015-01-01

    Visually impaired students (VIS) are unable to get visual information, which has made their learning process complicated. This paper discusses the overall situation of VIS in Bangladesh and identifies major challenges that they are facing in getting an education. The Braille system is followed to educate blind students in Bangladesh. However, the lack of Braille-based educational resources and technological solutions has made the learning process lengthy and complicated for VIS. As a developing country, Bangladesh cannot afford the costly Braille-related technological tools for VIS. Therefore, a mobile phone based Braille application, "mBRAILLE", for the Android platform is designed to provide an easy Braille learning technology for VIS in Bangladesh. The proposed design is evaluated by experts in assistive technology for students with disabilities, and by advanced learners of Braille. The application aims to provide a Bangla and English Braille learning platform for VIS. In this paper, we depict the iterative (participatory) design of the application along with a preliminary evaluation with 5 blind subjects, and 1 sighted and 2 blind experts. The results show that the design scored an overall satisfaction level of 4.53 out of 5 by all respondents, indicating that our design is ready for the next step of development.

  4. Learning Visual Forward Models to Compensate for Self-Induced Image Motion.

    NARCIS (Netherlands)

    Ghadirzadeh, A.; Kootstra, G.W.; Maki, A.; Björkman, M.

    2014-01-01

    Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high-dimensionality of sensory data and complex

  5. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    Science.gov (United States)

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.
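
    As a rough illustration of the general idea (estimating a layout property of a new graph from graphs whose layouts have already been computed), the sketch below uses a toy degree-histogram similarity in place of the paper's graph kernels and a nearest-neighbour average in place of its estimator; the graphs and metric values are invented placeholders.

      # Sketch: estimate a layout aesthetic metric for a new graph from
      # precomputed examples. The degree-histogram similarity is a toy
      # stand-in, not the graph kernels developed in the paper.
      import numpy as np
      import networkx as nx

      def degree_hist(g, bins=16):
          """Normalized histogram of node degrees (toy graph signature)."""
          degrees = [d for _, d in g.degree()]
          hist, _ = np.histogram(degrees, bins=bins, range=(0, bins))
          return hist / max(hist.sum(), 1)

      def toy_kernel(g1, g2):
          """Similarity in [0, 1]: overlap of the two degree histograms."""
          return float(np.minimum(degree_hist(g1), degree_hist(g2)).sum())

      # Hypothetical training set: graphs whose layouts were fully computed,
      # each paired with a measured aesthetic metric (values are made up).
      train_graphs = [nx.gnp_random_graph(60, p, seed=i)
                      for i, p in enumerate((0.05, 0.10, 0.20, 0.30))]
      train_metric = np.array([0.9, 0.7, 0.4, 0.2])

      def estimate_metric(new_graph, k=2):
          """Similarity-weighted k-nearest-neighbour estimate of the metric."""
          sims = np.array([toy_kernel(new_graph, g) for g in train_graphs])
          idx = sims.argsort()[-k:]
          weights = sims[idx] / max(sims[idx].sum(), 1e-9)
          return float((weights * train_metric[idx]).sum())

      print(estimate_metric(nx.gnp_random_graph(60, 0.12, seed=99)))

    In this sketch the expensive step (computing layouts and their aesthetic metrics) is done only for the training graphs, which mirrors the paper's motivation for estimating rather than computing layout quality.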

  6. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of

  7. Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.

    Science.gov (United States)

    Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David

    2016-03-21

    Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol-designed to induce activity in V1, without modulation from visual awareness-to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    Science.gov (United States)

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  9. Learning representation hierarchies by sharing visual features: a computational investigation of Persian character recognition with unsupervised deep learning.

    Science.gov (United States)

    Sadeghi, Zahra; Testolin, Alberto

    2017-08-01

    In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
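
    The general recipe described here (unsupervised feature learning followed by a linear read-out) can be sketched with scikit-learn, with a single restricted Boltzmann machine standing in for the deep belief network and random binary images standing in for Persian characters; this is a generic illustration under those assumptions, not the authors' model.

      # Sketch: unsupervised feature learning + linear read-out.
      # A single BernoulliRBM stands in for the paper's deep belief network;
      # the binary "glyphs" and labels below are random placeholders.
      import numpy as np
      from sklearn.neural_network import BernoulliRBM
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import Pipeline

      rng = np.random.default_rng(0)
      X = (rng.random((500, 64)) > 0.5).astype(float)   # fake 8x8 binary glyphs
      y = rng.integers(0, 10, size=500)                  # fake class labels

      model = Pipeline([
          # Unsupervised stage: learns latent visual features from pixels alone.
          ("rbm", BernoulliRBM(n_components=100, learning_rate=0.05,
                               n_iter=20, random_state=0)),
          # Supervised read-out: a linear classifier on the learned features.
          ("readout", LogisticRegression(max_iter=1000)),
      ])
      model.fit(X, y)
      print("training accuracy:", model.score(X, y))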

  10. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly than those above, and therefore imposes strong sparsity and...
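
    The abstract only outlines the regularizer, so the following is a guessed form consistent with the description (weights below a threshold are penalized strongly, weights above it only "leakily"); the paper's exact LCNR definition may differ.

      # Sketch of a leaky capped penalty consistent with the description:
      # strong L1-style pressure on small weights, weak ("leaky") pressure
      # on large ones. This is an assumed form, not necessarily the paper's.
      import numpy as np

      def leaky_capped_penalty(w, theta=0.1, lam=1.0, eps=0.05):
          """Charge lam per unit of magnitude below the threshold theta;
          above it, the extra magnitude is charged only at the leak rate eps."""
          a = np.abs(w)
          capped = np.minimum(a, theta)        # part charged at rate lam
          overflow = np.maximum(a - theta, 0)  # part charged at rate eps
          return lam * capped.sum() + eps * overflow.sum()

      w = np.array([0.02, -0.05, 0.8, -1.5])
      # Marginal cost is lam for small weights and eps for large ones.
      print(leaky_capped_penalty(w))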

  11. The Effect of Visual and Etymological Treatments on Learning Decomposable Idioms among EFL Learners

    Directory of Open Access Journals (Sweden)

    Nassim Golaghaei

    2015-09-01

    Full Text Available The present study endeavors to investigate the impact of visual and etymological treatments on learning idioms among English language learners. To fulfill the purpose of the study, seventy-nine intermediate students at Rooz Academy Language School in Babol were selected from among a total of 116 learners based on their performance on the Longman Complete Course for the TOEFL Test. The students were then assigned to three experimental groups. Initially, a pre-test of idiomatic expressions including 48 idiomatic items was administered to the participants in all groups. During the instructional period, the groups were taught a set of abnormally decomposable idioms through different treatments, namely visual, etymological, and a combination of visual-etymological elaboration. At the end of the instructional period, the participants in all groups were given a post-test which was the same as the pre-test. The design of this study is quasi-experimental. The data obtained were analyzed using a one-way ANOVA. The results of the data analysis revealed that the etymological treatment was more effective than visual aids for learning idioms among intermediate English language learners; however, the combined visual-etymological treatment was the most effective of all. The findings of this study have implications for EFL teachers, students, and materials developers.

  12. Quality-Related Monitoring and Grading of Granulated Products by Weibull-Distribution Modeling of Visual Images with Semi-Supervised Learning.

    Science.gov (United States)

    Liu, Jinping; Tang, Zhaohui; Xu, Pengfei; Liu, Wenzhong; Zhang, Jin; Zhu, Jianyong

    2016-06-29

    The topic of online product quality inspection (OPQI) with smart visual sensors is attracting increasing interest in both the academic and industrial communities on account of the natural connection between the visual appearance of products and their underlying qualities. Visual images captured from granulated products (GPs), e.g., cereal products or fabric textiles, are composed of a large number of independent particles or stochastically stacked, locally homogeneous fragments, whose analysis and understanding remains challenging. A method of image-statistical-modeling-based OPQI for GP quality grading and monitoring, using a Weibull distribution (WD) model with a semi-supervised learning classifier, is presented. WD-model parameters (WD-MPs) of the spatial structures of GP images, obtained with omnidirectional Gaussian derivative filtering (OGDF) and demonstrated theoretically to obey a specific WD model of integral form, were extracted as the visual features. Then, a co-training-style semi-supervised classifier algorithm, named COSC-Boosting, was exploited for semi-supervised GP quality grading, integrating two independent classifiers of complementary nature in the face of scarce labeled samples. The effectiveness of the proposed OPQI method was verified in the field of automated rice quality grading, where it was compared with commonly used methods and showed superior performance, laying a foundation for the quality control of GPs on assembly lines.
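
    As a hedged sketch of the feature-extraction half of such a pipeline (not the authors' OGDF filters or their COSC-Boosting classifier), one can filter an image with a gradient-magnitude operator and fit a Weibull distribution to the filter responses, using the fitted shape and scale parameters as texture features for a downstream classifier.

      # Sketch: Weibull-parameter features from an image's filter responses.
      # Gaussian gradient magnitude stands in for the paper's omnidirectional
      # Gaussian derivative filtering; the image is a random placeholder.
      import numpy as np
      from scipy import ndimage
      from scipy.stats import weibull_min

      rng = np.random.default_rng(1)
      image = rng.random((128, 128))            # placeholder "product" image

      responses = ndimage.gaussian_gradient_magnitude(image, sigma=2.0).ravel()
      responses = responses[responses > 1e-8]   # Weibull support is positive

      # Fit a two-parameter Weibull (location fixed at 0); the shape and scale
      # summarize spatial structure and can feed a semi-supervised classifier.
      shape, _, scale = weibull_min.fit(responses, floc=0.0)
      print("Weibull shape/scale features:", shape, scale)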

  13. The Effect of Animation in Multimedia Computer-Based Learning and Learning Style to the Learning Results

    Directory of Open Access Journals (Sweden)

    Muhammad RUSLI

    2017-10-01

    Full Text Available The effectiveness of learning depends on four main elements: the content, the desired learning outcome, the instructional method, and the delivery media. The integration of these four elements can be manifested in a learning module, which is called multimedia learning or learning by using multimedia. In learning with computer-based multimedia, two main things need to be considered so that the learning process can run effectively: how the content is presented, and the learner's preferred way of accepting and processing the information into meaningful knowledge. The first is related to the way the content is visualized and how people learn; the second is related to the learning style of the learner. This research aims to investigate the effect of the type of visualization—static vs. animated—in multimedia computer-based learning, and of learning style (visual vs. verbal), on students' capability in applying the concepts, procedures, and principles of Java programming. Visualization type acted as the independent variable, and the students' learning style acted as a moderator variable. Moreover, the instructional strategies followed the Component Display Theory of Merrill, and the multimedia presentation format followed the Seven Principles of Multimedia Learning of Mayer and Moreno. Learning with the multimedia computer-based materials was carried out in the classroom. The subjects of this research were students of STMIK-STIKOM Bali in the odd semester of 2016-2017 who took the Java programming course. The experimental design used a 2 x 2 multivariate analysis of variance (MANOVA) with a sample of 138 students in 4 classes. Based on the results of the analysis, it can be concluded that animation in interactive multimedia learning gave a positive effect in improving students' learning outcomes, particularly in applying the concepts, procedures, and principles of Java programming. The

  14. Visual Literacy Skills of Students in College-Level Biology: Learning Outcomes Following Digital or Hand-Drawing Activities

    Science.gov (United States)

    Bell, Justine C.

    2014-01-01

    To test the claim that digital learning tools enhance the acquisition of visual literacy in this generation of biology students, a learning intervention was carried out with 33 students enrolled in an introductory college biology course. This study compared learning outcomes following two types of learning tools: a traditional drawing activity, or…

  15. Preparing Content-Rich Learning Environments with VPython and Excel, Controlled by Visual Basic for Applications

    Science.gov (United States)

    Prayaga, Chandra

    2008-01-01

    A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…

  16. Collaboration in Visual Culture Learning Communities: Towards a Synergy of Individual and Collective Creative Practice

    Science.gov (United States)

    Karpati, Andrea; Freedman, Kerry; Castro, Juan Carlos; Kallio-Tavin, Mira; Heijnen, Emiel

    2017-01-01

    A visual culture learning community (VCLC) is an adolescent or young adult group engaged in expression and creation outside of formal institutions and without adult supervision. In the framework of an international, comparative research project executed between 2010 and 2014, members of a variety of eight self-initiated visual culture groups…

  17. A perceptual learning deficit in Chinese developmental dyslexia as revealed by visual texture discrimination training.

    Science.gov (United States)

    Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin

    2014-08-01

    Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) at 300 ms, the threshold SOA, adjusted according to response accuracy for reaching 80% accuracy, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA negatively correlated with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Data visualization a guide to visual storytelling for libraries

    CERN Document Server

    2016-01-01

    Data Visualization: A Guide to Visual Storytelling for Libraries is a practical guide to the skills and tools needed to create beautiful and meaningful visual stories through data visualization. Learn how to sift through complex datasets to better understand a variety of metrics, such as trends in user behavior and electronic resource usage, return on investment (ROI) and impact metrics, and learning and reference analytics. Sections include: identifying and interpreting datasets for visualization; tools and technologies for creating meaningful visualizations; and case studies in data visualization and dashboards. Understanding and communicating trends from your organization's data is essential. Whether you are looking to make more informed decisions by visualizing organizational data, or to tell the story of your library's impact on your community, this book will give you the tools to make it happen.

  19. Autonomous learning of robust visual object detection and identification on a humanoid

    NARCIS (Netherlands)

    Leitner, J.; Chandrashekhariah, P.; Harding, S.; Frank, M.; Spina, G.; Förster, A.; Triesch, J.; Schmidhuber, J.

    2012-01-01

    In this work we introduce a technique for a humanoid robot to autonomously learn the representations of objects within its visual environment. Our approach involves an attention mechanism in association with feature based segmentation that explores the environment and provides object samples for

  20. The Effect of Extremely Low Frequency Electromagnetic Fields on Visual Learning & Memory and Anatomical Structures of the Brain in Male Rhesus Monkeys

    Directory of Open Access Journals (Sweden)

    Elahe Tekieh

    2018-04-01

    Full Text Available Background: Humans in modern societies are exposed to substantially elevated levels of electromagnetic field (EMF) emissions at different frequencies. The neurobiological effects of EMF have been the subject of debate and intensive research over the past few decades. Therefore, we evaluated the effects of EMF on visual learning and on the anatomical dimensions of the hippocampus and the prefrontal area (PFA) in male rhesus monkeys. Materials and Methods: In this study, four rhesus monkeys were irradiated with 0.7 microtesla ELF-EMF at either 5 or 30 Hz, 4 h a day, for 30 days. Alterations in visual learning and memory were assessed before and after the irradiation phase using a box designed to challenge the animals to gain rewards. In addition, the monkeys' brains were scanned by MRI one week before and one week after irradiation. The monkeys were anesthetized by intramuscular injection of ketamine hydrochloride (10–20 mg/kg) and xylazine (0.2–0.4 mg/kg) and scanned with a 3-Tesla Magnetom in axial, sagittal, and coronal planes using a T2-weighted protocol with a slice thickness of 3 mm. Anatomical changes of the hippocampus and the prefrontal area (PFA) were measured by volumetric study. Results: Electromagnetic field exposure at a frequency of 30 Hz reduced the number of correct responses in the learning process and delayed memory formation in the two tested monkeys, whereas ELF-EMF at 5 Hz had no effect on visual learning and memory. No anatomical changes were found in the prefrontal area or the hippocampus at either frequency. Conclusion: ELF-EMF irradiation at 30 Hz adversely affected visual learning and memory, probably through effects on factors other than brain structure and anatomy.

  1. Visual Literacy and Visual Thinking.

    Science.gov (United States)

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  2. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    Science.gov (United States)

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) was followed for a year and assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Visual artificial grammar learning: comparative research on humans, kea (Nestor notabilis) and pigeons (Columba livia)

    Science.gov (United States)

    Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh

    2012-01-01

    Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635

  4. Attention Cueing and Activity Equally Reduce False Alarm Rate in Visual-Auditory Associative Learning through Improving Memory.

    Science.gov (United States)

    Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid

    2016-01-01

    In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with the learning performance of the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects significantly made more mistakes in taking non-congruent pairs as associated and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of higher false alarm rate in the passive learning mode by using a computational model, composed of a reinforcement learning module and a memory-decay module. The results suggest that the higher rate of memory decay is the source of making more mistakes and reporting lower confidence in non-congruent pairs in the passive learning mode.

  5. Using a model of human visual perception to improve deep learning.

    Science.gov (United States)

    Stettler, Michael; Francis, Gregory

    2018-04-17

    Deep learning algorithms achieve human-level (or better) performance on many tasks, but there still remain situations where humans learn better or faster. With regard to classification of images, we argue that some of those situations are because the human visual system represents information in a format that promotes good training and classification. To demonstrate this idea, we show how occluding objects can impair performance of a deep learning system that is trained to classify digits in the MNIST database. We describe a human inspired segmentation and interpolation algorithm that attempts to reconstruct occluded parts of an image, and we show that using this reconstruction algorithm to pre-process occluded images promotes training and classification performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
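
    A very small illustration of the pre-processing idea (fill in occluded pixels before handing the image to a classifier) is given below; it uses a plain diffusion fill rather than the human-inspired segmentation-and-interpolation algorithm described in the paper, and it assumes the occlusion mask is already known.

      # Sketch: reconstruct occluded pixels before classification.
      # A plain iterative diffusion fill stands in for the human-inspired
      # segmentation/interpolation algorithm; the occlusion mask is assumed known.
      import numpy as np
      from scipy import ndimage

      def fill_occlusion(image, mask, iterations=50):
          """Replace masked pixels by repeatedly averaging their neighbours."""
          filled = image.copy()
          filled[mask] = filled[~mask].mean()       # crude initialization
          for _ in range(iterations):
              smoothed = ndimage.uniform_filter(filled, size=3)
              filled[mask] = smoothed[mask]         # update only occluded pixels
          return filled

      rng = np.random.default_rng(2)
      digit = rng.random((28, 28))                  # placeholder for an MNIST digit
      mask = np.zeros_like(digit, dtype=bool)
      mask[10:18, 10:18] = True                     # synthetic occluder
      restored = fill_occlusion(digit, mask)        # would then go to the classifier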

  6. Learner differences and learning outcomes in an introductory biochemistry class: attitude toward images, visual cognitive skills, and learning approach.

    Science.gov (United States)

    Milner, Rachel E

    2014-01-01

    The practice of using images in teaching is widespread, and in science education images are used so extensively that some have argued they are now the "main vehicle of communication" (C. Ferreira, A. Arroio Problems Educ. 21st Century 2009, 16, 48-53). Although this phenomenon is especially notable in the field of biochemistry, we know little about the role and importance of images in communicating concepts to students in the classroom. This study reports the development of a scale to assess students' attitude toward biochemical images, particularly their willingness and ability to use the images to support their learning. In addition, because it is argued that images are central in the communication of biochemical concepts, we investigated three "learner differences" which might impact learning outcomes in this kind of classroom environment: attitude toward images, visual cognitive skills, and learning approach. Overall, the students reported a positive attitude toward the images, the majority agreeing that they liked images and considered them useful. However, the participants also reported that verbal explanations were more important than images in helping them to understand the concepts. In keeping with this we found that there was no relationship between learning outcomes and the students' self-reported attitude toward images or visual cognitive skills. In contrast, learning outcomes were significantly correlated with the students' self-reported approach to learning. These findings suggest that images are not necessarily the main vehicle of communication in a biochemistry classroom and that verbal explanations and encouragement of a deep learning approach are important considerations in improving our pedagogical approach. © 2013 International Union of Biochemistry and Molecular Biology, Inc.

  7. Practice makes it better: A psychophysical study of visual perceptual learning and its transfer effects on aging.

    Science.gov (United States)

    Li, Xuan; Allen, Philip A; Lien, Mei-Ching; Yamamoto, Naohide

    2017-02-01

    Previous studies suggest that perceptual learning, the acquisition of a new skill through practice, stimulates brain plasticity and enhances performance (Fiorentini & Berardi, 1981). The present study aimed to determine (a) whether perceptual learning can be used to compensate for age-related declines in perceptual abilities, and (b) whether the effect of perceptual learning can be transferred to untrained stimuli and subsequently improve the capacity of visual working memory (VWM). We tested both healthy younger and older adults in a 3-day training session using an orientation discrimination task. A matching-to-sample psychophysical method was used to measure improvements in orientation discrimination thresholds and reaction times (RTs). Results showed that both younger and older adults improved discrimination thresholds and RTs with similar learning rates and magnitudes. Furthermore, older adults exhibited a generalization of improvements to 3 untrained orientations that were close to the training orientation, and they benefited more than younger adults from the perceptual learning as they transferred the learning effects to VWM performance. We conclude that through perceptual learning, older adults can partially counteract age-related perceptual declines, generalize the learning effect to other stimulus conditions, and further overcome the limitation of using VWM capacity to perform a perceptual task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    Science.gov (United States)

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.

  9. Learning Curve Analyses in Neurodevelopmental Disorders: Are Children with Autism Spectrum Disorder Truly Visual Learners?

    Science.gov (United States)

    Erdodi, Laszlo; Lajiness-O'Neill, Renee; Schmitt, Thomas A.

    2013-01-01

    Visual and auditory verbal learning using a selective reminding format was studied in a mixed clinical sample of children with autism spectrum disorder (ASD) (n = 42), attention-deficit hyperactivity disorder (n = 83), velocardiofacial syndrome (n = 17) and neurotypicals (n = 38) using the Test of Memory and Learning to (1) more thoroughly…

  10. Real-world visual statistics and infants' first-learned object names.

    Science.gov (United States)

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present-a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning.This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  11. Meeting the Needs of Students with Coexisting Visual Impairments and Learning Disabilities

    Science.gov (United States)

    Jones, Beth A.; Hensley-Maloney, Lauren

    2015-01-01

    The coexistence of visual impairments and learning disabilities presents unique challenges. It is imperative that teachers be apprised of the characteristics of this population as well as instructional strategies targeted at meeting their unique needs. The authors highlight typical patterns of performance and provide suggestions for effective…

  12. Influence on Learning of a Collaborative Learning Method Comprising the Jigsaw Method and Problem-based Learning (PBL).

    Science.gov (United States)

    Takeda, Kayoko; Takahashi, Kiyoshi; Masukawa, Hiroyuki; Shimamori, Yoshimitsu

    2017-01-01

    Recently, the practice of active learning has spread, increasingly recognized as an essential component of academic studies. Classes incorporating small group discussion (SGD) are conducted at many universities. At present, assessments of the effectiveness of SGD have mostly involved evaluation by questionnaires conducted by teachers, by peer assessment, and by self-evaluation of students. However, qualitative data, such as open-ended descriptions by students, have not been widely evaluated. As a result, we have been unable to analyze the processes and methods involved in how students acquire knowledge in SGD. In recent years, due to advances in information and communication technology (ICT), text mining has enabled the analysis of qualitative data. We therefore investigated whether the introduction of a learning system comprising the jigsaw method and problem-based learning (PBL) would improve student attitudes toward learning; we did this by text mining analysis of the content of student reports. We found that by applying the jigsaw method before PBL, we were able to improve student attitudes toward learning and increase the depth of their understanding of the area of study as a result of working with others. The use of text mining to analyze qualitative data also allowed us to understand the processes and methods by which students acquired knowledge in SGD and also changes in students' understanding and performance based on improvements to the class. This finding suggests that the use of text mining to analyze qualitative data could enable teachers to evaluate the effectiveness of various methods employed to improve learning.
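
    For readers unfamiliar with the text-mining step, the toy sketch below counts the most frequent terms across a set of free-text student reports; it is a generic bag-of-words illustration, not the specific text-mining procedure used in the study, and the example reports are invented.

      # Sketch: simple text mining of open-ended student reports.
      # A generic term-frequency count, not the study's actual procedure.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer

      reports = [  # invented example responses
          "working with others helped me understand the case",
          "the jigsaw groups made the discussion easier to follow",
          "I understood the problem better after explaining it to others",
      ]

      vectorizer = CountVectorizer(stop_words="english")
      counts = vectorizer.fit_transform(reports)
      totals = np.asarray(counts.sum(axis=0)).ravel()
      terms = vectorizer.get_feature_names_out()

      # Most frequent terms across all reports.
      for idx in totals.argsort()[::-1][:5]:
          print(terms[idx], int(totals[idx]))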

  13. Online Dissection Audio-Visual Resources for Human Anatomy: Undergraduate Medical Students' Usage and Learning Outcomes

    Science.gov (United States)

    Choi-Lundberg, Derek L.; Cuellar, William A.; Williams, Anne-Marie M.

    2016-01-01

    In an attempt to improve undergraduate medical student preparation for and learning from dissection sessions, dissection audio-visual resources (DAVR) were developed. Data from e-learning management systems indicated DAVR were accessed by 28% ± 10 (mean ± SD for nine DAVR across three years) of students prior to the corresponding dissection…

  14. Design of Audio-Visual-Based Learning Media for the Typography Course in the Visual Communication Design Study Program at Universitas Dian Nuswantoro

    Directory of Open Access Journals (Sweden)

    Puri Sulistiyawati

    2017-02-01

    Full Text Available Typography is one of the subjects in the field of visual communication design that prioritizes the visual aspect. However, observations show that the learning media used so far have been less effective because of the limited use of information technology, so students have not been able to fully understand the course material presented by lecturers. The current development of information technology has many positive impacts on the advancement of education, among them its use to support media in the learning process. The purpose of this research is to design learning media for the typography course by utilizing information technology, namely audio-visual media. The method used in this research is Research and Development with the ADDIE model (Analysis, Design, Development, Implementation, Evaluation). With the creation of this audio-visual learning media, the learning process of the Typography course is expected to be more effective and the course material easier for students to understand. Keywords: audio visual, learning media, typography

  15. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    Science.gov (United States)

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  16. A process-based approach to characterizing the effect of acute alprazolam challenge on visual paired associate learning and memory in healthy older adults.

    Science.gov (United States)

    Pietrzak, Robert H; Scott, James Cobb; Harel, Brian T; Lim, Yen Ying; Snyder, Peter J; Maruff, Paul

    2012-11-01

    Alprazolam is a benzodiazepine that, when administered acutely, results in impairments in several aspects of cognition, including attention, learning, and memory. However, the profile (i.e., component processes) that underlie alprazolam-related decrements in visual paired associate learning has not been fully explored. In this double-blind, placebo-controlled, randomized cross-over study of healthy older adults, we used a novel, "process-based" computerized measure of visual paired associate learning to examine the effect of a single, acute 1-mg dose of alprazolam on component processes of visual paired associate learning and memory. Acute alprazolam challenge was associated with a large magnitude reduction in visual paired associate learning and memory performance (d = 1.05). Process-based analyses revealed significant increases in distractor, exploratory, between-search, and within-search error types. Analyses of percentages of each error type suggested that, relative to placebo, alprazolam challenge resulted in a decrease in the percentage of exploratory errors and an increase in the percentage of distractor errors, both of which reflect memory processes. Results of this study suggest that acute alprazolam challenge decreases visual paired associate learning and memory performance by reducing the strength of the association between pattern and location, which may reflect a general breakdown in memory consolidation, with less evidence of reductions in executive processes (e.g., working memory) that facilitate visual paired associate learning and memory. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Research on Visual Analysis Methods of Terrorism Events

    Science.gov (United States)

    Guo, Wenyue; Liu, Haiyan; Yu, Anzhu; Li, Jing

    2016-06-01

    As terrorism events occur with increasing frequency throughout the world, improving the capability to respond to social security incidents has become an important test of a government's ability to govern. Visual analysis has become an important method of event analysis because it is intuitive and effective. To analyse events' spatio-temporal distribution characteristics, the correlations among event items, and development trends, the spatio-temporal characteristics of terrorism events are discussed. A suitable event data table structure based on the "5W" theory is designed. Then, six types of visual analysis are proposed, and the use of thematic maps and statistical charts to realize visual analysis of terrorism events is studied. Finally, experiments were carried out using data provided by the Global Terrorism Database, and the results prove the validity of the methods.
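
    To make the "5W" table structure concrete, here is a guessed minimal record schema; the field names are illustrative assumptions and do not reproduce the actual design in the paper or the Global Terrorism Database fields.

      # Sketch: a minimal "5W" event record for spatio-temporal visual analysis.
      # Field names are illustrative, not the paper's or the GTD's actual schema.
      from dataclasses import dataclass
      from datetime import date

      @dataclass
      class TerrorEvent:
          who: str      # perpetrator group
          what: str     # incident type (e.g., bombing, kidnapping)
          when: date    # event date, supports temporal aggregation
          where: tuple  # (latitude, longitude), supports thematic mapping
          why: str      # stated motive or target category

      events = [
          TerrorEvent("group A", "bombing", date(2014, 3, 1), (33.3, 44.4), "political"),
          TerrorEvent("group B", "kidnapping", date(2014, 7, 9), (34.0, 43.1), "ransom"),
      ]

      # Example aggregation for a statistical chart: number of events per year.
      per_year = {}
      for e in events:
          per_year[e.when.year] = per_year.get(e.when.year, 0) + 1
      print(per_year)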

  18. Cortical Dynamics of Contextually Cued Attentive Visual Learning and Search: Spatial and Object Evidence Accumulation

    Science.gov (United States)

    Huang, Tsung-Ren; Grossberg, Stephen

    2010-01-01

    How do humans use target-predictive contextual information to facilitate visual search? How are consistently paired scenic objects and positions learned and used to more efficiently guide search in familiar scenes? For example, humans can learn that a certain combination of objects may define a context for a kitchen and trigger a more efficient…

  19. The mirror-neuron system and observational learning: Implications for the effectiveness of dynamic visualizations.

    OpenAIRE

    Van Gog, Tamara; Paas, Fred; Marcus, Nadine; Ayres, Paul; Sweller, John

    2009-01-01

    Van Gog, T., Paas, F., Marcus, N., Ayres, P., & Sweller, J. (2009). The mirror-neuron system and observational learning: Implications for the effectiveness of dynamic visualizations. Educational Psychology Review, 21, 21-30.

  20. Visual feedback of tongue movement for novel speech sound learning

    Directory of Open Access Journals (Sweden)

    William F Katz

    2015-11-01

    Full Text Available Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one's own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker's learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ̠/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers' productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.

  1. Analytical Review of Data Visualization Methods in Application to Big Data

    OpenAIRE

    Gorodov, Evgeniy Yur’evich; Gubarev, Vasiliy Vasil’evich

    2013-01-01

    This paper examines the term Big Data with respect to data representation and visualization. Big Data visualization involves some specific problems, so these problems are defined and a set of approaches for avoiding them is given. We also review existing data visualization methods as applied to Big Data, taking the described problems into account. Summarizing the results, we provide a classification of visualization methods in application to Big Data.

  2. The seam visual tracking method for large structures

    Science.gov (United States)

    Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong

    2017-10-01

    In this paper, a compact and flexible weld seam visual tracking method is proposed. First, because the visual device can interfere with the work-piece to be welded when the tracking height cannot be changed, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose of the camera, the laser, and the work-piece to be welded, and by applying the theory of relative geometric imaging, a mathematical model relating the image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized experimentally for the weld structure. Fourth, imaging is disturbed because the line-structured light scatters at bright metal areas and because surface scratches appear bright; these disturbances seriously affect computational efficiency. An algorithm based on the human visual attention mechanism is therefore used to extract the weld characteristics efficiently and stably. Finally, experiments verify that the compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it broad prospects for industrial application.

  3. Visualizing the Perception Filter and Breaching It with Active-Learning Strategies

    Science.gov (United States)

    White, Harold B.

    2012-01-01

    Teachers' perception filter operates in all realms of their consciousness. It plays an important part in what and how students learn and should play a central role in what and how they teach. This may be obvious, but having a visual model of a perception filter can guide the way they think about education. In this article, the author talks about…

  4. Superior short-term learning effect of visual and sensory organisation ability when sensory information is unreliable in adolescent rhythmic gymnasts.

    Science.gov (United States)

    Chen, Hui-Ya; Chang, Hsiao-Yun; Ju, Yan-Ying; Tsao, Hung-Ting

    2017-06-01

    Rhythmic gymnasts specialise in dynamic balance under sensory conditions involving numerous somatosensory, visual, and vestibular stimulations. This study investigated whether adolescent rhythmic gymnasts are superior to peers in Sensory Organisation Test (SOT) performance, which quantifies the ability to maintain standing balance in six sensory conditions, and explored whether they plateaued faster during familiarisation with the SOT. Three and six sessions of SOTs were administered to 15 female rhythmic gymnasts (15.0 ± 1.8 years) and matched peers (15.1 ± 2.1 years), respectively. The gymnasts were superior to their peers in terms of fitness measures, and their performance was better in the SOT equilibrium score when visual information was unreliable. The SOT learning effects were shown in the more challenging sensory conditions between Sessions 1 and 2 and were equivalent in both groups; however, over time, the gymnasts gained marginally significantly better visual ability and relied less on the visual sense when it was unreliable. In conclusion, adolescent rhythmic gymnasts have generally the same sensory organisation ability and learning rates as their peers. However, when visual information is unreliable, they have superior sensory organisation ability and learn faster to rely less on the visual sense.

  5. Primary School Pupils' Response to Audio-Visual Learning Process in Port-Harcourt

    Science.gov (United States)

    Olube, Friday K.

    2015-01-01

    The purpose of this study is to examine primary school children's response to the use of audio-visual learning processes--a case study of Chokhmah International Academy, Port-Harcourt (owned by Salvation Ministries). It looked at the elements that enhance pupils' response to educational television programmes and their hindrances to these…

  6. Basic Visual Disciplines in Heritage Conservation: Outline of Selected Perspectives in Teaching and Learning

    Science.gov (United States)

    Lobovikov-Katz, A.

    2017-08-01

    Acknowledgement of the value of a basic freehand sketch by the information and communication community of researchers and developers brought about the advanced developments for the use of sketches as free input to complicated processes of computerized visualization, so as to make them more widely accessible. However, a sharp reduction and even exclusion of this and other basic visual disciplines from education in sciences, technology, engineering and architecture dramatically reduces the number of future users of such applications. The unique needs of conservation of cultural heritage pose specific challenges as well as encourage the formulation of innovative development tasks in related areas of information and communication technologies (ICT). This paper claims that the introduction of basic visual disciplines to both communities is essential to the effectiveness of integration of heritage conservation needs and the advanced ICT development of conservation value, and beyond. It provides an insight into the challenges and advantages of introducing these subjects in a relevant educational context, presents some examples of their teaching and learning in the modern environment, including e-learning, and sketches perspectives to their application.

  7. Audio-visual synchronization in reading while listening to texts: Effects on visual behavior and verbal learning

    OpenAIRE

    Gerbier , Emilie; Bailly , Gérard; Bosse , Marie-Line

    2018-01-01

    International audience; Reading while listening to texts (RWL) is a promising way to improve the learning benefits provided by a reading experience. In an exploratory study, we investigated the effect of synchronizing the highlighting of words (visual) with their auditory (speech) counterpart during a RWL task. Forty French children from 3rd to 5th grade read short stories in their native language while hearing the story spoken by a narrator. In the non-synchronized (S-) condition the text wa...

  8. Impact of online visual feedback on motor acquisition and retention when learning to reach in a force field.

    Science.gov (United States)

    Batcho, C S; Gagné, M; Bouyer, L J; Roy, J S; Mercier, C

    2016-11-19

    When subjects learn a novel motor task, several sources of feedback (proprioceptive, visual or auditory) contribute to the performance. Over the past few years, several studies have investigated the role of visual feedback in motor learning, yet evidence remains conflicting. The aim of this study was therefore to investigate the role of online visual feedback (VFb) on the acquisition and retention stages of motor learning associated with training in a reaching task. Thirty healthy subjects made ballistic reaching movements with their dominant arm toward two targets, on 2 consecutive days, using a robotized exoskeleton (KINARM). They were randomly assigned to a group with (VFb) or without (NoVFb) VFb of index position during movement. On day 1, the task was performed before (baseline) and during the application of a velocity-dependent resistive force field (adaptation). To assess retention, participants repeated the task with the force field on day 2. Motor learning was characterized by: (1) the final endpoint error (movement accuracy) and (2) the initial angle (iANG) of deviation (motor planning). Even though both groups showed motor adaptation, the NoVFb group exhibited slower learning and higher final endpoint error than the VFb group. In some conditions, subjects trained without visual feedback used more curved initial trajectories to anticipate the perturbation. This observation suggests that learning to reach targets in a velocity-dependent resistive force field is possible even when feedback is limited. However, the absence of VFb leads to different strategies that were only apparent when reaching toward the most challenging target. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Estimation of mental effort in learning visual search by measuring pupil response.

    Directory of Open Access Journals (Sweden)

    Tatsuto Takeuchi

    Full Text Available Perceptual learning refers to the improvement of perceptual sensitivity and performance with training. In this study, we examined whether learning is accompanied by a release from mental effort on the task, leading to automatization of the learned task. For this purpose, we had subjects conduct a visual search for a target, defined by a combination of orientation and spatial frequency, while we monitored their pupil size. It is well known that pupil size reflects the strength of mental effort invested in a task. We found that pupil size increased rapidly as learning proceeded in the early phase of training and decreased in the later phase to a level half of its maximum value. This result does not support the simple automatization hypothesis. Instead, it suggests that mental effort and behavioral performance reflect different aspects of perceptual learning. Further, mental effort continues to be invested to maintain good performance at later stages of training.

  10. Evaluating the Visualization of What a Deep Neural Network Has Learned.

    Science.gov (United States)

    Samek, Wojciech; Binder, Alexander; Montavon, Gregoire; Lapuschkin, Sebastian; Muller, Klaus-Robert

    Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the "importance" of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
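
    The region-perturbation evaluation described in this record can be approximated in a few lines. The sketch below assumes a score_fn(image) that returns the model's score for the originally predicted class (an assumed interface, not part of the paper); it perturbs the highest-relevance patches first and records how quickly the score drops, so a steeper drop indicates a more faithful heatmap.

      # Minimal sketch of "most relevant first" region-perturbation evaluation of a heatmap.
      # Not the paper's exact protocol; patch size and noise model are assumptions, and
      # heatmap is assumed to have the same H x W extent as image.
      import numpy as np

      def morf_curve(image, heatmap, score_fn, patch=9, steps=50, rng=None):
          rng = rng or np.random.default_rng(0)
          img = image.copy()
          h, w = heatmap.shape
          # Rank non-overlapping patches by summed relevance (most relevant first).
          ys, xs = np.mgrid[0:h:patch, 0:w:patch]
          cells = [(heatmap[y:y + patch, x:x + patch].sum(), y, x)
                   for y, x in zip(ys.ravel(), xs.ravel())]
          cells.sort(reverse=True)
          scores = [score_fn(img)]
          for _, y, x in cells[:steps]:
              # Perturb the patch with uniform noise and re-evaluate the model.
              img[y:y + patch, x:x + patch] = rng.uniform(
                  img.min(), img.max(), img[y:y + patch, x:x + patch].shape)
              scores.append(score_fn(img))
          # A steeper score drop (larger area over this curve) = better explanation.
          return np.array(scores)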

  11. Dynamic visualizations as tools for supporting cosmological literacy

    Science.gov (United States)

    Buck, Zoe Elizabeth

    My dissertation research is designed to improve access to STEM content through the development of cosmology visualizations that support all learners as they engage in cosmological sense-making. To better understand how to design visualizations that work toward breaking cycles of power and access in the sciences, I orient my work to the following "meta-question": How might educators use visualizations to support diverse ways of knowing and learning in order to expand access to cosmology, and to science? In this dissertation, I address this meta-question from a pragmatic epistemological perspective, through a sociocultural lens, following three lines of inquiry: experimental methods (Creswell, 2003) with a focus on basic visualization design, activity analysis (Wells, 1996; Ash, 2001; Rahm, 2012) with a focus on culturally and linguistically diverse learners, and case study (Creswell, 2000) with a focus on expansive learning at a planetarium (Engestrom, 2001; Ash, 2014). My research questions are as follows, each of which corresponds to a self-contained course of inquiry with its own design, data, analysis and results: 1) Can mediational cues like color affect the way learners interpret the content in a cosmology visualization? 2) How do cosmology visualizations support cosmological sense-making for diverse students? 3) What are the shared objects of dynamic networks of activity around visualization production and use in a large, urban planetarium and how do they affect learning? The result is a mixed-methods design (Sweetman, Badiee & Creswell, 2010) where both qualitative and quantitative data are used when appropriate to address my research goals. In the introduction I begin by establishing a theoretical framework for understanding visualizations within cultural historical activity theory (CHAT) and situating the chapters that follow within that framework. I also introduce the concept of cosmological literacy, which I define as the set of conceptual, semiotic and

  12. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Science.gov (United States)

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear

  13. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    Directory of Open Access Journals (Sweden)

    Marika T Leving

    Full Text Available It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not

  14. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
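
    A minimal sketch of the dimension-reduction idea mentioned in this record is shown below: a synthetic many-input dataset is projected to two dimensions with PCA and coloured by the outcome variable, so that influential inputs can be spotted visually before training a neural network on a reduced feature set. The data, component count and plotting choices are illustrative assumptions, not the project's actual workflow.

      # Illustrative sketch (not the project's code): reduce a many-input dataset to two
      # dimensions and plot it, colouring points by the outcome variable.
      import numpy as np
      from sklearn.decomposition import PCA
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(42)
      X = rng.normal(size=(500, 20))                      # 20 hypothetical input parameters
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)       # outcome driven by two of them

      xy = PCA(n_components=2).fit_transform(X)           # map 20-D inputs to 2-D
      plt.scatter(xy[:, 0], xy[:, 1], c=y, s=10, cmap="coolwarm")
      plt.xlabel("component 1"); plt.ylabel("component 2")
      plt.title("2-D projection coloured by outcome")
      plt.show()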

  15. Observation of Depictive Versus Tracing Gestures Selectively Aids Verbal Versus Visual-Spatial Learning in Primary School Children

    NARCIS (Netherlands)

    van Wermeskerken, Margot; Fijan, Nathalie; Eielts, Charly; Pouw, Wim T. J. L.

    2016-01-01

    Previous research has established that gesture observation aids learning in children. The current study examined whether observation of gestures (i.e. depictive and tracing gestures) differentially affected verbal and visual-spatial retention when learning a route and its street names. Specifically,

  16. Binding of visual and spatial short-term memory in Williams syndrome and moderate learning disability.

    Science.gov (United States)

    Jarrold, Christopher; Phillips, Caroline; Baddeley, Alan D

    2007-04-01

    A main aim of this study was to test the claim that individuals with Williams syndrome have selectively impaired memory for spatial as opposed to visual information. The performance of 16 individuals with Williams syndrome (six males, 10 females; mean age 18y 7mo [SD 7y 6mo], range 9y 1mo-30y 7mo) on tests of short-term memory for item and location information was compared with that shown by individuals with moderate learning difficulties (12 males, four females; mean age 10y 3mo [SD 1y], range 8y 6mo-11y 7mo) and typically developing children (six males, 10 females; mean age 6y 8mo [SD 7mo], range 5y 10mo-7y 9mo) of an equivalent level of visuospatial ability. A second aim was to determine whether individuals had impaired ability to 'bind' visual spatial information when required to recall 'item in location' information. In contrast to previous findings, there was no evidence that individuals with Williams syndrome were more impaired in the spatial than the visual memory condition. However, individuals with both Williams syndrome and moderate learning difficulties showed impaired memory for item in location information, suggesting that problems of binding may be generally associated with learning disability.

  17. Reframing Photovoice to Boost Its Potential for Learning Research

    Directory of Open Access Journals (Sweden)

    Lucian Ciolan

    2017-04-01

    Full Text Available Visual methods are not new within the education research field, but they are certainly an innovative approach, especially in higher education where students' voice is understood as a central need. In this positional article, the authors intend to accomplish two key objectives. First, the article aims to emphasize that visual methods, especially photovoice, can be enriching for studying the ways students engage in learning activities and can support authentic conversations about how learning takes place and what students think about this process (metacognition). The second objective is to set the theoretical and methodological grounds for applying visually based methods such as photovoice and bubble dialogue in education research, particularly in the area of learning research. The considerations regarding specific methodological aspects are based on the discussion of a study conducted using photovoice methodology. The authors suggest that participatory analysis, and particularly interpretative phenomenological analysis, are appropriate to complete the process of data analysis. The article therefore contributes to expanding knowledge about specific visual methods and sets the ground for methodological innovation in learning research.

  18. A robust method for estimating motorbike count based on visual information learning

    Science.gov (United States)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

    Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. In such conditions, the problem of separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences about the density of target objects from their shape [4], local features [5], etc. These studies indicate a correlation between local features and the number of target objects. However, they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations and can achieve high accuracy in the presence of occlusion. Firstly, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from these local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy in comparison to others.
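
    The pipeline sketched in this record (local descriptors, a Bag-of-Words histogram per image, and a learned mapping to the motorbike count) can be outlined as follows. ORB descriptors stand in for SURF here only because SURF requires the non-free OpenCV contrib build; the vocabulary size, regressor and all other parameters are assumptions rather than the authors' settings.

      # Illustrative count-estimation pipeline in the spirit of the paper (which uses SURF).
      import cv2
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import Ridge

      def local_descriptors(image_paths):
          orb = cv2.ORB_create(nfeatures=500)
          per_image = []
          for path in image_paths:
              gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
              _, desc = orb.detectAndCompute(gray, None)
              per_image.append(desc if desc is not None else np.empty((0, 32), np.uint8))
          return per_image

      def bow_histograms(per_image, k=64):
          # Build a visual vocabulary, then describe each image by its word histogram.
          vocab = KMeans(n_clusters=k, n_init=4, random_state=0).fit(
              np.vstack([d for d in per_image if len(d)]).astype(np.float32))
          hists = []
          for desc in per_image:
              words = (vocab.predict(desc.astype(np.float32))
                       if len(desc) else np.array([], dtype=int))
              hists.append(np.bincount(words, minlength=k).astype(float))
          return np.array(hists), vocab

      # Training on hypothetical labelled frames:
      # descs = local_descriptors(train_paths)
      # X, vocab = bow_histograms(descs)
      # model = Ridge(alpha=1.0).fit(X, train_counts)   # global feature -> motorbike count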

  19. Method and apparatus for modeling, visualization and analysis of materials

    KAUST Repository

    Aboulhassan, Amal

    2016-08-25

    A method, apparatus, and computer readable medium are provided for modeling of materials and visualization of properties of the materials. An example method includes receiving data describing a set of properties of a material, and computing, by a processor and based on the received data, geometric features of the material. The example method further includes extracting, by the processor, particle paths within the material based on the computed geometric features, and geometrically modeling, by the processor, the material using the geometric features and the extracted particle paths. The example method further includes generating, by the processor and based on the geometric modeling of the material, one or more visualizations regarding the material, and causing display, by a user interface, of the one or more visualizations.

  20. Método computadorizado para medida da acuidade visual / A computerized method for visual acuity assessment

    Directory of Open Access Journals (Sweden)

    Patrícia Katayama Kjaer Arippol

    2006-12-01

    Full Text Available PURPOSE: To develop and validate a computerized test for visual acuity screening of school-age children. METHODS: A computerized test for visual acuity assessment was developed using the standards of the printed logarithmic charts adopted in ophthalmic practice. Ninety first-grade pupils in basic education, eight students of the Ophthalmic Technology course at UNIFESP-EPM and 10 patients of the Strabismus outpatient clinic of the Department of Ophthalmology at UNIFESP-EPM were evaluated. All subjects were assessed by the same examiner and underwent monocular visual acuity testing with both the printed logarithmic E-optotype chart and the new computerized test in the same session. Participants gave informed consent. RESULTS: Statistical analysis revealed an excellent correlation (r > 0.75) between the two methods, despite a slight tendency of the computerized test to overestimate visual acuity when compared with the gold standard. The computerized test showed a sensitivity of 100% and a specificity of 94%. CONCLUSIONS: The results indicate that the computerized test can be used as a new resource for visual screening of school-age children, since it is a fast, easy-to-apply, inexpensive, automatic and child-friendly method. Automation frees the examiner from interpreting the responses given by the tested pupil and guarantees standardization of the procedure, which favours follow-up analyses and allows the test to be administered by different examiners. To better understand the effectiveness of the test as a visual screening instrument, it would be worthwhile to introduce it in primary schools after training teachers in its administration.

  1. Reframing Children’s Learning: Capturing the Practice of Articulation Done by Truku Children in a Visual Narrative Program

    Directory of Open Access Journals (Sweden)

    Huei-Hsuan Lin

    2015-06-01

    Full Text Available Grounded in a three-semester visual program in which a college team collaborated with Truku elementary students using photographs to explore their lives, this paper focuses on the ways that indigenous students constructed their learning experiences. The present study uses the concept of articulation to describe the process through which children connected their lives with their families and communities to the visual project taking place at school: as the visual project aimed to infiltrate the fabric of children's everyday lives, those who succeeded in negotiating with their guardians to gain full control over their after-school lives were able to incorporate photo-taking into their free time and engage in friendship-making activities. Additionally, students were raised to be involved in farming activities in which they learned by participating physically rather than by listening to verbal explanations. This way of learning had been inscribed onto their bodies, moving between and suturing the worlds of school learning and the community's manual laboring. Lastly, this study found that students exhibited "groupness" as they negotiated with each other and established their positions in the web of peer relations. Groupness embodied an articulated quality, indicating that the way students related to each other at school was inseparable from their relations after school. Educators need to be attuned to the dynamics of groupness, as it is somewhat stable yet open to change, and it certainly affects how students perform in various learning spaces, such as the visual program described here.

  2. Numbers, Pictures, and Politics: Teaching Research Methods through Data Visualizations

    Science.gov (United States)

    Rom, Mark Carl

    2015-01-01

    Data visualization is the term used to describe the methods and technologies used to allow the exploration and communication of quantitative information graphically. Data visualization is a rapidly growing and evolving discipline, and visualizations are widely used to cover politics. Yet, while popular and scholarly publications widely use…

  3. Effects of Jigsaw Learning Method on Students’ Self-Efficacy and Motivation to Learn

    Directory of Open Access Journals (Sweden)

    Dwi Nur Rachmah

    2017-12-01

    Full Text Available Jigsaw learning, as a cooperative learning method, can, according to the results of some studies, improve academic skills, social competence, behavior in learning, and motivation to learn. However, other studies report different findings regarding the effect of the jigsaw learning method on self-efficacy. The purpose of this study is to examine the effects of the jigsaw learning method on self-efficacy and motivation to learn in psychology students at the Faculty of Medicine, Universitas Lambung Mangkurat. The method used in the study is the experimental method with a one-group pre-test and post-test design. The measurements taken before and after the use of the jigsaw learning method were compared using a paired-samples t-test. The results showed a difference in students' self-efficacy and motivation to learn before and after the treatment; therefore, it can be said that the jigsaw learning method had significant effects on self-efficacy and motivation to learn. The application of the jigsaw learning model in a classroom with a large number of students is also discussed.

  4. A Study to Understand the Role of Visual Arts in the Teaching and Learning of Science

    Science.gov (United States)

    Dhanapal, Saroja; Kanapathy, Ravi; Mastan, Jamilah

    2014-01-01

    This research was carried out to understand the role of visual arts in the teaching and learning of science among Grade 3 teachers and students. A mixture of qualitative and quantitative research design was used to discover the different perceptions of both teachers and students on the role of visual arts in science. The data for the research was…

  5. Improving training of laparoscopic tissue manipulation skills using various visual force feedback types

    NARCIS (Netherlands)

    Smit, Daan; Spruit, Edward; Dankelman, J.; Tuijthof, G.J.M.; Hamming, J; Horeman, T.

    2017-01-01

    Background Visual force feedback allows trainees to learn laparoscopic tissue manipulation skills. The aim of this experimental study was to find the most efficient visual force feedback method to acquire these skills. Retention and transfer validity to an untrained task were assessed. Methods

  6. Optimization of interactive visual-similarity-based search

    NARCIS (Netherlands)

    Nguyen, G.P.; Worring, M.

    2008-01-01

    At one end of the spectrum, research in interactive content-based retrieval concentrates on machine learning methods for effective use of relevance feedback. On the other end, the information visualization community focuses on effective methods for conveying information to the user. What is lacking

  7. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high problem-solving ability. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of inorganic chemistry students working with the physical model systems had difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  8. Imprinting modulates processing of visual information in the visual wulst of chicks

    Directory of Open Access Journals (Sweden)

    Uchimura Motoaki

    2006-11-01

    Full Text Available Abstract Background: Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis of visual imprinting, we focused on visual information processing. Results: A lesion in the visual wulst, which is functionally similar to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period, and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color the chick was imprinted to. Conclusion: These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  9. Robust visual tracking via multi-task sparse learning

    KAUST Repository

    Zhang, Tianzhu

    2012-06-01

    In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓp,q mixed norms, we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
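
    The L1 special case mentioned in this record, where each particle is coded independently as a sparse combination of dictionary templates, can be sketched with an off-the-shelf Lasso solver as below. The full mixed-norm MTT formulation solved with Accelerated Proximal Gradient is not reproduced; particle and template generation are assumed to exist elsewhere, and the penalty weight is an assumption.

      # Minimal sketch of the independent L1-coding special case (not the joint MTT solver).
      import numpy as np
      from sklearn.linear_model import Lasso

      def score_particles(particles, templates, alpha=0.01):
          """particles: (n, d) image patches as vectors; templates: (d, k) dictionary."""
          errors = []
          for y in particles:
              coder = Lasso(alpha=alpha, positive=True, max_iter=5000).fit(templates, y)
              recon = templates @ coder.coef_
              errors.append(np.sum((y - recon) ** 2))   # reconstruction error = tracking score
          return np.array(errors)

      # The particle with the smallest reconstruction error is kept as the tracked state:
      # best = particles[np.argmin(score_particles(particles, templates))]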

  10. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. The paper then takes the audio candidates as cues and develops different fusion methods, which effectively use the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
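
    The fusion strategy described in this record, using silence clips as baseline candidates and nearby shot boundaries to refine them, can be illustrated with the following sketch; the time window and input format are assumptions, not the paper's parameters.

      # Rough sketch of audio-visual fusion for story boundaries (assumed interfaces):
      # a silence-based candidate is kept only if a shot boundary lies within a window.
      import numpy as np

      def fuse_boundaries(silence_times, shot_times, window=1.5):
          """All arguments are times in seconds; returns fused story-boundary times."""
          shot_times = np.asarray(shot_times)
          story = []
          for t in silence_times:
              near = shot_times[np.abs(shot_times - t) <= window]
              if near.size:                       # refine the audio candidate to the shot cut
                  story.append(float(near[np.argmin(np.abs(near - t))]))
          return sorted(set(story))

      # fuse_boundaries([12.4, 30.1, 55.0], [12.6, 41.0, 54.8])  ->  [12.6, 54.8]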

  11. Underwater photography - A visual survey method

    Digital Repository Service at National Institute of Oceanography (India)

    Sharma, R.

    Underwater photography - A visual survey method. Rahul Sharma, National Institute of Oceanography, Dona Paula, Goa-403004, rsharma@nio.org. Introduction: "Photography as a means of observing ..." The first deep-sea photographs were those made by Maurice Ewing and his co-workers during cruises on Atlantis in 1940-48. Their subject was the seafloor, and their method was to trigger the camera mechanically when its mounting struck bottom. This is the only...

  12. Teaching with Concrete and Abstract Visual Representations: Effects on Students' Problem Solving, Problem Representations, and Learning Perceptions

    Science.gov (United States)

    Moreno, Roxana; Ozogul, Gamze; Reisslein, Martin

    2011-01-01

    In 3 experiments, we examined the effects of using concrete and/or abstract visual problem representations during instruction on students' problem-solving practice, near transfer, problem representations, and learning perceptions. In Experiments 1 and 2, novice students learned about electrical circuit analysis with an instructional program that…

  13. Teaching case of Gamification and visual technologies for education

    OpenAIRE

    Villagrasa, Sergi; Fonseca Escudero, David; Redondo Domínguez, Ernesto; Duran Castells, Jaume

    2014-01-01

    Keywords: 3D Education, Engaging, Gamification, Learning Management System, Mixed-Methods Evaluation, Oculus Rift, Problem Based Learning, Quest Based Learning, Virtual Reality, Web GL.

    This paper describes the use of gamification and visual technologies in a classroom for higher education, specifically for university students. The goal is to achieve a major increase in student motivation and engagement through the use of various technologies and learning methodologies based on game mechanics called ...

  14. Visual management of large scale data mining projects.

    Science.gov (United States)

    Shah, I; Hunter, L

    2000-01-01

    This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.

  15. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  16. Two different motor learning mechanisms contribute to learning reaching movements in a rotated visual environment [version 2; referees: 2 approved, 1 approved with reservations

    Directory of Open Access Journals (Sweden)

    Virginia Way Tong Chu

    2014-12-01

    Full Text Available Practice of movement in virtual-reality and other artificially altered environments has been proposed as a method for rehabilitation following neurological injury and for training new skills in healthy humans. For such training to be useful, there must be transfer of learning from the artificial environment to the performance of desired skills in the natural environment. Therefore an important assumption of such methods is that practice in the altered environment engages the same learning and plasticity mechanisms that are required for skill performance in the natural environment. We test the hypothesis that transfer of learning may fail because the learning and plasticity mechanism that adapts to the altered environment is different from the learning mechanism required for improvement of motor skill. In this paper, we propose that a model that separates skill learning and environmental adaptation is necessary to explain the learning and aftereffects that are observed in virtual reality experiments. In particular, we studied the condition where practice in the altered environment should lead to correct skill performance in the original environment. Our 2-mechanism model predicts that aftereffects will still be observed when returning to the original environment, indicating a lack of skill transfer from the artificial environment to the original environment. To illustrate the model prediction, we tested 10 healthy participants on the interaction between a simple overlearned motor skill (straight hand movements to targets in different directions) and an artificially altered visuomotor environment (rotation of visual feedback of the results of movement). As predicted by the models, participants show adaptation to the altered environment and aftereffects on return to the baseline environment even when practice in the altered environment should have led to correct skill performance. The presence of aftereffects under all conditions that
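
    A generic two-state adaptation simulation (a common modelling device, offered here only as an illustration and not as the authors' two-mechanism model) shows why aftereffects persist when a 30-degree visuomotor rotation is removed: the slowly decaying process retains part of the adaptation. The retention and learning rates below are assumed values.

      # Generic fast/slow two-state adaptation simulation under a 30-degree rotation.
      import numpy as np

      A_f, B_f = 0.92, 0.20     # fast process: forgets quickly, learns quickly (assumed)
      A_s, B_s = 0.996, 0.02    # slow process: forgets slowly, learns slowly (assumed)

      n_adapt, n_washout = 200, 50
      rotation = np.r_[np.full(n_adapt, 30.0), np.zeros(n_washout)]   # degrees

      x_f = x_s = 0.0
      output = []
      for r in rotation:
          x = x_f + x_s                 # total compensation (degrees)
          error = r - x                 # directional error on this trial
          x_f = A_f * x_f + B_f * error
          x_s = A_s * x_s + B_s * error
          output.append(x)

      aftereffect = output[n_adapt]     # compensation on the first washout trial
      print(f"residual aftereffect at washout onset: {aftereffect:.1f} deg")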

  17. Visual question answering using hierarchical dynamic memory networks

    Science.gov (United States)

    Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei

    2018-04-01

    Visual Question Answering (VQA) is one of the most popular research fields in machine learning; it aims to let the computer learn to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, currently among the best-performing algorithms. Additionally, we use bi-directional LSTMs, which have a better capability to retain information from the question and image, to replace the old unit, so that we can capture information from both past and future sentences. We then rebuild the hierarchical architecture for not only question attention but also visual attention. Moreover, we accelerate the algorithm via Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves on the state of the art on the large COCO-QA dataset, compared with other methods.
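
    Two of the ingredients named in this record, a bidirectional LSTM question encoder and attention over image-region features combined with Batch Normalization, are sketched in the minimal PyTorch module below. It is not the HDMN architecture; all dimensions and layer choices are assumptions.

      # Minimal VQA-style sketch: bi-LSTM question encoding + attention over image regions.
      import torch
      import torch.nn as nn

      class TinyVQA(nn.Module):
          def __init__(self, vocab=1000, emb=128, hid=256, img_dim=512, answers=100):
              super().__init__()
              self.embed = nn.Embedding(vocab, emb)
              self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
              self.att = nn.Linear(2 * hid + img_dim, 1)        # scores each image region
              self.norm = nn.BatchNorm1d(2 * hid + img_dim)     # batch norm, as in the abstract
              self.cls = nn.Linear(2 * hid + img_dim, answers)

          def forward(self, question_ids, region_feats):
              # question_ids: (B, T) token ids; region_feats: (B, R, img_dim)
              _, (h, _) = self.lstm(self.embed(question_ids))
              q = torch.cat([h[-2], h[-1]], dim=1)              # (B, 2*hid), both directions
              q_exp = q.unsqueeze(1).expand(-1, region_feats.size(1), -1)
              scores = self.att(torch.cat([q_exp, region_feats], dim=2)).softmax(dim=1)
              v = (scores * region_feats).sum(dim=1)            # attended visual feature
              fused = self.norm(torch.cat([q, v], dim=1))
              return self.cls(fused)

      # logits = TinyVQA()(torch.randint(0, 1000, (4, 12)), torch.randn(4, 36, 512))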

  18. Learning Science, Learning about Science, Doing Science: Different Goals Demand Different Learning Methods

    Science.gov (United States)

    Hodson, Derek

    2014-01-01

    This opinion piece paper urges teachers and teacher educators to draw careful distinctions among four basic learning goals: learning science, learning about science, doing science and learning to address socio-scientific issues. In elaboration, the author urges that careful attention is paid to the selection of teaching/learning methods that…

  19. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    Science.gov (United States)

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed-gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed-gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
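
    The band-selective augmentation of the displayed force signal can be illustrated with the following offline sketch, in which one frequency band of the force output is band-pass filtered and amplified before display. The filter order, band edges and gain are assumptions, and the study's real-time display pipeline is not reproduced.

      # Sketch of selectively amplifying one frequency band of a displayed force signal.
      import numpy as np
      from scipy.signal import butter, filtfilt

      def scaled_display(force, fs=100.0, band=(4.0, 8.0), gain=3.0):
          """Return the force trace with the chosen band amplified for visual feedback."""
          b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
          band_component = filtfilt(b, a, force)
          return force + (gain - 1.0) * band_component   # everything else is left untouched

      # t = np.arange(0, 10, 1 / 100.0)
      # display = scaled_display(np.sin(2 * np.pi * 1 * t) + 0.2 * np.sin(2 * np.pi * 6 * t))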

  20. Spelling pronunciation and visual preview both facilitate learning to spell irregular words.

    Science.gov (United States)

    Hilte, Maartje; Reitsma, Pieter

    2006-12-01

    Spelling pronunciations are hypothesized to be helpful in building up relatively stable phonologically underpinned orthographic representations, particularly for learning words with irregular phoneme-grapheme correspondences. In a four-week computer-based training, the efficacy of spelling pronunciations and previewing the spelling patterns on learning to spell loan words in Dutch, originating from French and English, was examined in skilled and less skilled spellers with varying ages. Reading skills were taken into account. Overall, compared to normal pronunciation, spelling pronunciation facilitated the learning of the correct spelling of irregular words, but it appeared to be no more effective than previewing. Differences between training conditions appeared to fade with older spellers. Less skilled young spellers seemed to profit more from visual examination of the word as compared to practice with spelling pronunciations. The findings appear to indicate that spelling pronunciation and allowing a preview can both be effective ways to learn correct spellings of orthographically unpredictable words, irrespective of age or spelling ability.

  1. Different levels of food restriction reveal genotype-specific differences in learning a visual discrimination task.

    Directory of Open Access Journals (Sweden)

    Kalina Makowiecki

    Full Text Available In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80-90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2⁻/⁻) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10-trial sets) they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2⁻/⁻ mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding extent of diet restriction in behavioural studies.

  2. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools.

    Science.gov (United States)

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-02-01

    Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using the Lang Stereo test II and Butterfly stereo tests, and colour perception using the Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. Testability was determined by the percentage of children who were able to give a reliable response as required by the respective tests. 'Unable to test' was defined as an inappropriate response or lack of cooperation despite the best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test was found to be lower among children in special education classes (P < 0.05) than among children without learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.

  3. Visual event-related potential studies supporting the validity of VARK learning styles' visual and read/write learners.

    Science.gov (United States)

    Thepsatitporn, Sarawin; Pichitpornchai, Chailerd

    2016-06-01

    The validity of learning styles needs the support of additional objective evidence. The identification of learning styles using subjective evidence from VARK questionnaires (where V is visual, A is auditory, R is read/write, and K is kinesthetic) combined with objective evidence from visual event-related potential (vERP) studies has never been investigated. It is questionable whether picture superiority effects exist in V learners and R learners. Thus, the present study aimed to investigate whether vERP could show a relationship between vERP components and VARK learning styles and to identify the existence of picture superiority effects in V learners and R learners. Thirty medical students (15 V learners and 15 R learners) performed recognition tasks with vERP and an intermediate-term memory (ITM) test. The results of within-group comparisons showed that pictures elicited larger P200 amplitudes than words at the occipital 2 site (P < 0.05) in V learners and at the occipital 1 and 2 sites (P < 0.05) in R learners. The between-groups comparison showed that P200 amplitudes elicited by pictures in V learners were larger than those of R learners at the parietal 4 site (P < 0.05). The ITM test showed that a picture set yielded distinctly more correct responses than a word set for both V learners (P < 0.001) and R learners (P < 0.01). In conclusion, the results indicated that the P200 amplitude at the parietal 4 site could be used to objectively distinguish V learners from R learners. A lateralization to the right hemisphere (occipital 2 site) existed in V learners. The ITM test demonstrated the existence of picture superiority effects in both groups of learners. The results provide the first objective electrophysiological evidence partially supporting the validity of the subjective psychological VARK questionnaire. Copyright © 2016 The American Physiological Society.

  4. An Issue of Learning: The Effect of Visual Split Attention in Classes for Deaf and Hard of Hearing Students

    Science.gov (United States)

    Mather, Susan M.; Clark, M. Diane

    2012-01-01

    One of the ongoing challenges teachers of students who are deaf or hard of hearing face is managing the visual split attention implicit in multimedia learning. When a teacher presents various types of visual information at the same time, visual learners have no choice but to divide their attention among those materials and the teacher and…

  5. Reflexive photography: an alternative method for documenting the learning process of cultural competence.

    Science.gov (United States)

    Amerson, Roxanne; Livingston, Wade G

    2014-04-01

    This qualitative descriptive study used reflexive photography to evaluate the learning process of cultural competence during an international service-learning project in Guatemala. Reflexive photography is an innovative qualitative research technique that examines participants' interactions with their environment through their personal reflections on images that they captured during their experience. A purposive sample of 10 baccalaureate nursing students traveled to Guatemala, where they conducted family and community assessments, engaged in home visits, and provided health education. Data collection involved over 100 photographs and a personal interview with each student. The themes developed from the photographs and interviews provided insight into the activities of an international experience that influence the cognitive, practical, and affective learning of cultural competence. Making home visits and teaching others from a different culture increased students' transcultural self-efficacy. Reflexive photography is a more robust method of self-reflection, especially for visual learners.

  6. The Preference of Visualization in Teaching and Learning Absolute Value

    Science.gov (United States)

    Konyalioglu, Alper Cihan; Aksu, Zeki; Senel, Esma Ozge

    2012-01-01

    Visualization is mostly despised although it complements and--sometimes--guides the analytical process. This study mainly investigates teachers' preferences concerning the use of the visualization method and determines the extent to which they encourage their students to make use of it within the problem-solving process. This study was conducted…

  7. Statistical learning methods: Basics, control and performance

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de

    2006-04-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms.
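
    As a toy illustration of the kind of performance gain discussed in this record (not an example from the review itself), the snippet below compares a simple rectangular cut, of the kind used by classical selection algorithms, with a gradient boosting classifier on synthetic two-feature data; the data generation and classifier settings are assumptions.

      # Toy comparison: classical rectangular cuts vs. a statistical learning method.
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(1)
      signal = rng.normal(loc=[1.0, 1.0], scale=0.8, size=(2000, 2))
      background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(2000, 2))
      X = np.vstack([signal, background])
      y = np.r_[np.ones(2000), np.zeros(2000)]
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      cut_pred = ((X_te[:, 0] > 0.5) & (X_te[:, 1] > 0.5)).astype(float)   # "classical" cuts
      bdt_pred = GradientBoostingClassifier().fit(X_tr, y_tr).predict(X_te)

      print("cut accuracy:", accuracy_score(y_te, cut_pred))
      print("BDT accuracy:", accuracy_score(y_te, bdt_pred))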

  8. Statistical learning methods: Basics, control and performance

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2006-01-01

    The basics of statistical learning are reviewed with a special emphasis on general principles and problems for all different types of learning methods. Different aspects of controlling these methods in a physically adequate way will be discussed. All principles and guidelines will be exercised on examples for statistical learning methods in high energy and astrophysics. These examples prove in addition that statistical learning methods very often lead to a remarkable performance gain compared to the competing classical algorithms

  9. Kinespell: Kinesthetic Learning Activity and Assessment in a Digital Game-Based Learning Environment

    Science.gov (United States)

    Cariaga, Ada Angeli; Salvador, Jay Andrae; Solamo, Ma. Rowena; Feria, Rommel

    Various approaches to learning are commonly classified into visual, auditory, and kinesthetic (VAK) learning styles. One way of addressing the VAK learning styles is through game-based learning, which motivates learners to pursue knowledge holistically. The paper presents Kinespell, an unconventional method of learning through digital game-based learning. Kinespell is geared towards enhancing not only the learner’s spelling abilities but also motor skills through the use of wireless controllers. It monitors player performance through an integrated assessment scheme. Results show that Kinespell may accommodate the VAK learning styles and is a promising alternative to established methods for teaching and assessing students’ performance in spelling.

  10. Effects of Jigsaw Learning Method on Students’ Self-Efficacy and Motivation to Learn

    OpenAIRE

    Dwi Nur Rachmah

    2017-01-01

    Jigsaw learning, as a cooperative learning method, can improve academic skills, social competence, behavior in learning, and motivation to learn, according to the results of some studies. However, other studies report different findings regarding the effect of the jigsaw learning method on self-efficacy. The purpose of this study is to examine the effects of the jigsaw learning method on self-efficacy and motivation to learn in psychology students at the Faculty of Medicine, Universitas La...

  11. Error amplification to promote motor learning and motivation in therapy robotics.

    Science.gov (United States)

    Shirzad, Navid; Van der Loos, H F Machiel

    2012-01-01

    To study the effects of different feedback error amplification methods on a subject's upper-limb motor learning and affect during a point-to-point reaching exercise, we developed a real-time controller for a robotic manipulandum. The reaching environment was visually distorted by implementing a thirty-degree rotation between the coordinate systems of the robot's end-effector and the visual display. Feedback error amplification was provided to subjects as they trained to learn reaching within the visually rotated environment. Error amplification was provided either visually or through both haptic and visual means, each method with two different amplification gains. Subjects' performance (i.e., trajectory error) and self-reports to a questionnaire were used to study the speed and amount of adaptation promoted by each error amplification method, as well as subjects' emotional changes. We found that providing haptic and visual feedback promotes faster adaptation to the distortion and increases subjects' satisfaction with the task, leading to a higher level of attentiveness during the exercise. This finding can be used to design a novel exercise regimen, where alternating between error amplification methods is used to both increase a subject's motor learning and maintain a minimum level of motivational engagement in the exercise. In future experiments, we will test whether such exercise methods will lead to a faster learning time and greater motivation to pursue a therapy exercise regimen.
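
    As a concrete illustration of the kind of controller described above, the sketch below applies a visuomotor rotation to the true hand position and then amplifies only the deviation from the straight start-to-target line before display. This is a minimal sketch under assumed parameters (a 30-degree rotation, a gain of 2.0, and helper names of my choosing); it is not the authors' actual controller, and the haptic channel is not modeled.

```python
import numpy as np

def rotate(v, degrees):
    """Rotate a 2-D vector counter-clockwise by the given angle."""
    t = np.radians(degrees)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ v

def displayed_cursor(hand_pos, start, target, rotation_deg=30.0, gain=2.0):
    """Map the true hand position to the distorted, error-amplified cursor.

    rotation_deg and gain are illustrative values, not the study's parameters.
    """
    hand = np.asarray(hand_pos, float)
    start = np.asarray(start, float)
    target = np.asarray(target, float)
    distorted = rotate(hand, rotation_deg)                       # visuomotor rotation
    line_hat = (target - start) / np.linalg.norm(target - start)
    along = start + line_hat * np.dot(distorted - start, line_hat)
    error = distorted - along                                    # deviation from the straight path
    return along + gain * error                                  # visually amplified error

# Example: hand halfway to the target and slightly off the straight path.
print(displayed_cursor(hand_pos=[0.05, 0.01], start=[0.0, 0.0], target=[0.10, 0.0]))
```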

  12. Visual intelligence Microsoft tools and techniques for visualizing data

    CERN Document Server

    Stacey, Mark; Jorgensen, Adam

    2013-01-01

    Go beyond design concepts and learn to build state-of-the-art visualizations The visualization experts at Microsoft's Pragmatic Works have created a full-color, step-by-step guide to building specific types of visualizations. The book thoroughly covers the Microsoft toolset for data analysis and visualization, including Excel, and explores best practices for choosing a data visualization design, selecting tools from the Microsoft stack, and building a dynamic data visualization from start to finish. You'll examine different types of visualizations, their strengths and weaknesses, a

  13. Visual research in clinical education.

    Science.gov (United States)

    Bezemer, Jeff

    2017-01-01

    The aim of this paper is to explore what might be gained from collecting and analysing visual data, such as photographs, scans, drawings, video and screen recordings, in clinical educational research. Its focus is on visual research that looks at teaching and learning 'as it naturally occurs' in the work place, in simulation centres and other sites, and also involves the collection and analysis of visual learning materials circulating in these sites. With the ubiquity of digital recording devices, video data and visual learning materials are now relatively cheap to collect. Compared to other domains of education research visual materials are not widely used in clinical education research. The paper sets out to identify and reflect on the possibilities for visual research using examples from an ethnographic study on surgical and inter-professional learning in the operating theatres of a London hospital. The paper shows how visual research enables recognition, analysis and critical evaluation of (1) the hidden curriculum, such as the meanings implied by embodied, visible actions of clinicians; (2) the ways in which clinical teachers design multimodal learning environments using a range of modes of communication available to them, combining, for instance, gesture and speech; (3) the informal assessment of clinical skills, and the intricate relation between trainee performance and supervisor feedback; (4) the potentialities and limitations of different visual learning materials, such as textbooks and videos, for representing medical knowledge. The paper concludes with theoretical and methodological reflections on what can be made visible, and therefore available for analysis, explanation and evaluation if visual materials are used for clinical education research, and what remains unaccounted for if written language remains the dominant mode in the research cycle. Opportunities for quantitative analysis and ethical implications are also discussed. © 2016 John Wiley

  14. Visual field

    Science.gov (United States)

    ... your visual field. How the Test is Performed: Confrontation visual field exam. This is a quick and ...

  15. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    Science.gov (United States)

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

    Visualizing data by dimensionality reduction is an important strategy in bioinformatics, as it can help to discover hidden data properties and detect data quality issues, e.g., data noise and inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. We therefore use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. In addition, Laplacian Eigenmaps is five times faster than Isomap, and its visualization graph is more concentrated for discriminating biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
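
    A minimal sketch of the general recipe the abstract describes: pair a normalized edit distance with Laplacian Eigenmaps (scikit-learn's SpectralEmbedding with a precomputed affinity) and K-means. The toy sequences, the Gaussian conversion from distances to affinities, and all parameter values are my assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.cluster import KMeans

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(a), len(b)]

def normalized_edit_distance(a, b):
    return edit_distance(a, b) / max(len(a), len(b), 1)

# Toy "biobrick" sequences; real parts would come from the Registry.
seqs = ["ATGGCATTT", "ATGGCATTA", "ATGGCTTTT", "GGCCGGCCAA", "GGCCGGCTAA", "GGCCGACCAA"]
D = np.array([[normalized_edit_distance(a, b) for b in seqs] for a in seqs])

# Laplacian Eigenmaps expects affinities, not distances: use a Gaussian kernel.
A = np.exp(-(D ** 2) / (2 * 0.5 ** 2))
emb = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(A)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(emb.round(3), labels)
```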

  16. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    Science.gov (United States)

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
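
    The sketch below illustrates only the dimensionality-reduction step: embedding pre-softmax CNN features with t-SNE so that image tiles can be inspected relative to labelled clusters. The feature matrix is simulated, and the region sampling, discretization, and statistical cutoff machinery of the published workflow is not reproduced.

```python
import numpy as np
from sklearn.manifold import TSNE

# Simulated penultimate-layer (pre-softmax) CNN features for image tiles;
# in the real workflow these would come from a trained histology CNN.
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 128))
                      for c in (0.0, 3.0, 6.0)])        # three tissue classes
labels = np.repeat([0, 1, 2], 100)

# Reduce the 128-D feature space to 2-D for visualization.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)

# Regions of a test slide could then be projected by sampling tiles,
# embedding their features, and inspecting where they fall relative
# to the labelled clusters (e.g., with class-wise distance statistics).
print(embedding.shape, np.bincount(labels))
```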

  17. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    Science.gov (United States)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (third year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We first report on our work on the development of numerical methods for tangent curve computation.
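
    For readers unfamiliar with tangent curve computation, the sketch below integrates a tangent curve (streamline) of an analytic 2-D vector field with a classical fourth-order Runge-Kutta step. It is a generic textbook integrator, not the multiresolution or explicit methods developed in the project.

```python
import numpy as np

def velocity(p):
    """Analytic example field: a simple rotational flow around the origin."""
    x, y = p
    return np.array([-y, x])

def rk4_step(p, h, field):
    """One classical fourth-order Runge-Kutta step along the field."""
    k1 = field(p)
    k2 = field(p + 0.5 * h * k1)
    k3 = field(p + 0.5 * h * k2)
    k4 = field(p + h * k3)
    return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def tangent_curve(seed, h=0.05, steps=200, field=velocity):
    """Integrate a tangent curve (streamline) starting from a seed point."""
    pts = [np.asarray(seed, float)]
    for _ in range(steps):
        pts.append(rk4_step(pts[-1], h, field))
    return np.array(pts)

curve = tangent_curve([1.0, 0.0])
print(curve[:3])  # first few points of the streamline
```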

  18. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects returned to baseline rapidly, within six trials.

  19. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Full Text Available Abstract Because recent preliminary evidence points to the use of error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight-line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm maximum perpendicular trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions and smaller errors for this group. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip, all subjects returned to baseline rapidly, within six trials.

  20. Mapping, Learning, Visualization, Classification, and Understanding of fMRI Data in the NeuCube Evolving Spatiotemporal Data Machine of Spiking Neural Networks.

    Science.gov (United States)

    Kasabov, Nikola K; Doborjeh, Maryam Gholami; Doborjeh, Zohreh Gholami

    2017-04-01

    This paper introduces a new methodology for dynamic learning, visualization, and classification of functional magnetic resonance imaging (fMRI) data as spatiotemporal brain data. The method is based on an evolving spatiotemporal data machine of evolving spiking neural networks (SNNs) exemplified by the NeuCube architecture [1]. The method consists of several steps: mapping spatial coordinates of fMRI data into a 3-D SNN cube (SNNc) that represents a brain template; transforming input data into trains of spikes; deep, unsupervised learning in the 3-D SNNc of spatiotemporal patterns from data; supervised learning in an evolving SNN classifier; parameter optimization; and 3-D visualization and model interpretation. Two benchmark case study problems and data sets are used to illustrate the proposed methodology: fMRI data collected from subjects reading affirmative or negative sentences, and another collected while reading a sentence or seeing a picture. The learned connections in the SNNc represent dynamic spatiotemporal relationships derived from the fMRI data. They can reveal new information about brain function under different conditions. The proposed methodology allows, for the first time, analysis of the dynamic functional and structural connectivity of a learned SNN model from fMRI data. This can be used for a better understanding of brain activity and also for online generation of appropriate neurofeedback to subjects for improved brain function. For example, in this paper, tracing the 3-D SNN model connectivity enabled us for the first time to capture prominent brain functional pathways evoked in language comprehension. We found stronger spatiotemporal interaction between the left dorsolateral prefrontal cortex and the left temporal cortex while reading a negated sentence. This observation is clearly distinguishable from the patterns generated by either reading affirmative sentences or seeing pictures. The proposed NeuCube-based methodology also offers superior classification accuracy
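
    The sketch below illustrates just one step of the pipeline, the transformation of a continuous input signal into spike trains, using a simple threshold-based (temporal-contrast) encoder. The threshold, the toy signal, and the function name are illustrative assumptions; the 3-D SNN cube, unsupervised learning, and classifier stages of NeuCube are not reproduced.

```python
import numpy as np

def threshold_spike_encode(signal, threshold=0.5):
    """Encode a continuous signal as positive/negative spike trains.

    A spike is emitted whenever the change since the last spike exceeds
    the threshold (a simple temporal-contrast scheme; the threshold
    value here is illustrative).
    """
    pos, neg = np.zeros(len(signal), int), np.zeros(len(signal), int)
    last = signal[0]
    for t in range(1, len(signal)):
        delta = signal[t] - last
        if delta >= threshold:
            pos[t], last = 1, signal[t]
        elif delta <= -threshold:
            neg[t], last = 1, signal[t]
    return pos, neg

# Toy "voxel time series" standing in for one fMRI input channel.
ts = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.1 * np.random.default_rng(0).normal(size=100)
pos_spikes, neg_spikes = threshold_spike_encode(ts, threshold=0.3)
print(pos_spikes.sum(), neg_spikes.sum())
```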

  1. The use of radionuclide skeleton visualization method in hygienic studies

    International Nuclear Information System (INIS)

    Likutova, I.V.; Bobkova, T.E.; Belova, E.A.; Bogomazov, M.Ya.

    1984-01-01

    Inhalation, intragastric, and combined effects of two cadmium compounds on rats were studied. Investigations were performed by biochemical methods and by radionuclide visualization of the skeleton, carried out delta hours after RPP introduction in a gamma-chamber with computer tape recording for subsequent mathematical treatment of the image. Using the method of radionuclide skeleton visualization, pronounced quantitative characteristics of changes in the bone tissue were obtained; the dose dependence of these changes was found to be especially important when estimating the combined effect. Biochemical methods were also used to find alterations; however, these have not been assessed quantitatively

  2. Visual Thinking Routines: A Mixed Methods Approach Applied to Student Teachers at the American University in Dubai

    Science.gov (United States)

    Gholam, Alain

    2017-01-01

    Visual thinking routines are principles based on several theories, approaches, and strategies. Such routines promote thinking skills, call for collaboration and sharing of ideas, and above all, make thinking and learning visible. Visual thinking routines were implemented in the teaching methodology graduate course at the American University in…

  3. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only the state-of-the-art research direction of both big data analysis and data visualization, but also the core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis, and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multi-source, multi-granularity, multi-modal, and complex association. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis, and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis, mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. The exploratory visual analysis method still needs a major breakthrough.

  4. Simulating Visual Learning and Optical Illusions via a Network-Based Genetic Algorithm

    Science.gov (United States)

    Siu, Theodore; Vivar, Miguel; Shinbrot, Troy

    We present a neural network model that uses a genetic algorithm to identify spatial patterns. We show that the model both learns and reproduces common visual patterns and optical illusions. Surprisingly, we find that the illusions generated are a direct consequence of the network architecture used. We discuss the implications of our results and the insights that we gain on how humans fall for optical illusions
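
    As a toy illustration of the general approach, the sketch below evolves the weights of a tiny feed-forward network with a simple genetic algorithm (selection plus mutation) so that it reproduces a square-shaped spatial pattern. The architecture, population size, and fitness function are my assumptions, not the authors' model, and no optical-illusion behaviour is demonstrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target spatial pattern: label points inside a centred square as 1.
X = rng.uniform(-1, 1, size=(200, 2))
y = ((np.abs(X[:, 0]) < 0.5) & (np.abs(X[:, 1]) < 0.5)).astype(float)

def predict(w, X):
    """Tiny 2-4-1 network; w is a flat genome of 17 weights."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16], w[16]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)   # higher is better

pop = rng.normal(size=(60, 17))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # selection of the fittest
    pop = parents[rng.integers(0, 20, size=60)] + 0.1 * rng.normal(size=(60, 17))  # mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print("accuracy:", np.mean((predict(best, X) > 0.5) == (y > 0.5)))
```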

  5. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    Science.gov (United States)

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.
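
    A minimal sketch of the "two algorithm configurations" idea using scikit-learn's LatentDirichletAllocation: two parameterizations of the same toy corpus are fitted so their document-topic distributions can be compared side by side. The corpus, priors, and topic counts are illustrative, and the interactive matching and relevance-feedback loop of the framework is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "neural networks learn visual features from images",
    "topic models summarize the thematic structure of text corpora",
    "perceptual learning improves visual acuity in children",
    "interactive visual analytics supports model interpretation",
    "statistical learning methods are applied in high energy physics",
    "reinforcement learning adapts model parameters from user feedback",
]

X = CountVectorizer(stop_words="english").fit_transform(docs)

# Two candidate configurations; a framework like the one above would let
# users compare their topic-document distributions and refine them.
config_a = LatentDirichletAllocation(n_components=2, doc_topic_prior=0.1, random_state=0)
config_b = LatentDirichletAllocation(n_components=3, doc_topic_prior=0.5, random_state=0)

theta_a = config_a.fit_transform(X)   # document-topic distributions, model A
theta_b = config_b.fit_transform(X)   # document-topic distributions, model B
print(theta_a.round(2))
print(theta_b.round(2))
```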

  6. Perceptual learning to reduce sensory eye dominance beyond the focus of top-down visual attention.

    Science.gov (United States)

    Xu, Jingping P; He, Zijiang J; Ooi, Teng Leng

    2012-05-15

    Perceptual learning is an important means for the brain to maintain its agility in a dynamic environment. Top-down focal attention, which selects task-relevant stimuli against competing ones in the background, is known to control and select what is learned in adults. Still unknown is whether the adult brain is able to learn highly visible information beyond the focus of top-down attention. If it is, we should be able to reveal a purely stimulus-driven perceptual learning occurring in functions that are largely determined by the early cortical level, where top-down attention modulation is weak. Such an automatic, stimulus-driven learning mechanism is commonly assumed to operate only in the juvenile brain. We performed perceptual training to reduce sensory eye dominance (SED), a function that taps on the eye-of-origin information represented in the early visual cortex. Two retinal locations were simultaneously stimulated with suprathreshold, dichoptic orthogonal gratings. At each location, monocular cueing triggered perception of the grating images of the weak eye and suppression of the strong eye. Observers attended only to one location and performed orientation discrimination of the gratings seen by the weak eye, while ignoring the highly visible gratings at the second, unattended, location. We found SED was not only reduced at the attended location, but also at the unattended location. Furthermore, other untrained visual functions mediated by higher cortical levels improved. An automatic, stimulus-driven learning mechanism causes synaptic alterations in the early cortical level, with a far-reaching impact on the later cortical levels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Topological Methods for Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Berres, Anne Sabine [Los Alamos National Lab. (LANL), Los Alamos, NM (United Stat

    2016-04-07

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features in scalar fields and scalar field visualization; vector field topology covers the corresponding feature extraction and visualization for vector fields.

  8. Multivariate Gradient Analysis for Evaluating and Visualizing a Learning System Platform for Computer Programming

    Science.gov (United States)

    Mather, Richard

    2015-01-01

    This paper explores the application of canonical gradient analysis to evaluate and visualize student performance and acceptance of a learning system platform. The subject of evaluation is a first year BSc module for computer programming. This uses "Ceebot," an animated and immersive game-like development environment. Multivariate…

  9. Teaching and Learning with Computers! A Method for American Indian Bilingual Classrooms.

    Science.gov (United States)

    Bennett, Ruth

    Computer instruction can offer particular benefits to the Indian child. Computer use emphasizes the visual facets of learning, teaches language based skills needed for higher education and careers, and provides types of instruction proven effective with Indian children, such as private self-testing and cooperative learning. The Hupa, Yurok, Karuk,…

  10. Blue colour preference in honeybees distracts visual attention for learning closed shapes.

    Science.gov (United States)

    Morawetz, Linde; Svoboda, Alexander; Spaethe, Johannes; Dyer, Adrian G

    2013-10-01

    Spatial vision is an important cue for how honeybees (Apis mellifera) find flowers, and previous work has suggested that spatial learning in free-flying bees is exclusively mediated by achromatic input to the green photoreceptor channel. However, some data suggested that bees may be able to use alternative channels for shape processing, and recent work shows conditioning type and training length can significantly influence bee learning and cue use. We thus tested the honeybees' ability to discriminate between two closed shapes under either absolute or differential conditioning, using eight stimuli differing in their spectral characteristics. Consistent with previous work, green contrast enabled reliable shape learning for both types of conditioning, but surprisingly, we found that bees trained with appetitive-aversive differential conditioning could additionally use colour and/or UV contrast to enable shape discrimination. Interestingly, we found that a high blue contrast initially interferes with bee shape learning, probably due to the bees' innate preference for blue colours, but with increasing experience bees can learn a variety of spectral and/or colour cues to facilitate spatial learning. Thus, the visual environment of spatial and spectral cues that bee pollinators use to find rewarding flowers appears to be richer than previously thought.

  11. Geometrical methods in learning theory

    International Nuclear Information System (INIS)

    Burdet, G.; Combe, Ph.; Nencka, H.

    2001-01-01

    The methods of information theory provide natural approaches to learning algorithms in the case of stochastic formal neural networks. Most of the classical techniques are based on some extremization principle. A geometrical interpretation of the associated algorithms provides a powerful tool for understanding the learning process and its stability and offers a framework for discussing possible new learning rules. An illustration is given using sequential and parallel learning in the Boltzmann machine

  12. The effects of inspecting and constructing part-task-specific visualizations on team and individual learning

    NARCIS (Netherlands)

    Slof, Bert; Erkens, Gijsbert; Kirschner, Paul A.; Helms-Lorenz, Michelle

    This study examined whether inspecting and constructing different part-task-specific visualizations differentially affects learning. To this end, a complex business-economics problem was structured into three phase-related part-tasks: (1) determining core concepts, (2) proposing multiple solutions,

  13. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval.

    Science.gov (United States)

    Yang, Liu; Jin, Rong; Mummert, Lily; Sukthankar, Rahul; Goode, Adam; Zheng, Bin; Hoi, Steven C H; Satyanarayanan, Mahadev

    2010-01-01

    Similarity measurement is a critical component in content-based image retrieval systems, and learning a good distance metric can significantly improve retrieval performance. However, despite extensive study, there are several major shortcomings with the existing approaches to distance metric learning that can significantly affect their application to medical image retrieval. In particular, "similarity" can mean very different things in image retrieval: resemblance in visual appearance (e.g., two images that look like one another) or similarity in semantic annotation (e.g., two images of tumors that look quite different yet are both malignant). Current approaches to distance metric learning typically address only one goal without consideration of the other. This is problematic for medical image retrieval, where the goal is to assist doctors in decision making. In these applications, given a query image, the goal is to retrieve similar images from a reference library whose semantic annotations could provide the medical professional with greater insight into the possible interpretations of the query image. If the system were to retrieve images that did not look like the query, then users would be less likely to trust the system; on the other hand, retrieving images that appear superficially similar to the query but are semantically unrelated is undesirable because that could lead users toward an incorrect diagnosis. Hence, learning a distance metric that preserves both visual resemblance and semantic similarity is important. We emphasize that, although our study is focused on medical image retrieval, the problem addressed in this work is critical to many image retrieval systems. We present a boosting framework for distance metric learning that aims to preserve both visual and semantic similarities. The boosting framework first learns a binary representation using side information, in the form of labeled pairs, and then computes the distance as a weighted Hamming
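
    The abstract breaks off at "weighted Hamming"; the sketch below shows one plausible reading of that step, a per-bit weighted Hamming distance over learned binary codes. The codes and weights are illustrative stand-ins, not the output of the boosting procedure.

```python
import numpy as np

def weighted_hamming(a, b, w):
    """Weighted Hamming distance between two binary codes.

    a, b : binary vectors (the learned representations of two images)
    w    : per-bit weights (e.g., produced by a boosting procedure)
    """
    a, b, w = map(np.asarray, (a, b, w))
    return float(np.sum(w * (a != b)))

# Illustrative 8-bit codes for a query image and two candidates.
query      = [1, 0, 1, 1, 0, 0, 1, 0]
candidate1 = [1, 0, 1, 0, 0, 0, 1, 0]
candidate2 = [0, 1, 1, 1, 1, 0, 0, 0]
weights    = [0.9, 0.2, 0.7, 0.5, 0.3, 0.1, 0.8, 0.4]

print(weighted_hamming(query, candidate1, weights))  # closer candidate
print(weighted_hamming(query, candidate2, weights))
```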

  14. Collaboration Between Art Teacher Students and Communication and Digital Media Students Promoting Subject Specific Didactics in Digital Visual Learning Design

    DEFF Research Database (Denmark)

    Buhl, Mie; Skov, Kirsten

    Student art teachers and teacher trainers took part in the design process performed by communication students. The project took its point of departure in the Danish teacher education act, under which student teachers must be educated in the practical use of digital visual media for art practices aiming …, drawing, or video. Thus, the project suggested the development of a visual learning design for achieving augmented reality (AR) experiences in urban environments and sharing them on social media. The purpose was to explore adequate approaches to working with digital media in visual arts education based … on practices and reflective processes. The theoretical framework for our discussion of the empirical project draws on current discussions of learning designs and digital media in visual arts education (Peppler 2010; Rasmussen 2015; Buhl & Ejsing-Duun 2015; Buhl 2016). Methodology: The choice of empirical…

  15. Reading Authentic EFL Text Using Visualization and Advance Organizers in a Multimedia Learning Environment

    Science.gov (United States)

    Lin, Huifen; Chen, Tsuiping

    2007-01-01

    The purpose of this experimental study was to compare the effects of different types of computer-generated visuals (static versus animated) and advance organizers (descriptive versus question) in enhancing comprehension and retention of a content-based lesson for learning English as a Foreign Language (EFL). Additionally, the study investigated…

  16. Principal component analysis study of visual and verbal metaphoric comprehension in children with autism and learning disabilities.

    Science.gov (United States)

    Mashal, Nira; Kasirer, Anat

    2012-01-01

    This research extends previous studies regarding the metaphoric competence of children with autism and learning disabilities on different measures of visual and verbal non-literal language comprehension, as well as cognitive abilities that include semantic knowledge, executive functions, similarities, and reading fluency. Thirty-seven children with autism (ASD), 20 children with learning disabilities (LD), and 21 typically developing (TD) children participated in the study. Principal components analysis was used to examine the interrelationship among the various tests in each group. Results showed different patterns in the data according to group. In particular, the results revealed that there is no dichotomy between visual and verbal metaphors in TD children; rather, metaphors are classified according to their familiarity level. In the LD group, visual metaphors were classified independently of the verbal metaphors. Verbal metaphoric understanding in the ASD group resembled the LD group. In addition, our results revealed the relative weakness of the ASD and LD children in suppressing irrelevant information. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Deep imitation learning for 3D navigation tasks.

    Science.gov (United States)

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

    Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks and asynchronous advantage actor-critic (A3C). The proposed method, as well as the reinforcement learning methods, employs deep convolutional neural networks and learns directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, while the learning-from-experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.
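
    The sketch below shows only the behavioural-cloning core implied by the abstract: a small convolutional policy trained on (frame, expert action) pairs with PyTorch. The architecture, 84x84 input size, four-action set, and random demonstration batch are assumptions; the active-learning refinement and the DQN/A3C baselines are not shown.

```python
import torch
import torch.nn as nn

N_ACTIONS = 4          # e.g., forward / back / turn-left / turn-right (assumed)

class ConvPolicy(nn.Module):
    """Small CNN mapping raw visual input to action logits."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 9 * 9, N_ACTIONS)   # feature size for 84x84 input

    def forward(self, x):
        return self.head(self.features(x))

# Fake demonstration batch: 84x84 RGB frames with expert action labels.
frames = torch.rand(32, 3, 84, 84)
actions = torch.randint(0, N_ACTIONS, (32,))

policy = ConvPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                     # a few supervised (cloning) updates
    logits = policy(frames)
    loss = loss_fn(logits, actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final cloning loss:", loss.item())
```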

  18. Deep Transfer Metric Learning.

    Science.gov (United States)

    Junlin Hu; Jiwen Lu; Yap-Peng Tan; Jie Zhou

    2016-12-01

    Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios so that their distributions are assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different data sets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations and minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method by including an additional objective on DTML, where the output of both the hidden layers and the top layer are optimized jointly. To preserve the local manifold of input data points in the metric space, we present two new methods, DTML with autoencoder regularization and DSTML with autoencoder regularization. Experimental results on face verification, person re-identification, and handwritten digit recognition validate the effectiveness of the proposed methods.
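
    A rough sketch of the three ingredients the abstract names, intra-class compactness, inter-class separation, and a source-target distribution penalty, written as a toy PyTorch loss. The network, margin, weighting, and the mean-difference stand-in for the divergence term are my assumptions and do not reproduce the published DTML formulation.

```python
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(64, 32), nn.Tanh(), nn.Linear(32, 16))  # toy metric network

def pairwise_sq_dists(z):
    return torch.cdist(z, z) ** 2

def dtml_style_loss(z_src, y_src, z_tgt, margin=1.0, beta=0.5):
    """Illustrative objective: pull same-class pairs together, push
    different-class pairs beyond a margin, and align the mean source
    and target embeddings (a crude stand-in for a divergence term)."""
    d = pairwise_sq_dists(z_src)
    same = (y_src[:, None] == y_src[None, :]).float()
    intra = (same * d).sum() / same.sum()
    inter = ((1 - same) * torch.clamp(margin - d, min=0)).sum() / (1 - same).sum()
    divergence = (z_src.mean(0) - z_tgt.mean(0)).pow(2).sum()
    return intra + inter + beta * divergence

x_src, y_src = torch.rand(40, 64), torch.randint(0, 4, (40,))   # labeled source domain
x_tgt = torch.rand(40, 64)                                       # unlabeled target domain
loss = dtml_style_loss(embed(x_src), y_src, embed(x_tgt))
loss.backward()
print(loss.item())
```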

  19. Visual field examination method using virtual reality glasses compared with the Humphrey perimeter

    Directory of Open Access Journals (Sweden)

    Tsapakis S

    2017-08-01

    Full Text Available Stylianos Tsapakis, Dimitrios Papaconstantinou, Andreas Diagourtas, Konstantinos Droutsas, Konstantinos Andreanos, Marilita M Moschos, Dimitrios Brouzas, 1st Department of Ophthalmology, National and Kapodistrian University of Athens, Athens, Greece. Purpose: To present a visual field examination method using virtual reality glasses and to evaluate the reliability of the method by comparing the results with those of the Humphrey perimeter. Materials and methods: Virtual reality glasses, a smartphone with a 6 inch display, and software that implements a fast-threshold 3 dB step staircase algorithm for the central 24° of the visual field (52 points) were used to test 20 eyes of 10 patients, who were tested in a random and consecutive order as they appeared in our glaucoma department. The results were compared with those obtained from the same patients using the Humphrey perimeter. Results: A high correlation coefficient (r=0.808, P<0.0001) was found between the virtual reality visual field test and the Humphrey perimeter visual field. Conclusion: Visual field examination results using virtual reality glasses have a high correlation with the Humphrey perimeter, making the method suitable for possible clinical use. Keywords: visual fields, virtual reality glasses, perimetry, visual fields software, smartphone
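
    The sketch below shows a simple 3 dB up/down staircase at a single test point, the kind of procedure the abstract's "fast-threshold 3 dB step staircase algorithm" suggests. The starting level, stopping rule, and simulated observer are assumptions, not the authors' implementation.

```python
import random

def staircase_threshold(sensitivity_db, start_db=25.0, step_db=3.0, reversals_to_stop=4):
    """Estimate retinal sensitivity at one test point with a 3 dB staircase.

    In perimetry, higher dB means a dimmer stimulus; a stimulus is seen
    when its attenuation does not exceed the patient's sensitivity.
    After a 'seen' response the stimulus is dimmed (dB increased); after
    'not seen' it is brightened, until enough response reversals occur.
    The stopping rule and simulated observer below are illustrative.
    """
    level, last_seen, reversals, history = start_db, None, 0, []
    while reversals < reversals_to_stop:
        seen = (level <= sensitivity_db) or (random.random() < 0.05)  # simulated patient
        history.append(level)
        if last_seen is not None and seen != last_seen:
            reversals += 1
        last_seen = seen
        level += step_db if seen else -step_db
    return sum(history[-reversals_to_stop:]) / reversals_to_stop

random.seed(1)
print(round(staircase_threshold(sensitivity_db=28.0), 1))
```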

  20. Effects of Using Graphics and Animation Online Problem-Based Learning on Visualization Skills among Students

    Science.gov (United States)

    Ariffin, A.; Samsudin, M. A.; Zain, A. N. Md.; Hamzah, N.; Ismail, M. E.

    2017-05-01

    The Engineering Drawing subject develops geometry drawing skills toward a more professional level. To grasp the concepts in Engineering Drawing, students need good visualization skills: visualization helps students form a mental starting point before translating it into a drawing. For this reason, Problem-Based Learning (PBL) using an animation mode (PBL-A) and a graphics mode (PBL-G) was implemented in class; repeated problem-solving helps students interpret the steps of an engineering drawing correctly and accurately. This study examined the effects of online PBL-A and online PBL-G on the visualization skills of polytechnic students. Sixty-eight mechanical engineering students were involved in this study. The visualization test adapted from Bennett, Seashore, and Wesman was used. Results showed significant differences in post-test mean scores of visualization skills between the students enrolled in PBL-G and the group of students who attended PBL-A online, after the effect of the pre-test mean score was controlled. Therefore, the animation mode has a positive impact on increasing students’ visualization skills.

  1. Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method

    Science.gov (United States)

    Dan, A.; Reiner, M.

    2018-01-01

    Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…

  2. Deep learning versus traditional machine learning methods for aggregated energy demand prediction

    NARCIS (Netherlands)

    Paterakis, N.G.; Mocanu, E.; Gibescu, M.; Stappers, B.; van Alst, W.

    2018-01-01

    In this paper, deep learning methods, which are more advanced than traditional machine learning approaches, are explored with the purpose of accurately predicting aggregated energy consumption. Despite the fact that a wide range of machine learning methods have been applied to

  3. Inferring Interaction Force from Visual Information without Using Physical Force Sensors.

    Science.gov (United States)

    Hwang, Wonjun; Lim, Soo-Chul

    2017-10-26

    In this paper, we present an interaction force estimation method that uses visual information rather than that of a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object, where the shape of the object is changed by an external force. The force applied to the target can be estimated by means of the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
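
    A minimal sketch of the idea of regressing force from a short image sequence: per-frame CNN features feed an LSTM whose last state is mapped to a scalar force, trained against force-sensor readings that are needed only at training time. Layer sizes, sequence length, and the synthetic batch are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class VisualForceEstimator(nn.Module):
    """CNN per frame -> LSTM over the sequence -> scalar force estimate."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1]).squeeze(-1)      # force at the last time step

model = VisualForceEstimator()
frames = torch.rand(4, 10, 3, 64, 64)               # 4 sequences of 10 frames
forces = torch.rand(4)                               # force-sensor ground truth (training only)
loss = nn.functional.mse_loss(model(frames), forces)
loss.backward()
print(loss.item())
```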

  4. Effect of Methods of Learning and Self Regulated Learning toward Outcomes of Learning Social Studies

    Science.gov (United States)

    Tjalla, Awaluddin; Sofiah, Evi

    2015-01-01

    This research aims to reveal the influence of learning methods and self-regulated learning on students' learning scores in the Social Studies subject. The research was done in an Islamic junior high school (MTs Manba'ul Ulum), Batuceper City, Tangerang, using a quasi-experimental method. The research employed a simple random sampling technique with 28 students. Data were…

  5. Cell-fusion method to visualize interphase nuclear pore formation.

    Science.gov (United States)

    Maeshima, Kazuhiro; Funakoshi, Tomoko; Imamoto, Naoko

    2014-01-01

    In eukaryotic cells, the nucleus is a complex and sophisticated organelle that organizes genomic DNA to support essential cellular functions. The nuclear surface contains many nuclear pore complexes (NPCs), channels for macromolecular transport between the cytoplasm and nucleus. It is well known that the number of NPCs almost doubles during interphase in cycling cells. However, the mechanism of NPC formation is poorly understood, presumably because a practical system for analysis does not exist. The most difficult obstacle in the visualization of interphase NPC formation is that NPCs already exist after nuclear envelope formation, and these existing NPCs interfere with the observation of nascent NPCs. To overcome this obstacle, we developed a novel system using the cell-fusion technique (heterokaryon method), previously also used to analyze the shuttling of macromolecules between the cytoplasm and the nucleus, to visualize the newly synthesized interphase NPCs. In addition, we used a photobleaching approach that validated the cell-fusion method. We recently used these methods to demonstrate the role of cyclin-dependent protein kinases and of Pom121 in interphase NPC formation in cycling human cells. Here, we describe the details of the cell-fusion approach and compare the system with other NPC formation visualization methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Perceptions Concerning Visual Culture Dialogues of Visual Art Pre-Service Teachers

    Science.gov (United States)

    Mamur, Nuray

    2012-01-01

    The commentary that visual art teachers provide on visual art, which helps students process visual culture, is important. This study attempts to describe the effect of including visual culture, grounded in everyday aesthetic experiences, in the art education learning process. The action research design, which is a qualitative approach, was conducted…

  7. Contributions of Letter-Speech Sound Learning and Visual Print Tuning to Reading Improvement: Evidence from Brain Potential and Dyslexia Training Studies

    Directory of Open Access Journals (Sweden)

    Gorka Fraga González

    2017-01-01

    Full Text Available We use a neurocognitive perspective to discuss the contribution of learning letter-speech sound (L-SS) associations and visual specialization in the initial phases of reading in dyslexic children. We review findings from associative learning studies on related cognitive skills important for establishing and consolidating L-SS associations. Then we review brain potential studies, including our own, that yielded two markers associated with reading fluency. Here we show that the marker related to visual specialization (N170) predicts word and pseudoword reading fluency in children who received additional practice in the processing of morphological word structure. Conversely, L-SS integration (indexed by mismatch negativity, MMN) may only remain important when direct orthography to semantic conversion is not possible, such as in pseudoword reading. In addition, the correlation between these two markers supports the notion that multisensory integration facilitates visual specialization. Finally, we review the role of implicit learning and executive functions in audiovisual learning in dyslexia. Implications for remedial research are discussed and suggestions for future studies are presented.

  8. TRACX2: a connectionist autoencoder using graded chunks to model infant visual statistical learning.

    Science.gov (United States)

    Mareschal, Denis; French, Robert M

    2017-01-05

    Even newborn infants are able to extract structure from a stream of sensory inputs; yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights and recognizing these chunks when they re-occur in the input stream. Chunks are graded rather than all-or-nothing in nature. As chunks are learnt, their component parts become more and more tightly bound together. TRACX2 successfully models the data from five experiments from the infant visual statistical learning literature, including tasks involving forward and backward transitional probabilities, low-salience embedded chunk items, part-sequences and illusory items. The model also captures performance differences across ages through the tuning of a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  9. The method of global learning in teaching foreign languages

    Directory of Open Access Journals (Sweden)

    Tatjana Dragovič

    2001-12-01

    Full Text Available The authors describe the method of global learning of foreign languages, which is based on the principles of neurolinguistic programming (NLP. According to this theory, the educator should use the method of the so-called periphery learning, where students learn relaxation techniques and at the same time they »incidentally « or subconsciously learn a foreign language. The method of global learning imitates successful strategies of learning in early childhood and therefore creates a relaxed attitude towards learning. Global learning is also compared with standard methods.

  10. Auditory-visual stimulus pairing enhances perceptual learning in a songbird.

    Science.gov (United States)

    Hultsch; Schleuss; Todt

    1999-07-01

    In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.

  11. Do students’ styles of learning affect how they adapt to learning methods and to the learning environment?

    OpenAIRE

    Topal, Kenan; Sarıkaya, Özlem; Basturk, Ramazan; Buke, Akile

    2015-01-01

    Objectives: The process of development and evaluation of undergraduate medical education programs should include analysis of learners’ characteristics, needs, and perceptions about learning methods. This study aims to evaluate medical students’ perceptions about problem-based learning methods and to compare these results with their individual learning styles. Materials and Methods: The survey was conducted at Marmara University Medical School, where problem-based learning was implemented in the...

  12. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    Science.gov (United States)

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.

  13. Jordan-3: measuring visual reversals in children as symptoms of learning disability and attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Jordan, Brian T; Martin, Nancy; Austin, J Sue

    2012-12-01

    The purpose of this research was to establish new norms for the Jordan-3 for children ages 5 to 18 years. The research also investigated the frequency of visual reversals in children previously identified as having reading disability, attention-deficit/hyperactivity disorder, and broader learning disabilities. Participants were regular education students, ages 5 through 18 years, and special education students previously diagnosed with attention-deficit/hyperactivity disorder, reading disability, or broader learning disability. Jordan-3 Accuracy and Error raw scores were compared to assess if there was a significant difference between the two groups. Mean Accuracy and Error scores were compared for males and females. Children with learning disability and attention-deficit/hyperactivity disorder had higher reversals when compared to regular education children, which lends continued support to the Jordan-3 as a valid and reliable measure of visual reversals in children and adolescents. This study illustrates the utility of the Jordan-3 when assessing children who may require remediation to reach their academic potential.

  14. Top-down inputs enhance orientation selectivity in neurons of the primary visual cortex during perceptual learning.

    Directory of Open Access Journals (Sweden)

    Samat Moldakarimov

    2014-08-01

    Full Text Available Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes the response properties of V1 neurons. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organizations in V1. These results provide new insights to the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of the feedback connections.

  15. On the learning difficulty of visual and auditory modal concepts: Evidence for a single processing system.

    Science.gov (United States)

    Vigo, Ronaldo; Doan, Karina-Mikayla C; Doan, Charles A; Pinegar, Shannon

    2018-02-01

    The logic operators (e.g., "and," "or," "if, then") play a fundamental role in concept formation, syntactic construction, semantic expression, and deductive reasoning. In spite of this very general and basic role, there are relatively few studies in the literature that focus on their conceptual nature. In the current investigation, we examine, for the first time, the learning difficulty experienced by observers in classifying members belonging to these primitive "modal concepts" instantiated with sets of acoustic and visual stimuli. We report results from two categorization experiments that suggest the acquisition of acoustic and visual modal concepts is achieved by the same general cognitive mechanism. Additionally, we attempt to account for these results with two models of concept learning difficulty: the generalized invariance structure theory model (Vigo in Cognition 129(1):138-162, 2013, Mathematical principles of human conceptual behavior, Routledge, New York, 2014) and the generalized context model (Nosofsky in J Exp Psychol Learn Mem Cogn 10(1):104-114, 1984, J Exp Psychol 115(1):39-57, 1986).

  16. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous
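
    The working-set idea at the heart of this course description (making visualization effort proportional to what is visible, not to the full data size) can be sketched without any GPU code at all. The toy below keeps a fixed-size, least-recently-used cache of volume bricks and loads a brick only when a view first touches it; the brick size, the loader, and the brick identifiers are placeholder assumptions, and a real implementation would stream loaded bricks into GPU memory (e.g., via a virtual-texture page table) rather than into a Python dictionary.

```python
from collections import OrderedDict
import numpy as np

# Sketch of an out-of-core brick cache: the full volume never resides in memory;
# only the bricks touched by the current view are loaded, and a least-recently-
# used policy evicts bricks when the (GPU-sized) budget is exceeded. The brick
# shape, loader, and volume layout are all placeholders for illustration.

BRICK = 32  # voxels per brick edge (assumption)

class BrickCache:
    def __init__(self, max_bricks):
        self.max_bricks = max_bricks          # working-set budget, e.g. GPU memory
        self.cache = OrderedDict()            # brick_id -> voxel data (LRU order)

    def _load_brick(self, brick_id):
        # Stand-in for reading a brick from disk or a remote data store.
        rng = np.random.default_rng(hash(brick_id) % (2 ** 32))
        return rng.random((BRICK, BRICK, BRICK), dtype=np.float32)

    def get(self, brick_id):
        if brick_id in self.cache:
            self.cache.move_to_end(brick_id)      # mark as recently used
        else:
            if len(self.cache) >= self.max_bricks:
                self.cache.popitem(last=False)    # evict least recently used
            self.cache[brick_id] = self._load_brick(brick_id)
        return self.cache[brick_id]

# Only the bricks intersected by the current view are requested; the rest of the
# (potentially terabyte-scale) volume is never touched.
cache = BrickCache(max_bricks=4)
visible_bricks = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (0, 0, 0), (2, 2, 2), (3, 0, 0)]
for b in visible_bricks:
    cache.get(b)
print("resident bricks:", list(cache.cache))
```

    The same pattern underlies ray-guided volume rendering: the rays determine which bricks are requested, so the resident set tracks visibility rather than the size of the data on disk.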

  17. Teach yourself visually Windows 10

    CERN Document Server

    McFedries, Paul

    2015-01-01

    Learn Windows 10 visually with step-by-step instructions. Teach Yourself VISUALLY Windows 10 is the visual learner's guide to the latest Windows upgrade. Completely updated to cover all the latest features, this book walks you step-by-step through over 150 essential Windows tasks. Using full-color screenshots and clear instructions, you'll learn your way around the interface, set up user accounts, play media files, download photos from your camera, go online, set up email, and much more. You'll even learn how to customize Windows 10 to suit the way you work best, troubleshoot and repair common

  18. Pattern recognition & machine learning

    CERN Document Server

    Anzai, Y

    1992-01-01

    This is the first text to provide a unified and self-contained introduction to visual pattern recognition and machine learning. It is useful as a general introduction to artificial intelligence and knowledge engineering, and no previous knowledge of pattern recognition or machine learning is necessary. It covers the basics of a range of pattern recognition and machine learning methods. Translated from Japanese, the book also features chapter exercises, keywords, and summaries.

  19. Learning about the scale of the solar system using digital planetarium visualizations

    Science.gov (United States)

    Yu, Ka Chun; Sahami, Kamran; Dove, James

    2017-07-01

    We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and by large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (N = 973 students in total). Students who visited the planetarium not only had the greatest learning gains, but their performance also increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the planetarium students reveal that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However, the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.

  20. 3D visualization and simulation to enhance nuclear learning

    International Nuclear Information System (INIS)

    Dimitri-Hakim, R.

    2012-01-01

    The nuclear power industry is facing a very real challenge that affects its day-to-day activities: a rapidly aging workforce. For New Nuclear Build (NNB) countries, the challenge is even greater: they must develop a completely new workforce with little to no prior experience of or exposure to nuclear power. Workforce replacement introduces workers of a new generation with different backgrounds and affinities than their predecessors. Major lifestyle differences between the new and the old generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content, and quick access to information are now necessary to achieve a high level of retention. (author)