WorldWideScience

Sample records for represent visual input

  1. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind…

  3. The content of visual working memory alters processing of visual input prior to conscious access: Evidence from pupillometry

    NARCIS (Netherlands)

    Gayet, S.; Paffen, C.L.E.; Guggenmos, M.; Sterzer, P.; Stigchel, S. van der

    2017-01-01

Visual working memory (VWM) allows for keeping relevant visual information available after termination of its sensory input. Storing information in VWM, however, affects concurrent conscious perception of visual input: initially suppressed visual input gains prioritized access to consciousness when…

  4. Keeping in Touch With the Visual System: Spatial Alignment and Multisensory Integration of Visual-Somatosensory Inputs

    Directory of Open Access Journals (Sweden)

    Jeannette Rose Mahoney

    2015-08-01

Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex, where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although evidence increasingly points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.
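The additive-model comparison described above (simultaneous VS responses versus summed V+S responses) can be sketched numerically. The function name, array shapes, and toy numbers below are illustrative assumptions, not from the study:

```python
import numpy as np

def multisensory_interaction(erp_vs, erp_v, erp_s):
    """Additive-model test: interaction = VS - (V + S), per time point.
    Each erp_* array has shape (n_trials, n_timepoints); a nonzero mean
    interaction suggests nonlinear multisensory integration."""
    summed = erp_v.mean(axis=0) + erp_s.mean(axis=0)
    return erp_vs.mean(axis=0) - summed

rng = np.random.default_rng(0)
v = rng.normal(1.0, 0.1, (40, 300))    # toy unisensory visual ERPs
s = rng.normal(0.5, 0.1, (40, 300))    # toy unisensory somatosensory ERPs
vs = rng.normal(1.8, 0.1, (40, 300))   # toy multisensory ERPs (super-additive)
interaction = multisensory_interaction(vs, v, s)
```

In the study itself, such a difference wave would be evaluated for statistical significance at each time point to date the earliest integration effect.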

  5. Experimental System for Investigation of Visual Sensory Input in Postural Feedback Control

    Directory of Open Access Journals (Sweden)

    Jozef Pucik

    2012-01-01

The human postural control system is a biological feedback system responsible for maintaining upright stance. Vestibular, proprioceptive, and visual sensory inputs provide the most important information to the control system, which controls the body centre of mass (COM) in order to stabilize the human body, which resembles an inverted pendulum. The COM can be measured indirectly by means of a force plate as the centre of pressure (COP); the clinically used measurement method is referred to as posturography. In this paper, conventional static posturography is extended by visual stimulation, which provides insight into the role of visual information in balance control. Visual stimuli were designed to induce body sway in four specific directions: forward, backward, left, and right. Stabilograms were measured using the proposed single-PC-based system and processed to calculate velocity waveforms and posturographic parameters. The parameters extracted from pre-stimulus and on-stimulus periods exhibit statistically significant differences.
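The indirect measurement mentioned above, estimating the COP from force-plate signals, follows from the standard force-plate moment equations. This is a minimal sketch assuming the plate origin lies at its surface; the function name and numbers are illustrative:

```python
def centre_of_pressure(fz, mx, my):
    """COP coordinates from the vertical force Fz (N) and the moments
    Mx, My (N*m) about the plate's x and y axes:
    COPx = -My / Fz, COPy = Mx / Fz."""
    if fz == 0:
        raise ValueError("no vertical load on the plate")
    return -my / fz, mx / fz

cop_x, cop_y = centre_of_pressure(fz=700.0, mx=35.0, my=-14.0)
print(cop_x, cop_y)  # 0.02 0.05 (metres)
```

A stabilogram is simply this pair computed for every sample of the force-plate recording.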

  6. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together, these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account…

  7. The comparison of visual working memory representations with perceptual inputs.

    Science.gov (United States)

    Hyun, Joo-seok; Woodman, Geoffrey F; Vogel, Edward K; Hollingworth, Andrew; Luck, Steven J

    2009-08-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments in which manual reaction times, saccadic reaction times, and event-related potential latencies were examined. However, these experiments also showed that a slow, limited-capacity process must occur before the observer can make a manual change detection response.

  8. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    Science.gov (United States)

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  10. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    Science.gov (United States)

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  11. Sparse coding can predict primary visual cortex receptive field changes induced by abnormal visual input.

    Science.gov (United States)

    Hunt, Jonathan J; Dayan, Peter; Goodhill, Geoffrey J

    2013-01-01

    Receptive fields acquired through unsupervised learning of sparse representations of natural scenes have similar properties to primary visual cortex (V1) simple cell receptive fields. However, what drives in vivo development of receptive fields remains controversial. The strongest evidence for the importance of sensory experience in visual development comes from receptive field changes in animals reared with abnormal visual input. However, most sparse coding accounts have considered only normal visual input and the development of monocular receptive fields. Here, we applied three sparse coding models to binocular receptive field development across six abnormal rearing conditions. In every condition, the changes in receptive field properties previously observed experimentally were matched to a similar and highly faithful degree by all the models, suggesting that early sensory development can indeed be understood in terms of an impetus towards sparsity. As previously predicted in the literature, we found that asymmetries in inter-ocular correlation across orientations lead to orientation-specific binocular receptive fields. Finally we used our models to design a novel stimulus that, if present during rearing, is predicted by the sparsity principle to lead robustly to radically abnormal receptive fields.
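The sparse coding models referenced here typically minimise a reconstruction error plus an L1 sparsity penalty on the coefficients. A minimal inference sketch (ISTA on a random unit-norm dictionary rather than natural-scene patches; all names and sizes are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse inference: minimise 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    lr = 1.0 / np.linalg.norm(D, 2) ** 2   # step size from the Lipschitz bound
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + lr * (D.T @ (x - D @ a)), lr * lam)
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -2.0, 1.5]  # a 3-sparse ground-truth code
x = D @ a_true                          # synthetic 'image patch'
a_hat = ista(D, x, lam=0.05)
```

In the sparse coding account of V1, the columns of a dictionary learned on natural scenes (not random, as here) come to resemble simple-cell receptive fields.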

  12. Visual and Auditory Input in Second-Language Speech Processing

    Science.gov (United States)

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…

  13. Learning Arm/Hand Coordination with an Altered Visual Input

    Directory of Open Access Journals (Sweden)

    Simona Denisia Iftime Nielsen

    2010-01-01

The focus of this study was to test a novel tool for the analysis of motor coordination under an altered visual input. The altered visual input was created using special glasses that presented the view recorded by a video camera placed at various positions around the subject: frontal (F), lateral (L), or top (T) with respect to the subject. In ten subjects, we studied the differences between the arm-end (wrist) trajectories while grasping an object under altered vision (F, L, and T conditions) and normal vision (N). The outcome measures were trajectory errors, movement parameters, and execution time. We found substantial trajectory errors and an increased execution time at the baseline of the study. We also found that trajectory errors decreased in all conditions after three days of practice (20 minutes per day) with altered vision in the F condition only, suggesting that recalibration of the visual system occurred relatively quickly. These results indicate that this recalibration occurs via movement training in an altered condition. The results also suggest that recalibration is more difficult to achieve for altered vision in the F and L conditions than in the T condition. This study has direct implications for the design of new rehabilitation systems.

  14. Beyond the Vestibulo-Ocular Reflex: Vestibular Input is Processed Centrally to Achieve Visual Stability

    Directory of Open Access Journals (Sweden)

    Edwin S. Dalmaijer

    2018-03-01

The current study presents a re-analysis of data from Zink et al. (1998, Electroencephalography and Clinical Neurophysiology, 107), who administered galvanic vestibular stimulation through unipolar direct current. They placed electrodes on each mastoid and applied either right- or left-anodal stimulation. Ocular torsion and visual tilt were measured under different stimulation intensities. New modelling introduced here demonstrates that directly proportional linear models fit the relationship between vestibular input and visual tilt reasonably well, but not that between vestibular input and ocular torsion. Instead, an exponential model characterised by a decreasing slope and an asymptote fitted best. These results demonstrate that, in the data presented by Zink et al. (1998), ocular torsion could not completely account for visual tilt. This suggests that vestibular input is processed centrally to stabilise vision when ocular torsion is insufficient. Potential mechanisms and seemingly conflicting literature are discussed.
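The model comparison described, a directly proportional line versus a saturating exponential with decreasing slope and an asymptote, can be sketched on synthetic data. The functional form y = A(1 - e^(-x/tau)) and every number below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def saturating(x, amp, tau):
    """Exponential rise toward the asymptote `amp` with decreasing slope."""
    return amp * (1.0 - np.exp(-x / tau))

def fit_saturating(x, y, taus=np.linspace(0.1, 10.0, 200)):
    """Grid-search tau; for each tau the amplitude has the closed-form
    least-squares solution amp = <y, f> / <f, f> with f = 1 - exp(-x/tau)."""
    best = (np.inf, 0.0, 0.0)
    for tau in taus:
        f = 1.0 - np.exp(-x / tau)
        amp = (y @ f) / (f @ f)
        sse = float(np.sum((y - amp * f) ** 2))
        if sse < best[0]:
            best = (sse, amp, tau)
    return best  # (sse, amp, tau)

x = np.linspace(0.0, 5.0, 30)          # toy stimulation intensities
y = saturating(x, amp=4.0, tau=1.5)    # toy noiseless 'ocular torsion' data
sse_exp, amp_hat, tau_hat = fit_saturating(x, y)
slope = (x @ y) / (x @ x)              # best directly proportional line (through origin)
sse_lin = float(np.sum((y - slope * x) ** 2))
```

On saturating data the proportional line leaves much larger residuals than the exponential fit, mirroring the paper's conclusion for ocular torsion.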

  15. Learning Complex Grammar in the Virtual Classroom: A Comparison of Processing Instruction, Structured Input, Computerized Visual Input Enhancement, and Traditional Instruction

    Science.gov (United States)

    Russell, Victoria

    2012-01-01

    This study investigated the effects of processing instruction (PI) and structured input (SI) on the acquisition of the subjunctive in adjectival clauses by 92 second-semester distance learners of Spanish. Computerized visual input enhancement (VIE) was combined with PI and SI in an attempt to increase the salience of the targeted grammatical form…

  16. Visual Input Enhancement and Grammar Learning: A Meta-Analytic Review

    Science.gov (United States)

    Lee, Sang-Ki; Huang, Hung-Tzu

    2008-01-01

    Effects of pedagogical interventions with visual input enhancement on grammar learning have been investigated by a number of researchers during the past decade and a half. The present review delineates this research domain via a systematic synthesis of 16 primary studies (comprising 20 unique study samples) retrieved through an exhaustive…

  17. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  18. Correspondence between visual and electrical input filters of ON and OFF mouse retinal ganglion cells

    Science.gov (United States)

    Sekhar, S.; Jalligampala, A.; Zrenner, E.; Rathbun, D. L.

    2017-08-01

Objective. Over the past two decades retinal prostheses have made major strides in restoring functional vision to patients blinded by diseases such as retinitis pigmentosa. Presently, implants use single pulses to activate the retina. Though this stimulation paradigm has proved beneficial to patients, an unresolved problem is the inability to selectively stimulate the ON and OFF visual pathways. To this end, our goal was to test, using white-noise, voltage-controlled, cathodic, monophasic pulse stimulation, whether different retinal ganglion cell (RGC) types in the wild-type retina have different electrical input filters. This is an important precursor to addressing pathway-selective stimulation. Approach. Using full-field visual flash and electrical and visual Gaussian noise stimulation, combined with the technique of spike-triggered averaging (STA), we calculate the electrical and visual input filters for different types of RGCs (classified as ON, OFF, or ON-OFF based on their response to the flash stimuli). Main results. Examining the STAs, we found that the spiking activity of ON cells during electrical stimulation correlates with a decrease in the voltage magnitude preceding a spike, while the spiking activity of OFF cells correlates with an increase in the voltage preceding a spike. No electrical preference was found for ON-OFF cells. Comparing STAs of wild-type and rd10 mice revealed narrower electrical STA deflections with shorter latencies in rd10. Significance. This study is the first comparison of visual cell types and their corresponding temporal electrical input filters in the retina. The altered input filters in degenerated rd10 retinas are consistent with photoreceptor stimulation underlying visual-type-specific electrical STA shapes in wild-type retina. It is therefore conceivable that existing implants could target partially degenerated photoreceptors that have only lost their outer segments, but not somas, to selectively activate the ON and OFF pathways.
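Spike-triggered averaging as used here reduces to averaging the stimulus windows that precede each spike. A minimal sketch on a simulated cell, where the filter shape, threshold, and all names are invented for illustration:

```python
import numpy as np

def spike_triggered_average(stimulus, spike_frames, n_lags=30):
    """Mean of the `n_lags` stimulus frames preceding each spike:
    an estimate of the cell's temporal input filter."""
    windows = [stimulus[i - n_lags:i] for i in spike_frames if i >= n_lags]
    return np.mean(windows, axis=0)

rng = np.random.default_rng(2)
stim = rng.normal(size=5000)              # Gaussian white-noise stimulus
k = np.exp(-np.arange(30)[::-1] / 8.0)    # toy filter: recent frames weighted most
# simulate spikes whenever the filtered stimulus crosses a high threshold
drive = np.array([stim[i - 30:i] @ k for i in range(30, len(stim))])
spike_frames = np.nonzero(drive > np.quantile(drive, 0.95))[0] + 30
sta = spike_triggered_average(stim, spike_frames)
```

For a Gaussian stimulus the STA is proportional to the underlying filter, so its peak falls on the most recent frames here; the paper's electrical STAs are the same computation with pulse amplitudes as the stimulus.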

  19. Speaking Math--A Voice Input, Speech Output Calculator for Students with Visual Impairments

    Science.gov (United States)

    Bouck, Emily C.; Flanagan, Sara; Joshi, Gauri S.; Sheikh, Waseem; Schleppenbach, Dave

    2011-01-01

    This project explored a newly developed computer-based voice input, speech output (VISO) calculator. Three high school students with visual impairments educated at a state school for the blind and visually impaired participated in the study. The time they took to complete assessments and the average number of attempts per problem were recorded…

  20. Effect of Power Point Enhanced Teaching (Visual Input) on Iranian Intermediate EFL Learners' Listening Comprehension Ability

    Science.gov (United States)

    Sehati, Samira; Khodabandehlou, Morteza

    2017-01-01

    The present investigation was an attempt to study on the effect of power point enhanced teaching (visual input) on Iranian Intermediate EFL learners' listening comprehension ability. To that end, a null hypothesis was formulated as power point enhanced teaching (visual input) has no effect on Iranian Intermediate EFL learners' listening…

  1. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    Science.gov (United States)

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  2. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, Ritter, and Vaughan (1998, NeuroReport, 9, 4167-4170) and Sussman and Gumenyuk (2005, NeuroReport, 16, 1519-1523), presenting task-irrelevant, infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant, frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity.
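The two stimulus conditions are easy to reproduce. A sketch of the sequence generation, with function name, seed, and defaults chosen for illustration:

```python
import random

def oddball_sequence(n, fixed=False, p_deviant=0.2, period=5, seed=0):
    """'S' = standard-luminance stimulus, 'D' = luminance deviant.
    fixed: a deviant at every `period`-th position (SSSSD...), giving a
    large-scale sequential regularity; randomized: deviants placed at
    random positions with the same overall probability."""
    if fixed:
        return ''.join('D' if (i + 1) % period == 0 else 'S' for i in range(n))
    rng = random.Random(seed)
    return ''.join('D' if rng.random() < p_deviant else 'S' for _ in range(n))

print(oddball_sequence(15, fixed=True))  # SSSSDSSSSDSSSSD
```

Both conditions deliver deviants at the same 20% rate; only the predictability of their positions differs, which is what the visual MMN comparison exploits.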

  3. Student preparation and the power of visual input in veterinary surgical education

    DEFF Research Database (Denmark)

    Langebæk, Rikke; Nielsen, Søren Saxmose; Koch, Bodil Cathrine

    2016-01-01

In recent years, veterinary educational institutions have implemented alternative teaching methods, including video demonstrations of surgical procedures. However, the power of the dynamic visual input from videos in relation to recollection of a surgical procedure has never been evaluated. The aim … a basic surgical skills course, 112 fourth-year veterinary students participated in the study by completing a questionnaire regarding method of recollection, influence of individual types of educational input, and homework preparation. Furthermore, we observed students performing an orchiectomy in a terminal pig lab. Preparation for the pig lab consisted of homework (textbook, online material including videos), lecture, cadaver lab, and toy animal models in a skills lab. In the instructional video, a detail was used that was not described elsewhere. Results show that 60% of the students used a visual …

  4. Swim pacemakers in box jellyfish are modulated by the visual input

    DEFF Research Database (Denmark)

    Garm, Anders Lydik; Bielecki, Jan

    2008-01-01

A major part of the cubozoan central nervous system is situated in the eye-bearing rhopalia. One of the neuronal output channels from the rhopalia carries a swim pacemaker signal, which has a one-to-one relation with the swim contractions of the bell-shaped body. Given the advanced visual system … of box jellyfish, and that the pacemaker signal originates in the vicinity of these eyes, it seems logical to assume that the pacemakers are modified by the visual input. Here, the firing frequency and distribution of inter-signal intervals (ISIs) of single pacemakers are examined in the Caribbean box jellyfish…

  5. Density of Visual Input Enhancement and Grammar Learning: A Research Proposal

    Science.gov (United States)

    Tran, Thu Hoang

    2009-01-01

    Research in the field of second language acquisition (SLA) has been done to ascertain the effectiveness of visual input enhancement (VIE) on grammar learning. However, one issue remains unexplored: the effects of VIE density on grammar learning. This paper presents a research proposal to investigate the effects of the density of VIE on English…

  6. Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

    NARCIS (Netherlands)

    Gayet, S.; van Maanen, L.; Heilbron, M.; Paffen, C.L.E.; Van Der Stigchel, S.

    2016-01-01

The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate…

  7. Reorganization of Visual Callosal Connections Following Alterations of Retinal Input and Brain Damage

    Science.gov (United States)

    Restani, Laura; Caleo, Matteo

    2016-01-01

Vision is a very important sensory modality in humans. Visual disorders are numerous and arise from diverse and complex causes. Deficits in visual function are highly disabling from a social point of view and, in addition, cause a considerable economic burden. For all these reasons there is an intense effort by the scientific community to gather knowledge on visual deficit mechanisms and to find possible new strategies for recovery and treatment. In this review, we focus on an important and sometimes neglected player in visual function, the corpus callosum (CC). The CC is the major white matter structure in the brain and is involved in information processing between the two hemispheres. In particular, visual callosal connections interconnect homologous areas of the visual cortices, binding together the two halves of the visual field. This interhemispheric communication plays a significant role in visual cortical output. Here, we will first review the essential literature on the physiology of the callosal connections in normal vision. The available data support the view that the callosum contributes both excitation and inhibition to the target hemisphere, with a dynamic adaptation to the strength of the incoming visual input. Next, we will focus on data showing how callosal connections may sense visual alterations and respond to the classical paradigm for the study of visual plasticity, i.e., monocular deprivation (MD). This is a prototypical example of a model for the study of callosal plasticity in pathological conditions (e.g., strabismus and amblyopia) characterized by unbalanced input from the two eyes. We will also discuss the findings of callosal alterations in blind subjects. Noteworthy, we will discuss data showing that inter-hemispheric transfer mediates recovery of visual responsiveness following cortical damage. Finally, we will provide an overview of how dysfunction of callosal projections could contribute to pathologies such as neglect and occipital…

  8. REORGANIZATION OF VISUAL CALLOSAL CONNECTIONS FOLLOWING ALTERATIONS OF RETINAL INPUT AND BRAIN DAMAGE

    Directory of Open Access Journals (Sweden)

    LAURA RESTANI

    2016-11-01

    Full Text Available Vision is a very important sensory modality in humans. Visual disorders are numerous and arise from diverse and complex causes. Deficits in visual function are highly disabling from a social point of view and, in addition, impose a considerable economic burden. For all these reasons there is an intense effort by the scientific community to gather knowledge on the mechanisms of visual deficits and to find possible new strategies for recovery and treatment. In this review we focus on an important and sometimes neglected player in visual function, the corpus callosum (CC). The CC is the major white matter structure in the brain and is involved in information processing between the two hemispheres. In particular, visual callosal connections interconnect homologous areas of the visual cortices, binding together the two halves of the visual field. This interhemispheric communication plays a significant role in visual cortical output. Here, we will first review essential literature on the physiology of the callosal connections in normal vision. The available data support the view that the callosum contributes both excitation and inhibition to the target hemisphere, with a dynamic adaptation to the strength of the incoming visual input. Next, we will focus on data showing how callosal connections may sense visual alterations and respond to the classical paradigm for the study of visual plasticity, i.e., monocular deprivation (MD). This is a prototypical model for the study of callosal plasticity in pathological conditions (e.g., strabismus and amblyopia) characterized by unbalanced input from the two eyes. We will also discuss findings of callosal alterations in blind subjects. Notably, we will discuss data showing that interhemispheric transfer mediates recovery of visual responsiveness following cortical damage. Finally, we will provide an overview of how dysfunction of callosal projections could contribute to pathologies such as neglect and occipital

  9. Ankylosing Spondylitis and Posture Control: The Role of Visual Input

    Science.gov (United States)

    De Nunzio, Alessandro Marco; Iervolino, Salvatore; Zincarelli, Carmela; Di Gioia, Luisa; Rengo, Giuseppe; Multari, Vincenzo; Peluso, Rosario; Di Minno, Matteo Nicola Dario; Pappone, Nicola

    2015-01-01

    Objectives. To assess motor control during quiet stance in patients with established ankylosing spondylitis (AS) and to evaluate the effect of visual input on the maintenance of a quiet posture. Methods. 12 male AS patients (mean age 50.1 ± 13.2 years) and 12 matched healthy subjects performed 2 sessions of 3 trials in quiet stance, with eyes open (EO) and with eyes closed (EC), on a baropodometric platform. The oscillation of the centre of feet pressure (CoP) was acquired. Indices of stability and balance control were assessed by the sway path (SP) of the CoP, the frequency bandwidth (FB1) that includes 80% of the area under the amplitude spectrum, the mean amplitude of the peaks (MP) of the sway density curve (SDC), and the mean distance (MD) between 2 peaks of the SDC. Results. In severe AS patients, the MD between two peaks of the SDC and the SP of the centre of feet pressure were significantly higher than in controls during both EO and EC conditions. The MP was significantly reduced only in the EC condition. Conclusions. Ankylosing spondylitis exerts a negative effect on postural stability that cannot be compensated for by visual input. Our findings may be useful in the rehabilitative management of the increased risk of falling in AS. PMID:25821831
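    The sway path (SP) index used above has a simple operational definition: the cumulative distance travelled by the CoP over a trial. A minimal sketch of that computation (the function name and sample data are illustrative, not taken from the paper):

```python
import numpy as np

def sway_path(cop_xy):
    """Sway path: total length of the centre-of-pressure trajectory,
    i.e. the sum of Euclidean distances between consecutive samples."""
    cop_xy = np.asarray(cop_xy, dtype=float)
    steps = np.diff(cop_xy, axis=0)               # displacement between samples
    return float(np.sqrt((steps ** 2).sum(axis=1)).sum())

# A unit square traced once has a path length of 4.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(sway_path(square))  # 4.0
```

    For a fixed trial duration and sampling rate, a larger sway path indicates more postural sway, which is why the AS patients' higher SP values are read as poorer stability.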

  10. Ankylosing Spondylitis and Posture Control: The Role of Visual Input

    Directory of Open Access Journals (Sweden)

    Alessandro Marco De Nunzio

    2015-01-01

    Full Text Available Objectives. To assess motor control during quiet stance in patients with established ankylosing spondylitis (AS) and to evaluate the effect of visual input on the maintenance of a quiet posture. Methods. 12 male AS patients (mean age 50.1 ± 13.2 years) and 12 matched healthy subjects performed 2 sessions of 3 trials in quiet stance, with eyes open (EO) and with eyes closed (EC), on a baropodometric platform. The oscillation of the centre of feet pressure (CoP) was acquired. Indices of stability and balance control were assessed by the sway path (SP) of the CoP, the frequency bandwidth (FB1) that includes 80% of the area under the amplitude spectrum, the mean amplitude of the peaks (MP) of the sway density curve (SDC), and the mean distance (MD) between 2 peaks of the SDC. Results. In severe AS patients, the MD between two peaks of the SDC and the SP of the centre of feet pressure were significantly higher than in controls during both EO and EC conditions. The MP was significantly reduced only in the EC condition. Conclusions. Ankylosing spondylitis exerts a negative effect on postural stability that cannot be compensated for by visual input. Our findings may be useful in the rehabilitative management of the increased risk of falling in AS.

  11. A special role for binocular visual input during development and as a component of occlusion therapy for treatment of amblyopia.

    Science.gov (United States)

    Mitchell, Donald E

    2008-01-01

    To review work on animal models of deprivation amblyopia that points to a special role for binocular visual input in the development of spatial vision and as a component of occlusion (patching) therapy for amblyopia. The studies reviewed employ behavioural methods to measure the effects of various early experiential manipulations on the development of the visual acuity of the two eyes. Short periods of concordant binocular input, if continuous, can offset much longer daily periods of monocular deprivation, allowing the development of normal visual acuity in both eyes. It appears that the visual system does not weigh all visual input equally in terms of its ability to shape visual development, but instead places greater weight on concordant binocular exposure. Experimental models of patching therapy, imposed on animals in which amblyopia had been induced by a prior period of early monocular deprivation, indicate that the benefits of patching therapy may be only temporary and decline rapidly after patching is discontinued. However, when combined with critical amounts of binocular visual input each day, the benefits of patching can be both heightened and made permanent. Taken together with demonstrations of retained binocular connections in the visual cortex of monocularly deprived animals, these findings make a strong argument for including specific training of stereoscopic vision during part of the daily periods of binocular exposure that should be incorporated into any patching protocol for amblyopia.

  12. Ankylosing Spondylitis and Posture Control: The Role of Visual Input

    OpenAIRE

    De Nunzio, Alessandro Marco; Iervolino, Salvatore; Zincarelli, Carmela; Di Gioia, Luisa; Rengo, Giuseppe; Multari, Vincenzo; Peluso, Rosario; Di Minno, Matteo Nicola Dario; Pappone, Nicola

    2015-01-01

    Objectives. To assess the motor control during quiet stance in patients with established ankylosing spondylitis (AS) and to evaluate the effect of visual input on the maintenance of a quiet posture. Methods. 12 male AS patients (mean age 50.1 ± 13.2 years) and 12 matched healthy subjects performed 2 sessions of 3 trials in quiet stance, with eyes open (EO) and with eyes closed (EC) on a baropodometric platform. The oscillation of the centre of feet pressure (CoP) was acquired. Indices of stab...

  13. Visualization and verification of the input data in transport calculations with TORT

    International Nuclear Information System (INIS)

    Portulyan, A.; Belousov, S.

    2011-01-01

    A software package called VTSTO, applied for the visualization of three-dimensional objects, has been developed. The purpose of the package is to verify the input data describing the model of an object in a TORT code calculation. TORT calculates neutron and gamma fluxes in three-dimensional systems by the method of discrete ordinates and is an essential tool for calculating the radiation load on reactor structures. The software requires data on the reactor components, which are then processed and used to generate the graphic image. The object is presented in two planes. The user can choose and change the pair of sections defined by these planes, which is crucial for obtaining a view of the composition and structure of the reactor elements. The generated visualization thus allows evaluation of the model and, if necessary, correction of the input data for TORT. In this way the software significantly reduces the possibility of committing an error while modeling complex objects of the reactor system. In addition, the process of modeling becomes easier and faster. (full text)

  14. An Investigation of the Differential Effects of Visual Input Enhancement on the Vocabulary Learning of Iranian EFL Learners

    Directory of Open Access Journals (Sweden)

    Zhila Mohammadnia

    2014-07-01

    Full Text Available This study investigated the effect of visual input enhancement on the vocabulary learning of Iranian EFL learners. One hundred and thirty-two EFL learners from elementary, intermediate and advanced proficiency levels were assigned to six groups, two groups at each proficiency level, with one being an experimental and the other a control group. The study employed pretests, treatment reading texts, and posttests. T-tests were used for the analysis of the data. The results revealed positive effects for visual input enhancement at the advanced level, based on within-group and between-group comparisons. However, this positive effect was not found for the elementary and intermediate levels based on between-group comparisons. It was concluded that although visual input enhancement may have beneficial effects at the elementary and intermediate levels, it is much more effective for advanced EFL learners. This study may provide useful guiding principles for EFL teachers and syllabus designers.

  15. The workload implications of haptic displays in multi-display environments such as the cockpit: Dual-task interference of within-sense haptic inputs (tactile/proprioceptive) and between-sense inputs (tactile/proprioceptive/auditory/visual)

    OpenAIRE

    Castle, H

    2007-01-01

    Visual workload demand within the cockpit is reaching saturation, whereas the haptic sense (proprioceptive and tactile sensation) is relatively untapped, despite studies suggesting the benefits of haptic displays. Multiple resource theory (MRT) suggests that inputs from haptic displays will not interfere with inputs from visual or auditory displays. MRT is based on the premise that multisensory integration occurs only after unisensory processing. However, recent neuroscientific findings suggest that t...

  16. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, causing the tracker to drift severely away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Unlike previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on the 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
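    The representative-sample step rests on non-negative least squares (NNLS): candidate samples are weighted so that a non-negative combination of them reconstructs the target template, and the samples carrying the largest weights are treated as representative. A hedged sketch of that idea using SciPy (the data, dimensions, and variable names are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A: candidate samples (flattened patch features); b: target template.
rng = np.random.default_rng(0)
A = rng.random((100, 8))                      # 8 candidates, 100-dim features
b = A[:, [1, 4]] @ np.array([0.7, 0.3])       # template built from samples 1 and 4

# Solve min ||A w - b|| subject to w >= 0; large weights mark representative samples.
weights, residual = nnls(A, b)
representative = np.argsort(weights)[::-1][:2]
print(sorted(int(i) for i in representative))  # [1, 4]
```

    The non-negativity constraint matters here: it forces the template to be explained as an additive mixture of samples, so occluded or noisy candidates that do not resemble the template receive zero weight rather than negative, cancelling weights.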

  17. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2009-08-01

    Full Text Available Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author’s own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author’s Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  18. Enhanced Input in LCTL Pedagogy

    Directory of Open Access Journals (Sweden)

    Marilyn S. Manley

    2010-08-01

    Full Text Available Language materials for the more-commonly-taught languages (MCTLs) often include visual input enhancement (Sharwood Smith 1991, 1993), which makes use of typographical cues like bolding and underlining to enhance the saliency of targeted forms. For a variety of reasons, this paper argues that the use of enhanced input, both visual and oral, is especially important as a tool for the less-commonly-taught languages (LCTLs). As there continues to be a scarcity of teaching resources for the LCTLs, individual teachers must take it upon themselves to incorporate enhanced input into their own self-made materials. Specific examples of how to incorporate both visual and oral enhanced input into language teaching are drawn from the author’s own experiences teaching Cuzco Quechua. Additionally, survey results are presented from the author’s Fall 2010 semester Cuzco Quechua language students, supporting the use of both visual and oral enhanced input.

  19. Development of the RETRAN input model for Ulchin 3/4 visual system analyzer

    International Nuclear Information System (INIS)

    Lee, S. W.; Kim, K. D.; Lee, Y. J.; Lee, W. J.; Chung, B. D.; Jeong, J. J.; Hwang, M. K.

    2004-01-01

    As part of the Long-Term Nuclear R and D program, KAERI has developed the so-called Visual System Analyzer (ViSA) based on best-estimate codes. The MARS and RETRAN codes are used as the best-estimate codes for ViSA. Of these two codes, the RETRAN code is used for realistic analysis of non-LOCA transients and small-break loss-of-coolant accidents with break sizes of less than 3 inches in diameter. It was therefore necessary to develop a RETRAN input model for the Ulchin 3/4 plants (KSNP), and such a model has been developed. This report includes the input model requirements and the calculation note for the input data generation (see the Appendix). In order to confirm the validity of the input data, calculations were performed for a steady state at the 100% power operation condition, an inadvertent reactor trip, and an RCP trip. The results of the steady-state calculation agree well with the design data. The results of the other transient calculations are reasonable and consistent with those of other best-estimate calculations. Therefore, the RETRAN input data can be used as a base input deck for the RETRAN transient analyzer for Ulchin 3/4. Moreover, the Core Protection Calculator (CPC) module, which was modified by the Korea Electric Power Research Institute (KEPRI), is found to be well adapted to ViSA.

  20. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    Science.gov (United States)

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  1. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
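    The normalization computation that perception showed, but memory bypassed, is standardly modeled as divisive: each unit's driving input is divided by a semi-saturation constant plus the pooled activity of its neighbors, so a second stimulus suppresses the response to the first. A minimal sketch of that canonical model (the exponent and constant are illustrative choices, not the paper's fitted values):

```python
import numpy as np

def normalize(drive, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's response is its driving
    input raised to a power, divided by a semi-saturation constant plus the
    summed (pooled) activity of all units."""
    drive = np.asarray(drive, dtype=float)
    num = drive ** n
    return num / (sigma ** n + num.sum())

# Adding a second stimulus suppresses the response to the first:
alone = normalize([10.0, 0.0])[0]
paired = normalize([10.0, 10.0])[0]
print(alone > paired)  # True
```

    The behavioral prediction is exactly this suppression; the paper's finding is that it appears between two perceived stimuli but not between a remembered and a perceived one.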

  2. Designing Data Visualizations Representing Informational Relationships

    CERN Document Server

    Steele, Julie

    2011-01-01

    Data visualization is an efficient and effective medium for communicating large amounts of information, but the design process can often seem like an unexplainable creative endeavor. This concise book aims to demystify the design process by showing you how to use a linear decision-making process to encode your information visually. Delve into different kinds of visualization, including infographics and visual art, and explore the influences at work in each one. Then learn how to apply these concepts to your design process. Learn data visualization classifications, including explanatory, expl

  3. Check and visualization of input geometry data using the geometrical module of the Monte Carlo code MCU: WWER-440 pressure vessel dosimetry benchmarks

    International Nuclear Information System (INIS)

    Gurevich, M.; Zaritsky, S.; Osmera, B.; Mikus, J.

    1997-01-01

    The Monte Carlo method makes it possible to calculate neutron and photon fluxes without any simplification of the 3-D geometry of nuclear power and experimental devices. Accordingly, every mature Monte Carlo code includes a combinatorial geometry module and tools for geometry description, giving the possibility to describe very complex systems with a number of hierarchy levels of geometrical objects. Such codes usually have special modules for visual checking of the geometry input information. These geometry capabilities can be used in all cases where an accurate 3-D description of a complex geometry becomes a necessity. The description (specification) of benchmark experiments is one such case. Such an accurate and uniform description exposes all mistakes and ambiguities in the starting information of various kinds (drawings, reports, etc.). Usually the quality of different parts of the starting information, generally produced by different persons during different stages of the device's elaboration and operation, varies. After using the above-mentioned modules and tools, the resultant geometry description can be used as a standard for the device, and any type of figure of the device can be produced automatically. The detailed geometry description can also be used as input for different calculation models (not only Monte Carlo). The application of this method to the description of the WWER-440 mock-ups is presented in the report. The mock-ups were created on the LR-0 reactor (NRI), and the reactor vessel dosimetry benchmarks were developed on the basis of these mock-up experiments. The NCG-8 module of the Russian Monte Carlo code MCU was used; it is a universal, multilingual combinatorial geometry module. The MCU code was certified by the Russian Nuclear Regulatory Body. Almost all figures for the mentioned benchmark specifications were made with the MCU visualization code. The problem of the automatic generation of the

  4. Top-down inputs enhance orientation selectivity in neurons of the primary visual cortex during perceptual learning.

    Directory of Open Access Journals (Sweden)

    Samat Moldakarimov

    2014-08-01

    Full Text Available Perceptual learning has been used to probe the mechanisms of cortical plasticity in the adult brain. Feedback projections are ubiquitous in the cortex, but little is known about their role in cortical plasticity. Here we explore the hypothesis that learning visual orientation discrimination involves learning-dependent plasticity of top-down feedback inputs from higher cortical areas, serving a different function from plasticity due to changes in recurrent connections within a cortical area. In a Hodgkin-Huxley-based spiking neural network model of visual cortex, we show that modulation of feedback inputs to V1 from higher cortical areas results in shunting inhibition in V1 neurons, which changes their response properties. The orientation selectivity of V1 neurons is enhanced without changing orientation preference, preserving the topographic organization of V1. These results provide new insights into the mechanisms of plasticity in the adult brain, reconciling apparently inconsistent experiments and providing a new hypothesis for a functional role of feedback connections.
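    The key property of shunting inhibition invoked above is that it acts divisively on the response rather than subtractively, which can narrow a tuning curve without shifting its peak. A toy sketch of that effect (the Gaussian tuning curve and inhibition strength are our own illustrative choices, not the paper's Hodgkin-Huxley model):

```python
import numpy as np

theta = np.linspace(-90, 90, 181)               # stimulus orientation (degrees)
tuning = np.exp(-theta ** 2 / (2 * 30.0 ** 2))  # broad tuning, preferred = 0 deg

# Shunting inhibition divides the response; here its strength grows as the
# stimulus moves away from the preferred orientation.
g_inh = 2.0
sharpened = tuning / (1.0 + g_inh * (1.0 - tuning))

def hwhh(resp):
    """Half-width at half height of a symmetric tuning curve."""
    return theta[resp >= resp.max() / 2].max()

print(theta[np.argmax(sharpened)])     # 0.0: preferred orientation unchanged
print(hwhh(sharpened) < hwhh(tuning))  # True: tuning is narrower
```

    Because the division leaves the peak location untouched, the orientation map, and hence the topographic organization of V1, is preserved, which is the point the abstract emphasizes.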

  5. Visual Input Enhancement via Essay Coding Results in Deaf Learners' Long-Term Retention of Improved English Grammatical Knowledge

    Science.gov (United States)

    Berent, Gerald P.; Kelly, Ronald R.; Schmitz, Kathryn L.; Kenney, Patricia

    2009-01-01

    This study explored the efficacy of visual input enhancement, specifically "essay enhancement", for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the…

  6. Analysis and Visualization of Internet QA Bulletin Boards Represented as Heterogeneous Networks

    Science.gov (United States)

    Murata, Tsuyoshi; Ikeya, Tomoyuki

    Visualizing and analyzing the social interactions of CGM (Consumer Generated Media) are important for understanding overall activities on the internet. Social interactions are often represented as simple networks composed of homogeneous nodes and the edges between them. However, related entities in the real world are often not homogeneous. Such relations are naturally represented as heterogeneous networks composed of more than one kind of node and the edges connecting them. In the case of CGM, for example, users and their contents constitute the nodes of heterogeneous networks. There are related users (user communities) and related contents (content communities) in these heterogeneous networks. Discovering both kinds of communities and finding the correspondence among them will clarify their characteristics. This paper describes an attempt at visualizing and analyzing the social interactions of Yahoo! Chiebukuro (Japanese Yahoo! Answers). New criteria for measuring the correspondence between user communities and board communities are defined, and the characteristics of both communities are analyzed using these criteria.

  7. The Effectiveness of Visual Input Enhancement on the Noticing and L2 Development of the Spanish Past Tense

    Science.gov (United States)

    Loewen, Shawn; Inceoglu, Solène

    2016-01-01

    Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a…

  8. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution: What is already known on this subject? Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more

  9. Orientation selectivity of synaptic input to neurons in mouse and cat primary visual cortex.

    Science.gov (United States)

    Tan, Andrew Y Y; Brown, Brandon D; Scholl, Benjamin; Mohanty, Deepankar; Priebe, Nicholas J

    2011-08-24

    Primary visual cortex (V1) is the site at which orientation selectivity emerges in mammals: visual thalamus afferents to V1 respond equally to all stimulus orientations, whereas their target V1 neurons respond selectively to stimulus orientation. The emergence of orientation selectivity in V1 has long served as a model for investigating cortical computation. Recent evidence for orientation selectivity in mouse V1 opens cortical computation to dissection by genetic and imaging tools, but also raises two essential questions: (1) How does orientation selectivity in mouse V1 neurons compare with that in previously described species? (2) What is the synaptic basis for orientation selectivity in mouse V1? A comparison of orientation selectivity in mouse and in cat, where such measures have traditionally been made, reveals that orientation selectivity in mouse V1 is weaker than in cat V1, but that spike threshold plays a similar role in narrowing selectivity between membrane potential and spike rate. To uncover the synaptic basis for orientation selectivity, we made whole-cell recordings in vivo from mouse V1 neurons, comparing neuronal input selectivity (based on membrane potential, synaptic excitation, and synaptic inhibition) to output selectivity based on spiking. We found that a neuron's excitatory and inhibitory inputs are selective for the same stimulus orientations as its membrane potential response, and that inhibitory selectivity is not broader than excitatory selectivity. Inhibition has different dynamics than excitation, adapting more rapidly. In neurons with temporally modulated responses, the timing of excitation and inhibition differed between mice and cats.
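    Orientation selectivity of the kind compared across species here is commonly quantified as one minus the circular variance: the length of the response-weighted mean vector on the doubled-angle circle (doubled because orientation has a 180-degree period). A sketch of that standard measure (not necessarily the paper's exact formula; the example responses are invented):

```python
import numpy as np

def osi(orientations_deg, responses):
    """Orientation selectivity as 1 - circular variance: the length of the
    response-weighted mean vector on the doubled-angle circle."""
    th = 2.0 * np.deg2rad(np.asarray(orientations_deg, dtype=float))
    r = np.asarray(responses, dtype=float)
    return float(abs((r * np.exp(1j * th)).sum() / r.sum()))

oris = np.arange(0, 180, 22.5)            # 8 tested orientations
tuned = np.where(oris == 90, 10.0, 1.0)   # cell responding strongly at 90 deg
flat = np.ones_like(oris)                 # untuned cell
print(osi(oris, tuned) > osi(oris, flat))  # True
```

    The measure is 0 for a perfectly untuned cell and approaches 1 for a sharply tuned one, so "weaker selectivity in mouse than cat V1" corresponds to systematically lower values of this index.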

  10. Impaired integration of object knowledge and visual input in a case of ventral simultanagnosia with bilateral damage to area V4.

    Science.gov (United States)

    Leek, E Charles; d'Avossa, Giovanni; Tainturier, Marie-Josèphe; Roberts, Daniel J; Yuen, Sung Lai; Hu, Mo; Rafal, Robert

    2012-01-01

    This study examines how brain damage can affect the cognitive processes that support the integration of sensory input and prior knowledge during shape perception. It is based on the first detailed study of acquired ventral simultanagnosia, which was found in a patient (M.T.) with posterior occipitotemporal lesions encompassing V4 bilaterally. Despite showing normal object recognition for single items in both accuracy and response times (RTs), and intact low-level vision assessed across an extensive battery of tests, M.T. was impaired in object identification with overlapping figures displays. Task performance was modulated by familiarity: Unlike controls, M.T. was faster with overlapping displays of abstract shapes than with overlapping displays of common objects. His performance with overlapping common object displays was also influenced by both the semantic relatedness and visual similarity of the display items. These findings challenge claims that visual perception is driven solely by feedforward mechanisms and show how brain damage can selectively impair high-level perceptual processes supporting the integration of stored knowledge and visual sensory input.

  11. Bottom-up and Top-down Input Augment the Variability of Cortical Neurons

    Science.gov (United States)

    Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.

    2016-01-01

    Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459
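A toy version of this mechanism can be sketched as a doubly stochastic spike count (this is my own illustrative construction, not the authors' model code): each trial's count is Poisson conditioned on a gain fluctuation shared across the input pool, so silencing part of the pool lowers the mean drive and, with it, the super-Poisson trial-to-trial variability (Fano factor).

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 5000
# Shared within-pool fluctuation (mean 1), mimicking synchronous input.
gain = rng.gamma(shape=10.0, scale=0.1, size=trials)

def fano(counts):
    """Trial-to-trial variability: variance / mean of spike counts."""
    return counts.var() / counts.mean()

counts_intact = rng.poisson(100.0 * gain)            # full input pool
counts_silenced = rng.poisson(50.0 * gain)           # half the pool inactivated
# Fano factor is roughly 1 + mean_count * CV^2(gain), so silencing input
# reduces variability even though the gain fluctuations are unchanged.
```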

  12. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    Science.gov (United States)

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within the ranges of the classical theta (4-7 Hz), alpha (8-13 Hz) and beta (14-20 Hz) bands using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias.

  13. Effect of rehabilitation worker input on visual function outcomes in individuals with low vision: study protocol for a randomised controlled trial.

    Science.gov (United States)

    Acton, Jennifer H; Molik, Bablin; Binns, Alison; Court, Helen; Margrain, Tom H

    2016-02-24

    Visual Rehabilitation Officers help people with a visual impairment maintain their independence. This intervention adopts a flexible, goal-centred approach, which may include training in mobility, use of optical and non-optical aids, and performance of activities of daily living. Although Visual Rehabilitation Officers are an integral part of the low vision service in the United Kingdom, evidence that they are effective is lacking. The purpose of this exploratory trial is to estimate the impact of a Visual Rehabilitation Officer on self-reported visual function, psychosocial and quality-of-life outcomes in individuals with low vision. In this exploratory, assessor-masked, parallel group, randomised controlled trial, participants will be allocated either to receive home visits from a Visual Rehabilitation Officer (n = 30) or to a waiting list control group (n = 30) in a 1:1 ratio. Adult volunteers with a visual impairment, who have been identified as needing rehabilitation officer input by a social worker, will take part. Those with an urgent need for a Visual Rehabilitation Officer or who have a cognitive impairment will be excluded. The primary outcome measure will be self-reported visual function (48-item Veterans Affairs Low Vision Visual Functioning Questionnaire). Secondary outcome measures will include psychological and quality-of-life metrics: the Patient Health Questionnaire (PHQ-9), the Warwick-Edinburgh Mental Well-being Scale (WEMWBS), the Adjustment to Age-related Visual Loss Scale (AVL-12), the Standardised Health-related Quality of Life Questionnaire (EQ-5D) and the UCLA Loneliness Scale. The interviewer collecting the outcomes will be masked to the group allocations. The analysis will be undertaken on a complete case and intention-to-treat basis. Analysis of covariance (ANCOVA) will be applied to follow-up questionnaire scores, with the baseline score as a covariate. 
This trial is expected to provide robust effect size estimates of the intervention.
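The planned analysis, ANCOVA with the follow-up questionnaire score regressed on the baseline score plus a treatment indicator, can be sketched as an ordinary least-squares fit. All numbers below are invented for illustration (this is a protocol; no trial data exist yet), including the hypothetical effect size.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60                                               # 30 per arm, as in the protocol
group = np.repeat([0.0, 1.0], n // 2)                # 0 = waiting list, 1 = rehabilitation
baseline = rng.normal(50.0, 10.0, n)                 # invented baseline questionnaire scores
true_effect = 4.0                                    # hypothetical intervention effect
follow_up = 5.0 + 0.8 * baseline + true_effect * group + rng.normal(0.0, 3.0, n)

# ANCOVA as a linear model: follow_up ~ intercept + baseline + group
X = np.column_stack([np.ones(n), baseline, group])
coef, *_ = np.linalg.lstsq(X, follow_up, rcond=None)
adjusted_effect = coef[2]                            # baseline-adjusted group difference
```

Adjusting for the baseline score absorbs between-subject variance, which is why ANCOVA on follow-up scores is generally more powerful than comparing raw change scores.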

  14. The primary visual cortex in the neural circuit for visual orienting

    Science.gov (United States)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical input directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting behaviors such as gaze shifts and head turns. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations through higher V1 responses. The conspicuous locations are usually, but not always, where the visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.

  15. Handwriting generates variable visual input to facilitate symbol learning

    Science.gov (United States)

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then change neural systems. We addressed these issues by measuring the performance of 5-year-old children on a categorization task involving novel, Greek symbols across six different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception of both variable and similar forms. Comparisons across the six conditions (N=72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions in which similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output, supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  16. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    Science.gov (United States)

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
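The paper's core contrast, LGN-like spike trains that are more regular than Poisson, can be illustrated with a minimal noisy leaky integrate-and-fire simulation (generic textbook parameters, not the authors' model): the LIF train's interspike-interval coefficient of variation (CV) sits well below the CV of 1 expected for a rate-matched Poisson process.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, tau = 0.1, 20.0                                  # time step and membrane constant (ms)
v_rest, v_reset, v_th = -70.0, -65.0, -50.0          # mV
drive, noise_sd = 40.0, 1.0                          # suprathreshold mean drive and noise (mV)
steps = int(5000.0 / dt)                             # 5 s of simulated time
noise = noise_sd * np.sqrt(dt) * rng.standard_normal(steps)

v, spike_times = v_rest, []
for i in range(steps):
    v += (dt / tau) * (drive - (v - v_rest)) + noise[i]
    if v >= v_th:                                    # threshold crossing: spike and reset
        spike_times.append(i * dt)
        v = v_reset

isi_lif = np.diff(spike_times)
cv_lif = isi_lif.std() / isi_lif.mean()              # regular firing: CV well below 1
# Rate-matched Poisson train: exponential ISIs, CV close to 1.
isi_poisson = rng.exponential(isi_lif.mean(), size=isi_lif.size)
cv_poisson = isi_poisson.std() / isi_poisson.mean()
```

Feeding many such regular trains into a model V1 cell, rather than Poisson trains of the same rate, is the manipulation the paper shows to sharpen orientation selectivity.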

  17. Postural Control in Bilateral Vestibular Failure: Its Relation to Visual, Proprioceptive, Vestibular, and Cognitive Input

    Science.gov (United States)

    Sprenger, Andreas; Wojak, Jann F.; Jandl, Nico M.; Helmchen, Christoph

    2017-01-01

    Patients with bilateral vestibular failure (BVF) suffer from postural and gait unsteadiness with an increased risk of falls. The aim of this study was to elucidate the differential role of otolith, semicircular canal (SSC), visual, proprioceptive, and cognitive influences on the postural stability of BVF patients. Center-of-pressure displacements were recorded by posturography under six conditions: target visibility; tonic head positions in the pitch plane; horizontal head shaking; sensory deprivation; dual task; and tandem stance. Between-group analysis revealed larger postural sway in BVF patients on eye closure; but with the eyes open, BVF did not differ from healthy controls (HCs). Head tilts and horizontal head shaking increased sway but did not differ between groups. In the dual task condition, BVF patients maintained posture indistinguishable from controls. On foam and tandem stance, postural sway was larger in BVF, even with the eyes open. The best predictor for the severity of bilateral vestibulopathy was standing on foam with eyes closed. Postural control of our BVF patients was indistinguishable from HCs once visual and proprioceptive feedback is provided. This distinguishes them from patients with vestibulo-cerebellar disorders or functional dizziness. It confirms previous reports and explains that postural unsteadiness of BVF patients can be missed easily if not examined under conditions of visual and/or proprioceptive deprivation. In fact, the best predictor for vestibular hypofunction (VOR gain) was examining patients standing on foam with the eyes closed. Postural sway in that condition increased with the severity of vestibular impairment but not with disease duration. In the absence of visual control, impaired otolith input destabilizes BVF with head retroflexion. Stimulating deficient SSC does not distinguish patients from controls, possibly reflecting a shift of intersensory weighting toward proprioceptive-guided postural control. 
Accordingly, proprioceptive

  19. Postural Control in Bilateral Vestibular Failure: Its Relation to Visual, Proprioceptive, Vestibular, and Cognitive Input

    Directory of Open Access Journals (Sweden)

    Andreas Sprenger

    2017-09-01

    Patients with bilateral vestibular failure (BVF) suffer from postural and gait unsteadiness with an increased risk of falls. The aim of this study was to elucidate the differential role of otolith, semicircular canal (SSC), visual, proprioceptive, and cognitive influences on the postural stability of BVF patients. Center-of-pressure displacements were recorded by posturography under six conditions: target visibility; tonic head positions in the pitch plane; horizontal head shaking; sensory deprivation; dual task; and tandem stance. Between-group analysis revealed larger postural sway in BVF patients on eye closure; but with the eyes open, BVF did not differ from healthy controls (HCs). Head tilts and horizontal head shaking increased sway but did not differ between groups. In the dual task condition, BVF patients maintained posture indistinguishable from controls. On foam and tandem stance, postural sway was larger in BVF, even with the eyes open. The best predictor for the severity of bilateral vestibulopathy was standing on foam with eyes closed. Postural control of our BVF patients was indistinguishable from HCs once visual and proprioceptive feedback is provided. This distinguishes them from patients with vestibulo-cerebellar disorders or functional dizziness. It confirms previous reports and explains that postural unsteadiness of BVF patients can be missed easily if not examined under conditions of visual and/or proprioceptive deprivation. In fact, the best predictor for vestibular hypofunction (VOR gain) was examining patients standing on foam with the eyes closed. Postural sway in that condition increased with the severity of vestibular impairment but not with disease duration. In the absence of visual control, impaired otolith input destabilizes BVF with head retroflexion. Stimulating deficient SSC does not distinguish patients from controls, possibly reflecting a shift of intersensory weighting toward proprioceptive-guided postural control. 
Accordingly

  20. Removing Visual Bias in Filament Identification: A New Goodness-of-fit Measure

    Science.gov (United States)

    Green, C.-E.; Cunningham, M. R.; Dawson, J. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-05-01

    Different combinations of input parameters to filament identification algorithms, such as DisPerSE and FilFinder, produce numerous different output skeletons. The skeletons are a one-pixel-wide representation of the filamentary structure in the original input image. However, these output skeletons may not necessarily be a good representation of that structure. Furthermore, one skeleton may be a better representation than another. Previously, there has been no mathematical “goodness-of-fit” measure to compare output skeletons to the input image. Thus far this has been assessed visually, introducing visual bias. We propose the application of the mean structural similarity index (MSSIM) as a mathematical goodness-of-fit measure. We describe the use of the MSSIM to find the output skeletons that are the most mathematically similar to the original input image (the optimum, or “best,” skeletons) for a given algorithm, and independently of the algorithm. This measure makes possible systematic parameter studies, aimed at finding the subset of input parameter values returning optimum skeletons. It can also be applied to the output of non-skeleton-based filament identification algorithms, such as the Hessian matrix method. The MSSIM removes the need to visually examine thousands of output skeletons, and eliminates the visual bias, subjectivity, and limited reproducibility inherent in that process, representing a major improvement upon existing techniques. Importantly, it also allows further automation in the post-processing of output skeletons, which is crucial in this era of “big data.”
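The similarity measure at the heart of this approach can be sketched in a few lines. The snippet below implements a single-window (global) SSIM with the standard constants; the MSSIM used in the paper instead averages SSIM over local sliding windows (as in, e.g., scikit-image's `structural_similarity`), and the toy images are invented for illustration.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM; the paper's MSSIM averages this over local windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Toy "input image" and two candidate one-pixel-wide skeletons (illustrative only).
image = np.zeros((8, 8)); image[3, :] = 1.0          # horizontal filament
good_skel = np.zeros((8, 8)); good_skel[3, :] = 1.0  # skeleton tracing the filament
bad_skel = np.zeros((8, 8)); bad_skel[:, 3] = 1.0    # misoriented skeleton
# Ranking candidate skeletons by (M)SSIM against the input image replaces
# the visual inspection step: the higher-scoring skeleton is the better fit.
```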

  1. Monocular Visual Deprivation Suppresses Excitability in Adult Human Visual Cortex

    DEFF Research Database (Denmark)

    Lou, Astrid Rosenstand; Madsen, Kristoffer Hougaard; Paulson, Olaf Bjarne

    2011-01-01

    The adult visual cortex maintains a substantial potential for plasticity in response to a change in visual input. For instance, transcranial magnetic stimulation (TMS) studies have shown that binocular deprivation (BD) increases the cortical excitability for inducing phosphenes with TMS. Here, we employed TMS to trace plastic changes in adult visual cortex before, during, and after 48 h of monocular deprivation (MD) of the right dominant eye. In healthy adult volunteers, MD-induced changes in visual cortex excitability were probed with paired-pulse TMS applied to the left and right occipital cortex... of visual deprivation has a substantial impact on experience-dependent plasticity of the human visual cortex.

  2. The changes in the relation of auditory and visual input activity between hemispheres analyzed by cartographic EEG in a child with hyperactivity syndrome

    Directory of Open Access Journals (Sweden)

    Radičević Zoran

    2015-01-01

    The paper discusses the changes in the relations of visual and auditory inputs between the hemispheres in a child with hyperactive syndrome, and their effects, which may lead to better attention engagement in auditory and visual information analysis. The method included the use of cartographic EEG and a clinical procedure in a 10-year-old boy with hyperactive syndrome and attention deficit disorder, who had theta dysfunction manifested in the standard EEG. Cartographic EEG was recorded on a Nihon Kohden EEG-1200K Neurofax apparatus in a longitudinal bipolar montage using the 10/20 international electrode positioning system. Impedance was maintained below 5 kΩ, with no more than 1 kΩ difference between electrodes. The low-frequency filter was set at 0.53 Hz and the high-frequency filter at 35 Hz. Recording was performed during a quiet period and during stimulation procedures with a speech and language basis. Standard EEG and neurofeedback (NFB) treatment indicated higher theta load and higher alpha-2 and beta-1 activity, measured in the cartographic EEG performed after the relative failure of the NFB treatment. After this, NFB treatment was applied for six months such that, while the boy was reading, visual input was enhanced to the left hemisphere and auditory input was reduced to the right hemisphere. Repeated EEG mapping showed significant improvement, both in the EEG findings and in the attention, behavioural and learning disorders. The paper discusses some aspects of learning, attention and behaviour in relation to changes in the standard EEG, and especially in the cartographic EEG and NFB findings.

  3. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    Science.gov (United States)

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention.

  4. Visually representing reality: aesthetics and accessibility aspects

    Science.gov (United States)

    van Nes, Floris L.

    2009-02-01

    This paper gives an overview of the visual representation of reality with three imaging technologies: painting, photography and electronic imaging. The contribution of the important image aspects (called dimensions hereafter), such as color, fine detail and total image size, to the degree of reality and aesthetic value of the rendered image is described for each of these technologies. Whereas quite a few of these dimensions (or approximations, or even mere suggestions thereof) were already present in prehistoric paintings, apparent motion and true stereoscopic vision were added only recently, unfortunately also introducing accessibility and image safety issues. Efforts are made to reduce the incidence of undesirable biomedical effects such as photosensitive seizures (PSS), visually induced motion sickness (VIMS), and visual fatigue from stereoscopic images (VFSI) by international standardization of the image parameters to be avoided by image providers and display manufacturers. The history of this type of standardization, from an International Workshop Agreement to a strategy for accomplishing effective international standardization by ISO, is treated at some length. One of the difficulties to be mastered in this process is the reconciliation of the sometimes opposing interests of vulnerable persons, thrill-seeking viewers, creative video designers and the game industry.

  5. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education

    Directory of Open Access Journals (Sweden)

    Christos Vaitsis

    2014-11-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches: (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data.

  6. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education.

    Science.gov (United States)

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. 
It appears to be a useful tool to explore such data.

  7. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  8. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    Science.gov (United States)

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
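The echo analysis rests on cross-correlating the random luminance sequence with the recorded signal. A minimal synthetic sketch (the 10 Hz impulse response, sampling rate and noise level are all invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                             # sampling rate (Hz)
n = 2500                                             # 10 s of stimulation
stim = rng.standard_normal(n)                        # random luminance sequence

# Invented "echo" impulse response: decaying ~10 Hz (100 ms cycle) oscillation.
t = np.arange(int(1.0 * fs)) / fs
kernel = np.exp(-t / 0.5) * np.cos(2 * np.pi * 10 * t)
eeg = np.convolve(stim, kernel)[:n] + 0.1 * rng.standard_normal(n)

# Cross-correlate stimulus with signal at positive lags to recover the echo.
lags = np.arange(0, int(0.3 * fs))                   # 0-300 ms
xcorr = np.array([stim[: n - L] @ eeg[L:] / (n - L) for L in lags])
# xcorr oscillates with the ~100 ms period of the underlying response:
# positive at lag 0, negative near a half cycle (~50 ms), positive again at ~100 ms.
```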

  9. Haptic over visual information in the distribution of visual attention after tool-use in near and far space.

    Science.gov (United States)

    Park, George D; Reed, Catherine L

    2015-10-01

    Despite attentional prioritization for grasping space near the hands, tool-use appears to transfer attentional bias to the tool's end/functional part. The contributions of haptic and visual inputs to attentional distribution along a tool were investigated as a function of tool-use in near (Experiment 1) and far (Experiment 2) space. Visual attention was assessed with a 50/50, go/no-go, target discrimination task, while a tool was held next to targets appearing near the tool-occupied hand or tool-end. Target response times (RTs) and sensitivity (d-prime) were measured at target locations, before and after functional tool practice, for three conditions: (1) open-tool: tool-end visible (visual + haptic inputs), (2) hidden-tool: tool-end visually obscured (haptic input only), and (3) short-tool: stick missing the tool's length/end (control condition: hand occupied but no visual/haptic input). In near space, both open- and hidden-tool groups showed a tool-end attentional bias (faster RTs toward the tool-end) before practice; after practice, RTs near the hand improved. In far space, the open-tool group showed no bias before practice; after practice, target RTs near the tool-end improved. However, the hidden-tool group showed a consistent tool-end bias despite practice. The absence of effects in the short-tool group suggested that the hidden-tool group's results were specific to haptic inputs. In conclusion, (1) allocation of visual attention along a tool due to tool practice differs in near and far space, and (2) visual attention is drawn toward the tool's end even when visually obscured, suggesting haptic input provides sufficient information for directing attention along the tool.
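    Sensitivity in a go/no-go discrimination task of this kind is typically summarized as d-prime, the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the trial counts are made-up examples, and the log-linear correction shown is one common convention, not necessarily the one used in this study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps rates strictly between 0 and 1 so the
    inverse normal CDF stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 45 hits / 5 misses on go trials, 10 false alarms / 40 correct
# rejections on no-go trials.
dp = d_prime(45, 5, 10, 40)
```

    When hit and false-alarm rates are equal (chance performance), d' is 0; larger values indicate better discrimination of targets from distractors.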

  10. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that children, like adults, code facial expression in a norm-based way. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  11. Visual and proprioceptive interaction in patients with bilateral vestibular loss.

    Science.gov (United States)

    Cutfield, Nicholas J; Scott, Gregory; Waldman, Adam D; Sharp, David J; Bronstein, Adolfo M

    2014-01-01

    Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. High-level (100 Hz) and low-level (30 Hz) control stimuli were applied over the left splenius capitis; only the high-frequency stimulus generates a significant proprioceptive signal. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high-level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions, but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular

  12. The effectiveness of visual input enhancement on the noticing and L2 development of the Spanish past tense

    Directory of Open Access Journals (Sweden)

    Shawn Loewen

    2016-03-01

    Textual manipulation is a common pedagogic tool used to emphasize specific features of a second language (L2) text, thereby facilitating noticing and, ideally, second language development. Visual input enhancement has been used to investigate the effects of highlighting specific grammatical structures in a text. The current study uses a quasi-experimental design to determine the extent to which textual manipulation increases (a) learners’ perception of targeted forms and (b) their knowledge of the forms. Input enhancement was used to highlight the Spanish preterit and imperfect verb forms, and an eye tracker measured the frequency and duration of participants’ fixation on the targeted items. In addition, pretests and posttests of the Spanish past tense provided information about participants’ knowledge of the targeted forms. Results indicate that learners were aware of the highlighted grammatical forms in the text; however, there was no difference in the amount of attention between the enhanced and unenhanced groups. In addition, both groups improved in their knowledge of the L2 forms; however, again, there was no differential improvement between the two groups.

  13. Rhythm information represented in the fronto-parieto-cerebellar motor system.

    Science.gov (United States)

    Konoike, Naho; Kotozaki, Yuka; Miyachi, Shigehiro; Miyauchi, Carlos Makoto; Yomogida, Yukihito; Akimoto, Yoritaka; Kuraoka, Koji; Sugiura, Motoaki; Kawashima, Ryuta; Nakamura, Katsuki

    2012-10-15

    Rhythm is an essential element of human culture, particularly in language and music. To acquire language or music, we have to perceive the sensory inputs, organize them into structured sequences as rhythms, actively hold the rhythm information in mind, and use the information when we reproduce or mimic the same rhythm. Previous brain imaging studies have elucidated brain regions related to the perception and production of rhythms. However, the neural substrates involved in the working memory of rhythm remain unclear. In addition, little is known about the processing of rhythm information from non-auditory inputs (visual or tactile). Therefore, we measured brain activity by functional magnetic resonance imaging while healthy subjects memorized and reproduced auditory and visual rhythmic information. The inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum exhibited significant activations during both encoding and retrieving rhythm information. In addition, most of these areas exhibited significant activation also during the maintenance of rhythm information. All of these regions functioned in the processing of auditory and visual rhythms. The bilateral inferior parietal lobule, inferior frontal gyrus, supplementary motor area, and cerebellum are thought to be essential for motor control. When we listen to a certain rhythm, we are often stimulated to move our body, which suggests the existence of a strong interaction between rhythm processing and the motor system. Here, we propose that rhythm information may be represented and retained as information about bodily movements in the supra-modal motor brain system. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Oscillatory Mechanisms of Stimulus Processing and Selection in the Visual and Auditory Systems: State-of-the-Art, Speculations and Suggestions

    Directory of Open Access Journals (Sweden)

    Benedikt Zoefel

    2017-05-01

    All sensory systems need to continuously prioritize and select incoming stimuli in order to avoid overflow or interference, and provide a structure to the brain's input. However, the characteristics of this input differ across sensory systems; therefore, and as a direct consequence, each sensory system might have developed specialized strategies to cope with the continuous stream of incoming information. Neural oscillations are intimately connected with this selection process, as they can be used by the brain to rhythmically amplify or attenuate input and therefore represent an optimal tool for stimulus selection. In this paper, we focus on oscillatory processes for stimulus selection in the visual and auditory systems. We point out both commonalities and differences between the two systems and develop several hypotheses, inspired by recently published findings: (1) The rhythmic component in its input is crucial for the auditory, but not for the visual system. The alignment between oscillatory phase and rhythmic input (phase entrainment) is therefore an integral part of stimulus selection in the auditory system, whereas the visual system merely adjusts its phase to upcoming events, without the need for any rhythmic component. (2) When input is unpredictable, the visual system can maintain its oscillatory sampling, whereas the auditory system switches to a different, potentially internally oriented, “mode” of processing that might be characterized by alpha oscillations. (3) Visual alpha can be divided into a faster occipital alpha (10 Hz) and a slower frontal alpha (7 Hz) that critically depends on attention.

  15. Modeling recognition memory using the similarity structure of natural input

    NARCIS (Netherlands)

    Lacroix, J.P.W.; Murre, J.M.J.; Postma, E.O.; van den Herik, H.J.

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During

  16. Visual gravitational motion and the vestibular system in humans

    Directory of Open Access Journals (Sweden)

    Francesco Lacquaniti

    2013-12-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  17. Visual gravitational motion and the vestibular system in humans.

    Science.gov (United States)

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  18. Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.

    Science.gov (United States)

    Acha, Joana; Perea, Manuel

    2010-01-01

    Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn, in part, from the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions; note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.

  19. Rehabilitation of balance-impaired stroke patients through audio-visual biofeedback

    DEFF Research Database (Denmark)

    Gheorghe, Cristina; Nissen, Thomas; Juul Rosengreen Christensen, Daniel

    2015-01-01

    This study explored how audio-visual biofeedback influences the physical balance of seven balance-impaired stroke patients, aged 33–70 years. The setup included a bespoke balance board and a music rhythm game. The procedure was designed as follows: (1) a control group who performed a balance training exercise without any technological input, (2) a visual biofeedback group, performing via visual input, and (3) an audio-visual biofeedback group, performing via audio and visual input. Results retrieved from comparisons between the data sets (2) and (3) suggested superior postural stability…

  20. Posterior Inferotemporal Cortex Cells Use Multiple Input Pathways for Shape Encoding.

    Science.gov (United States)

    Ponce, Carlos R; Lomber, Stephen G; Livingstone, Margaret S

    2017-05-10

    In the macaque monkey brain, posterior inferior temporal (PIT) cortex cells contribute to visual object recognition. They receive concurrent inputs from visual areas V4, V3, and V2. We asked how these different anatomical pathways shape PIT response properties by deactivating them while monitoring PIT activity in two male macaques. We found that cooling of V4 or V2|3 did not lead to consistent changes in population excitatory drive; however, population pattern analyses showed that V4-based pathways were more important than V2|3-based pathways. We did not find any image features that predicted decoding accuracy differences between both interventions. Using the HMAX hierarchical model of visual recognition, we found that different groups of simulated "PIT" units with different input histories (lacking "V2|3" or "V4" input) allowed for comparable levels of object-decoding performance and that removing a large fraction of "PIT" activity resulted in similar drops in performance as in the cooling experiments. We conclude that distinct input pathways to PIT relay similar types of shape information, with V1-dependent V4 cells providing more quantitatively useful information for overall encoding than cells in V2 projecting directly to PIT. SIGNIFICANCE STATEMENT Convolutional neural networks are the best models of the visual system, but most emphasize input transformations across a serial hierarchy akin to the primary "ventral stream" (V1 → V2 → V4 → IT). However, the ventral stream also comprises parallel "bypass" pathways: V1 also connects to V4, and V2 to IT. To explore the advantages of mixing long and short pathways in the macaque brain, we used cortical cooling to silence inputs to posterior IT and compared the findings with an HMAX model with parallel pathways. Copyright © 2017 the authors 0270-6474/17/375019-16$15.00/0.

  1. NewsPaperBox - Online News Space: a visual model for representing the social space of a website

    Directory of Open Access Journals (Sweden)

    Selçuk Artut

    2010-02-01

    NewsPaperBox propounds an alternative visual model utilizing the treemap algorithm to represent the collective use of a website that evolves in response to user interaction. While the technology currently exists to track various user behaviors, such as the number of clicks and the duration of stay on a given website, these statistics are not yet employed to influence the visual representation of that site's design in real time. In that sense, this project proposes an alternative model in which the representational outlook of a website is developed through the collaborations and competitions of its global users. This paper presents the experience of cyberspace as a generative process driven by effective user participation.
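    The treemap layout that such a visualization builds on can be sketched in a few lines. This slice-and-dice variant is a simplified stand-in (the actual NewsPaperBox algorithm and its inputs are not specified in the abstract); it partitions a canvas into tiles whose areas are proportional to per-page activity weights:

```python
def slice_and_dice(weights, x, y, w, h, horizontal=True):
    """Partition rectangle (x, y, w, h) into tiles whose widths (or heights)
    are proportional to the given weights. Returns (x, y, w, h) tuples."""
    total = sum(weights)
    rects, offset = [], 0.0
    for wt in weights:
        frac = wt / total
        if horizontal:
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects

# Four pages weighted by hypothetical click counts, laid out on a 100x60 canvas.
tiles = slice_and_dice([40, 30, 20, 10], 0, 0, 100, 60)
```

    A production treemap (e.g., the squarified variant) would recurse into nested page groups and alternate the split direction to keep tiles close to square; the proportional-partition step above is the core idea.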

  2. Modeling Recognition Memory Using the Similarity Structure of Natural Input

    Science.gov (United States)

    Lacroix, Joyca P. W.; Murre, Jaap M. J.; Postma, Eric O.; van den Herik, H. Jaap

    2006-01-01

    The natural input memory (NIM) model is a new model for recognition memory that operates on natural visual input. A biologically informed perceptual preprocessing method takes local samples (eye fixations) from a natural image and translates these into a feature-vector representation. During recognition, the model compares incoming preprocessed…

  3. Ageing well? A cross-country analysis of the way older people are visually represented on websites of organizations for older people

    Directory of Open Access Journals (Sweden)

    Eugène Loos

    2017-12-01

    The ‘ageing well’ discourse advances the idea of making older people responsible for their capability to stay healthy and active. In the context of an increasingly ageing population, which poses several challenges to countries’ governments, this discourse has become dominant in Europe. We explore the way older people are visually represented on websites of organizations for older people in seven European countries (Finland, the UK, the Netherlands, Spain, Italy, Poland and Romania), using an analytical approach based on visual content analysis, inspired by the dimensional model of national cultural differences from Hofstede (1991; 2001; 2011). We used two of the five Hofstede dimensions: Individualism/Collectivism (IDV) and Masculinity/Femininity (MAS). The results demonstrated that in all seven countries older people are mostly visually represented as healthy/active, which reflects a dominant ‘ageing well’ discourse in Europe. The results also demonstrated that in most cases older people tend to be represented together with others, which is not consonant with the dominant ‘ageing well’ discourse in Europe. A last finding was that the visual representation of older people is in about half of the cases in line with these Hofstede dimensions. We discuss the implications of these findings, claiming that the ‘ageing well’ discourse might lead to ‘visual ageism’. Organizations could keep this in mind when using pictures for their websites or in other media: they could use various kinds of pictures and avoid pictures of older people that stigmatize, marginalize or injure them. They could also look into the cultural situatedness and intersectional character of age relations and consider alternative strategies of both visibility and invisibility to talk with and about our ageing societies.

  4. Basin Visual Estimation Technique (BVET) and Representative Reach Approaches to Wadeable Stream Surveys: Methodological Limitations and Future Directions

    Science.gov (United States)

    Lance R. Williams; Melvin L. Warren; Susan B. Adams; Joseph L. Arvai; Christopher M. Taylor

    2004-01-01

    Basin Visual Estimation Techniques (BVET) are used to estimate abundance for fish populations in small streams. With BVET, independent samples are drawn from natural habitat units in the stream rather than sampling "representative reaches." This sampling protocol provides an alternative to traditional reach-level surveys, which are criticized for their lack...

  5. Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation

    Science.gov (United States)

    Cunningham, H. A.; Welch, Robert B.

    1994-01-01

    Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
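    A rotated visual-motor mapping of the kind used in such experiments is simply a 2D rotation inserted between device displacement and cursor displacement. A minimal sketch (the function name and angle argument are illustrative, not from the study):

```python
import math

def rotated_mapping(dx, dy, angle_deg=108.0):
    """Map an input-device displacement (dx, dy) to a cursor displacement
    under a visual-motor mapping rotated by angle_deg degrees.
    angle_deg=0 gives the 'normal' mapping (cursor follows the hand)."""
    a = math.radians(angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# Under the normal mapping a rightward hand movement moves the cursor right;
# under the 108-degree rotation the same movement goes up and to the left.
cx, cy = rotated_mapping(1.0, 0.0)
```

    Tracking error under the rotated mapping then reflects how well the nervous system has internalized the rotated relationship between hand and cursor.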

  6. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  7. Visualization of virtual slave manipulator using the master input device

    International Nuclear Information System (INIS)

    Kim, S. H.; Song, T. K.; Lee, J. Y.; Yoon, J. S.

    2003-01-01

    To handle high-level radioactive materials such as spent fuel, Master-Slave Manipulators (MSM) are widely used as remote handling devices in nuclear facilities such as hot cells with sealed and shielded spaces. In this paper, a digital mockup that simulates the remote operation of the Advanced Conditioning Process (ACP) is developed, and the workspace and motion of the slave manipulator, as well as the remote operation task, are analyzed. The ACP process equipment and the maintenance/handling device are drawn as 3D CAD models using IGRIP. The manipulator model is assigned various mobility attributes, such as relative position, kinematic constraints, and range of motion. A 3D graphic simulator driven by a space-ball external input device displays the movement of the manipulator. To connect the external input device to the graphic simulator, an interface program for the 6-DOF input device is designed using the Low Level Tele-operation Interface (LLTI). Experimental results show that the developed simulation system gives much-improved human interface characteristics and satisfactory response characteristics in terms of synchronization speed. This should be useful for the development of operator training systems in virtual environments.

  8. Sensory experience modifies feature map relationships in visual cortex

    Science.gov (United States)

    Cloherty, Shaun L; Hughes, Nicholas J; Hietanen, Markus A; Bhagavatula, Partha S

    2016-01-01

    The extent to which brain structure is influenced by sensory input during development is a critical but controversial question. A paradigmatic system for studying this is the mammalian visual cortex. Maps of orientation preference (OP) and ocular dominance (OD) in the primary visual cortex of ferrets, cats and monkeys can be individually changed by altered visual input. However, the spatial relationship between OP and OD maps has appeared immutable. Using a computational model we predicted that biasing the visual input to orthogonal orientation in the two eyes should cause a shift of OP pinwheels towards the border of OD columns. We then confirmed this prediction by rearing cats wearing orthogonally oriented cylindrical lenses over each eye. Thus, the spatial relationship between OP and OD maps can be modified by visual experience, revealing a previously unknown degree of brain plasticity in response to sensory input. DOI: http://dx.doi.org/10.7554/eLife.13911.001 PMID:27310531

  9. Visualizing the Verbal and Verbalizing the Visual.

    Science.gov (United States)

    Braden, Roberts A.

    This paper explores relationships of visual images to verbal elements, beginning with a discussion of visible language as represented by words printed on the page. The visual flexibility inherent in typography is discussed in terms of the appearance of the letters and the denotative and connotative meanings represented by type, typographical…

  10. Developmental and visual input-dependent regulation of the CB1 cannabinoid receptor in the mouse visual cortex.

    Directory of Open Access Journals (Sweden)

    Taisuke Yoneda

    The mammalian visual system exhibits significant experience-induced plasticity in the early postnatal period. While physiological studies have revealed the contribution of the CB1 cannabinoid receptor (CB1) to developmental plasticity in the primary visual cortex (V1), it remains unknown whether the expression and localization of CB1 is regulated during development or by visual experience. To explore a possible role of the endocannabinoid system in visual cortical plasticity, we examined the expression of CB1 in the visual cortex of mice. We found intense CB1 immunoreactivity in layers II/III and VI. CB1 mainly localized at vesicular GABA transporter-positive inhibitory nerve terminals. The amount of CB1 protein increased throughout development, and the specific laminar pattern of CB1 appeared at P20 and remained until adulthood. Dark rearing from birth to P30 decreased the amount of CB1 protein in V1 and altered the synaptic localization of CB1 in the deep layer. Dark rearing until P50, however, did not influence the expression of CB1. Brief monocular deprivation for 2 days upregulated the localization of CB1 at inhibitory nerve terminals in the deep layer. Taken together, the expression and the localization of CB1 are developmentally regulated, and both parameters are influenced by visual experience.

  11. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
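    Miller's race model inequality, used above to interpret the redundancy gains, states that under a race account the audiovisual RT distribution can never exceed the sum of the unisensory distributions: P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V) for every t. A sketch of the standard empirical test (the simulated RTs below are illustrative, not the study's data):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Maximum violation of Miller's race model inequality over t_grid.

    Compares the empirical CDF of audiovisual RTs against the (capped) sum
    of the unisensory CDFs; a positive return value means the inequality is
    violated, implying genuine multisensory integration."""
    def ecdf(rts, t):
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return float(np.max(ecdf(rt_av, t_grid) - bound))

rng = np.random.default_rng(1)
rt_a = rng.normal(300, 40, 200)    # auditory-only RTs (ms)
rt_v = rng.normal(320, 40, 200)    # visual-only RTs (ms)
rt_av = rng.normal(230, 30, 200)   # bimodal RTs, faster than the race bound
t = np.linspace(150, 450, 61)
violation = race_model_violation(rt_a, rt_v, rt_av, t)
```

    When the bimodal RTs are merely as fast as the faster unisensory condition, the returned value stays at or below zero; RTs that surpass the race model's prediction, as in the redundancy gains reported above, yield a positive violation.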

  12. Impact of enhanced sensory input on treadmill step frequency: infants born with myelomeningocele.

    Science.gov (United States)

    Pantall, Annette; Teulier, Caroline; Smith, Beth A; Moerchen, Victoria; Ulrich, Beverly D

    2011-01-01

    To determine the effect of enhanced sensory input on the step frequency of infants with myelomeningocele (MMC) when supported on a motorized treadmill. Twenty-seven infants aged 2 to 10 months with MMC lesions at, or caudal to, L1 participated. We supported infants upright on the treadmill for 2 sets of 6 trials, each 30 seconds long. Enhanced sensory inputs within each set were presented in random order and included baseline, visual flow, unloading, weights, Velcro, and friction. Overall, friction and visual flow significantly increased step rate, particularly for the older subjects. Friction and Velcro increased stance-phase duration. Enhanced sensory input had minimal effect on leg activity when infants were not stepping. Increased friction via Dycem and enhanced visual flow via a checkerboard pattern on the treadmill belt appear to be more effective than the traditional smooth black belt surface for eliciting stepping patterns in infants with MMC.

  13. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    Science.gov (United States)

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  14. Music Alters Visual Perception

    NARCIS (Netherlands)

    Jolij, Jacob; Meurs, Maaike

    2011-01-01

Background: Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. Especially the

  15. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    Science.gov (United States)

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.
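As a rough illustration of the velocity-based measures named above (peak gain overall, residual gain during blanking), the sketch below computes gains from matched eye- and target-velocity traces. The function and its inputs are hypothetical, not the study's analysis pipeline.

```python
import numpy as np

def pursuit_gains(eye_vel, target_vel, blank_mask):
    """Peak gain (overall performance) and residual gain during target
    blanking, from time-aligned eye and target velocity traces (deg/s).
    blank_mask marks samples where the target was invisible."""
    eye = np.asarray(eye_vel, dtype=float)
    tgt = np.asarray(target_vel, dtype=float)
    gain = eye / np.where(tgt == 0, np.nan, tgt)   # avoid division by zero
    peak_gain = np.nanmax(gain)
    residual_gain = np.nanmean(gain[np.asarray(blank_mask, bool)])
    return peak_gain, residual_gain
```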

  16. Dynamic visual noise interferes with storage in visual working memory.

    Science.gov (United States)

    Dean, Graham M; Dewhurst, Stephen A; Whittaker, Annalise

    2008-01-01

    Several studies have demonstrated that dynamic visual noise (DVN) does not interfere with memory for random matrices. This has led to suggestions that (a) visual working memory is distinct from imagery, and (b) visual working memory is not a gateway between sensory input and long-term storage. A comparison of the interference effects of DVN with memory for matrices and colored textures shows that DVN can interfere with visual working memory, probably at a level of visual detail not easily supported by long-term memory structures or the recoding of the visual pattern elements. The results support a gateway model of visuospatial working memory and raise questions about the most appropriate ways to measure and model the different levels of representation of information that can be held in visual working memory.

  17. Visual communication and terminal equipment

    International Nuclear Information System (INIS)

    Kang, Cheol Hui

    1988-06-01

This book is divided into two parts, on visual communication and terminal equipment. The first part introduces visual communication, covering the foundations of visual communication, techniques of visual communication, visual communication equipment, facsimile, and pictorial image systems. The second part covers terminal equipment, including the telephone, terminal equipment for data transmission (its constitution and constituents), input and output devices, and up-to-date terminal devices.

  19. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
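A ray-based representation can be made concrete with the common two-plane parameterization of a light ray. The sketch below is illustrative only and is not the paper's exact ray-data format:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Two-plane (u, v, s, t) parameterization of a light ray: intersect
    the ray with the planes z = z_uv and z = z_st. Rays parallel to the
    planes (direction[2] == 0) are not representable in this scheme."""
    o = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    u, v = o[:2] + d[:2] * (z_uv - o[2]) / d[2]
    s, t = o[:2] + d[:2] * (z_st - o[2]) / d[2]
    return u, v, s, t
```

A dense set of such (u, v, s, t) samples is one way to store "a set of light rays" independently of any particular camera or display.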

  20. Multidimensional structured data visualization method and apparatus, text visualization method and apparatus, method and apparatus for visualizing and graphically navigating the world wide web, method and apparatus for visualizing hierarchies

    Science.gov (United States)

    Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA

    2008-05-13

    A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.

  1. Semantics by analogy for illustrative volume visualization

    NARCIS (Netherlands)

    Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Groeller, Eduard

    We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping.

  2. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    Science.gov (United States)

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.

  3. Neuron analysis of visual perception

    Science.gov (United States)

    Chow, K. L.

    1980-01-01

The receptive fields of single cells in the visual system of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. Also studied were the receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional reorganization of the visual system following neonatal and prenatal surgery. The results of each individual part of each investigation are detailed.

  4. Visually guided adjustments of body posture in the roll plane

    OpenAIRE

    Tarnutzer, A A; Bockisch, C J; Straumann, D

    2013-01-01

Body position relative to gravity is continuously updated to prevent falls. Therefore, the brain integrates input from the otoliths, truncal graviceptors, proprioception, and vision. Without visual cues, the estimated direction of gravity mainly depends on otolith input and becomes more variable with increasing roll-tilt. In contrast, the discrimination threshold for object orientation shows little modulation with varying roll orientation of the visual stimulus. Providing earth-stationary visual cues,...

  5. Quantitative analysis of gait in the visually impaired.

    Science.gov (United States)

    Nakamura, T

    1997-05-01

    In this comparative study concerning characteristics of independent walking by visually impaired persons, we used a motion analyser system to perform gait analysis of 15 late blind (age 36-54, mean 44.3 years), 15 congenitally blind (age 39-48, mean 43.8 years) and 15 sighted persons (age 40-50, mean 44.4 years) while walking a 10-m walkway. All subjects were male. Compared to the sighted, late blind and congenitally blind persons had a significantly slower walking speed, shorter stride length and longer time in the stance phase of gait. However, the relationships between gait parameters in the late and congenitally blind groups were maintained, as in the sighted group. In addition, the gait of the late blind showed a tendency to approximate the gait patterns of the congenitally blind as the duration of visual loss progressed. Based on these results we concluded that the gait of visually impaired persons, through its active use of non-visual sensory input, represents an attempt to adapt to various environmental conditions in order to maintain a more stable posture and to effect safe walking.
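The gait parameters compared above (walking speed, stride length/time, stance-phase duration) reduce to simple arithmetic on footfall timing. The following is a minimal sketch with hypothetical inputs, not the motion-analyser system's output format:

```python
def gait_parameters(distance_m, heel_strikes, toe_offs):
    """Basic gait parameters for one leg over a walkway of length
    distance_m, given heel-strike and toe-off times (s) for successive
    steps. Stance phase runs from heel strike to the following toe-off."""
    total_time = heel_strikes[-1] - heel_strikes[0]
    strides = len(heel_strikes) - 1
    stance = [toe - heel for heel, toe in zip(heel_strikes, toe_offs)]
    return {
        "speed_m_s": distance_m / total_time,
        "stride_time_s": total_time / strides,
        "mean_stance_s": sum(stance) / len(stance),
    }
```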

  6. Representing Color Ensembles.

    Science.gov (United States)

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
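One way to picture what it means to encode the shape of a color distribution, not just its mean and variance, is through the "surprise" (negative log-likelihood) each model assigns to a probe color: flat within the range for a uniform representation, growing quadratically with distance from the mean for a Gaussian one. This toy model is our illustration, not the authors' analysis.

```python
import math

def surprise_uniform(x, lo, hi):
    """Negative log-likelihood of probe x under a uniform ensemble:
    constant inside [lo, hi], infinite outside."""
    return math.log(hi - lo) if lo <= x <= hi else math.inf

def surprise_gaussian(x, mu, sigma):
    """Negative log-likelihood under a Gaussian ensemble:
    grows quadratically with |x - mu|."""
    return 0.5 * ((x - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))
```

If search response times track surprise, the two models predict the distinct flat versus graded RT profiles that distinguish uniform from Gaussian distractor distributions.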

  7. Anatomy and physiology of the afferent visual system.

    Science.gov (United States)

    Prasad, Sashank; Galetta, Steven L

    2011-01-01

    The efficient organization of the human afferent visual system meets enormous computational challenges. Once visual information is received by the eye, the signal is relayed by the retina, optic nerve, chiasm, tracts, lateral geniculate nucleus, and optic radiations to the striate cortex and extrastriate association cortices for final visual processing. At each stage, the functional organization of these circuits is derived from their anatomical and structural relationships. In the retina, photoreceptors convert photons of light to an electrochemical signal that is relayed to retinal ganglion cells. Ganglion cell axons course through the optic nerve, and their partial decussation in the chiasm brings together corresponding inputs from each eye. Some inputs follow pathways to mediate pupil light reflexes and circadian rhythms. However, the majority of inputs arrive at the lateral geniculate nucleus, which relays visual information via second-order neurons that course through the optic radiations to arrive in striate cortex. Feedback mechanisms from higher cortical areas shape the neuronal responses in early visual areas, supporting coherent visual perception. Detailed knowledge of the anatomy of the afferent visual system, in combination with skilled examination, allows precise localization of neuropathological processes and guides effective diagnosis and management of neuro-ophthalmic disorders. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Output, Input Enhancement, and the Noticing Hypothesis: An Experimental Study on ESL Relativization.

    Science.gov (United States)

    Izumi, Shinichi

    2002-01-01

    Investigates potentially facilitative effects of internal and external attention-drawing devices--output and visual input enhancement--on acquisition of English relativization by adult English-as-a-Second-Language (ESL) learners. Addresses whether producing output promotes noticing of formal elements in target language input and affects subsequent…

  9. Visual working memory as visual attention sustained internally over time.

    Science.gov (United States)

    Chun, Marvin M

    2011-05-01

    Visual working memory and visual attention are intimately related, such that working memory encoding and maintenance reflects actively sustained attention to a limited number of visual objects and events important for ongoing cognition and action. Although attention is typically considered to operate over perceptual input, a recent taxonomy proposes to additionally consider how attention can be directed to internal perceptual representations in the absence of sensory input, as well as other internal memories, choices, and thoughts (Chun, Golomb, & Turk-Browne, 2011). Such internal attention enables prolonged binding of features into integrated objects, along with enhancement of relevant sensory mechanisms. These processes are all limited in capacity, although different types of working memory and attention, such as spatial vs. object processing, operate independently with separate capacity. Overall, the success of maintenance depends on the ability to inhibit both external (perceptual) and internal (cognitive) distraction. Working memory is the interface by which attentional mechanisms select and actively maintain relevant perceptual information from the external world as internal representations within the mind. Copyright © 2011. Published by Elsevier Ltd.

  10. Development and validation of gui based input file generation code for relap

    International Nuclear Information System (INIS)

    Anwar, M.M.; Khan, A.A.; Chughati, I.R.; Chaudri, K.S.; Inyat, M.H.; Hayat, T.

    2009-01-01

Reactor Excursion and Leak Analysis Program (RELAP) is a widely accepted computer code for thermal-hydraulics modeling of nuclear power plants. It calculates thermal-hydraulic transients in water-cooled nuclear reactors by solving approximations to the one-dimensional, two-phase equations of hydraulics in an arbitrarily connected system of nodes. However, the preparation of the input file and the subsequent analysis of results in this code is a tedious task. A Graphical User Interface (GUI) for preparing the RELAP-5 input file was therefore developed, and the GUI-generated input file was validated. The GUI is developed in Microsoft Visual Studio using Visual C# as the programming language. The nodalization diagram is drawn graphically, and the program contains various component forms, along with the starting data form, which are launched for property assignment to generate the input file cards. The GUI provides an Open/Save function to store and recall the nodalization diagram along with the components' properties. The GUI-generated input file was validated for several case studies, and individual component cards were compared with the originally required format. The generated input file was found consistent with the requirements of RELAP. The GUI provides a useful platform for efficiently simulating complex hydrodynamic problems with RELAP. (author)

  11. Maturation of GABAergic inhibition promotes strengthening of temporally coherent inputs among convergent pathways.

    Directory of Open Access Journals (Sweden)

    Sandra J Kuhlman

    2010-06-01

Spike-timing-dependent plasticity (STDP), a form of Hebbian plasticity, is inherently stabilizing. Whether and how GABAergic inhibition influences STDP is not well understood. Using a model neuron driven by converging inputs modifiable by STDP, we determined that a sufficient level of inhibition was critical to ensure that the temporal coherence (correlation among presynaptic spike times) of synaptic inputs, rather than the initial strength or number of inputs within a pathway, controlled postsynaptic spike timing. Inhibition exerted this effect by preferentially reducing the synaptic efficacy, i.e., the ability of inputs to evoke postsynaptic action potentials, of the less coherent inputs. In visual cortical slices, inhibition potently reduced synaptic efficacy at ages during, but not before, the critical period of ocular dominance (OD) plasticity. Whole-cell recordings revealed that the amplitude of unitary IPSCs from parvalbumin-positive (Pv+) interneurons to pyramidal neurons increased during the critical period, while the synaptic decay time-constant decreased. In addition, intrinsic properties of Pv+ interneurons matured, resulting in an increase in instantaneous firing rate. Our results suggest that maturation of inhibition in visual cortex ensures that temporally coherent inputs (e.g., those from the open eye during monocular deprivation) control the postsynaptic spike times of binocular neurons, a prerequisite for Hebbian mechanisms to induce OD plasticity.
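The STDP rule driving such a model is conventionally an exponential timing window: pre-before-post spiking potentiates a synapse, post-before-pre depresses it. The parameter values below are illustrative defaults, not those of the study.

```python
import math

def stdp_dw(dt, a_plus=0.005, a_minus=0.00525, tau=20.0):
    """Weight change for a single spike pair, with dt = t_post - t_pre (ms).
    dt > 0 (pre leads post): potentiation; dt <= 0: depression.
    a_minus is slightly larger than a_plus so the rule is net-stabilizing."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Under this rule, inputs whose spikes reliably precede the postsynaptic spike (i.e., temporally coherent inputs) are the ones that strengthen, which is the competition that inhibition shapes in the model.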

  12. VISUALIZATION METHODS OF VORTICAL FLOWS IN COMPUTATIONAL FLUID DYNAMICS AND THEIR APPLICATIONS

    Directory of Open Access Journals (Sweden)

    K. N. Volkov

    2014-05-01

The paper deals with concepts and methods for the visual representation of numerical research results in problems of fluid and gas mechanics. The three-dimensional nature of the unsteady flows being simulated creates significant difficulties for the visual representation of results, complicating the control and understanding of numerical data as well as the exchange and processing of information about the flow field. Approaches to the visualization of vortical flows using gradients of primary and secondary scalar and vector fields are discussed. An overview of visualization techniques for vortical flows using different definitions of a vortex and its identification criteria is given. Visualization examples are presented for some solutions of gas dynamics problems related to calculations of jets and cavity flows. Ideas on the vortical structure of the free non-isothermal jet and the formation of coherent vortex structures in the mixing layer are developed. The formation patterns of spatial flows inside large-scale vortical structures within the enclosed space of a cubic lid-driven cavity are analyzed. The singular points of the vortex flow in a cubic lid-driven cavity are found based on the results of numerical simulation; their type and location are identified depending on the Reynolds number. Calculations are performed with fine meshes and modern approaches to the simulation of vortical flows (direct numerical simulation and large-eddy simulation). The paradigm of graphical programming and the COVISE virtual environment are used for the visual representation of computational results. The application that implements the visualization is represented as a network whose links are modules, each designed to solve a case-specific problem. Interaction between modules is carried out through input and output ports (data receipt and data transfer), making it possible to use various input and output devices.
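Among the vortex-identification criteria such surveys cover, the Q-criterion is one of the most common: it marks regions where rotation dominates strain in the velocity-gradient tensor. A minimal sketch (our illustration, not the paper's code):

```python
import numpy as np

def q_criterion(grad_u):
    """Q-criterion from a 3x3 velocity-gradient tensor grad_u[i][j] = du_i/dx_j.
    Q = 0.5 * (||Omega||^2 - ||S||^2) with Frobenius norms of the rotation
    tensor Omega and strain-rate tensor S; Q > 0 marks vortical regions."""
    g = np.asarray(grad_u, dtype=float)
    s = 0.5 * (g + g.T)   # strain-rate tensor (symmetric part)
    w = 0.5 * (g - g.T)   # rotation tensor (antisymmetric part)
    return 0.5 * (np.sum(w * w) - np.sum(s * s))
```

Evaluated on every cell of a computed flow field, the sign of Q yields the scalar field whose isosurfaces are typically rendered to visualize vortex cores.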

  13. KENO2MCNP, Version 5L, Conversion of Input Data between KENOV.a and MCNP File Formats

    International Nuclear Information System (INIS)

    2008-01-01

1 - Description of program or function: The KENO2MCNP program was written to convert KENO V.a input files to MCNP format. The program currently works only with KENO V.a geometries and will not work with geometries that contain more than a single array. A C++ graphical user interface was created and linked to Fortran routines from KENO V.a that read the material library, and to Fortran routines from the MCNP Visual Editor that generate the MCNP input file. Either SCALE 5.0 or SCALE 5.1 cross-section files will work with this release. 2 - Methods: The C++ binary executable reads the KENO V.a input file, the KENO V.a material library, and the SCALE data libraries. When an input file is read in, the input is stored in memory. The converter loads the different sections of the input file into memory, including parameters, composition, geometry information, array information, and starting information. Many of the KENO V.a materials represent compositions that must be read from the KENO V.a material library; KENO2MCNP includes the KENO V.a Fortran routines used to read this material file for creating the MCNP materials. Once the file has been read in, the user must select 'Convert' to convert the file from KENO V.a to MCNP. This generates the MCNP input file along with an output window that lists the KENO V.a composition information for the materials contained in the KENO V.a input file. The program can be run interactively by clicking on the executable or in batch mode from the command prompt. 3 - Restrictions on the complexity of the problem: Not all KENO V.a input files are supported. Only one array is allowed in the input file. Some of the more complex material descriptions also may not be converted.

  14. LSD alters eyes-closed functional connectivity within the early visual cortex in a retinotopic fashion.

    Science.gov (United States)

    Roseman, Leor; Sereno, Martin I; Leech, Robert; Kaelen, Mendel; Orban, Csaba; McGonigle, John; Feilding, Amanda; Nutt, David J; Carhart-Harris, Robin L

    2016-08-01

The question of how spatially organized activity in the visual cortex behaves during eyes-closed, lysergic acid diethylamide (LSD)-induced "psychedelic imagery" (e.g., visions of geometric patterns and more complex phenomena) has never been empirically addressed, although it has been proposed that under psychedelics, with eyes-closed, the brain may function "as if" there is visual input when there is none. In this work, resting-state functional connectivity (RSFC) data was analyzed from 10 healthy subjects under the influence of LSD and, separately, placebo. It was suspected that eyes-closed psychedelic imagery might involve transient local retinotopic activation, of the sort typically associated with visual stimulation. To test this, it was hypothesized that, under LSD, patches of the visual cortex with congruent retinotopic representations would show greater RSFC than incongruent patches. Using a retinotopic localizer performed during a nondrug baseline condition, nonadjacent patches of V1 and V3 that represent the vertical or the horizontal meridians of the visual field were identified. Subsequently, RSFC between V1 and V3 was measured with respect to these a priori identified patches. Consistent with our prior hypothesis, the difference between RSFC of patches with congruent retinotopic specificity (horizontal-horizontal and vertical-vertical) and those with incongruent specificity (horizontal-vertical and vertical-horizontal) increased significantly under LSD relative to placebo, suggesting that activity within the visual cortex becomes more dependent on its intrinsic retinotopic organization in the drug condition. This result may indicate that under LSD, with eyes-closed, the early visual system behaves as if it were seeing spatially localized visual inputs. Hum Brain Mapp 37:3031-3040, 2016. © 2016 Wiley Periodicals, Inc.
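The congruence contrast at the heart of this design reduces to comparing Pearson correlations between patch time series. A minimal sketch with hypothetical function names (not the study's pipeline):

```python
import numpy as np

def rsfc(ts_a, ts_b):
    """Resting-state functional connectivity as the Pearson correlation
    of two BOLD time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

def congruence_effect(v1_h, v1_v, v3_h, v3_v):
    """Mean congruent V1-V3 connectivity (horizontal-horizontal and
    vertical-vertical) minus mean incongruent connectivity (horizontal-
    vertical and vertical-horizontal). Positive values indicate
    retinotopically specific coupling."""
    congruent = (rsfc(v1_h, v3_h) + rsfc(v1_v, v3_v)) / 2
    incongruent = (rsfc(v1_h, v3_v) + rsfc(v1_v, v3_h)) / 2
    return congruent - incongruent
```

Computing this difference per condition and testing LSD against placebo is, in outline, the comparison the abstract reports.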

  15. Prefrontal Neurons Represent Motion Signals from Across the Visual Field But for Memory-Guided Comparisons Depend on Neurons Providing These Signals.

    Science.gov (United States)

    Wimmer, Klaus; Spinelli, Philip; Pasternak, Tatiana

    2016-09-07

Visual decisions often involve comparisons of sequential stimuli that can appear at any location in the visual field. The lateral prefrontal cortex (LPFC) in nonhuman primates, shown to play an important role in such comparisons, receives information about contralateral stimuli directly from sensory neurons in the same hemisphere, and about ipsilateral stimuli indirectly from neurons in the opposite hemisphere. This asymmetry of sensory inputs into the LPFC poses the question of whether and how its neurons incorporate sensory information arriving from the two hemispheres during memory-guided comparisons of visual motion. We found that, although responses of individual LPFC neurons to contralateral stimuli were stronger and emerged 40 ms earlier, they carried remarkably similar signals about motion direction in the two hemifields, with comparable direction selectivity and similar direction preferences. This similarity was also apparent around the time of the comparison between the current and remembered stimulus because both ipsilateral and contralateral responses showed similar signals reflecting the remembered direction. However, despite availability in the LPFC of motion information from across the visual field, these "comparison effects" required the comparison stimuli to appear at the same retinal location. This strict dependence on spatial overlap of the comparison stimuli suggests participation of neurons with localized receptive fields in the comparison process. These results suggest that while LPFC incorporates many key aspects of the information arriving from sensory neurons residing in opposite hemispheres, it continues relying on the interactions with these neurons at the time of generating signals leading to successful perceptual decisions. Visual decisions often involve comparisons of sequential visual motion that can appear at any location in the visual field. We show that during such comparisons, the lateral prefrontal cortex (LPFC) contains

  16. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

Fine-scale temporal organization of cortical activity in the gamma range (∼25-80 Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency codes

  17. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.
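The feedforward-inhibition mechanism can be caricatured in a two-line rate model: AC input drives PV+ interneurons, which subtract from the VC contribution to the PTLp response. This toy model is our illustration of the circuit logic, not the authors' model.

```python
def ptlp_response(vc_input, ac_input, w_inh=1.0):
    """Toy rate model of the PTLp: PV+ interneurons, driven mainly by
    auditory cortex (AC) input, suppress the visual cortex (VC)
    contribution, so co-activation leaves the AC-driven response."""
    pv_drive = ac_input                            # PV+ cells receive AC input
    vc_effective = max(0.0, vc_input - w_inh * pv_drive)  # rectified inhibition
    return vc_effective + ac_input
```

In this caricature, removing the inhibition (w_inh = 0) makes the VC and AC inputs sum symmetrically, which is the behavior the muscimol and optogenetic inactivation experiments approximate.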

  18. Temporal visual cues aid speech recognition

    DEFF Research Database (Denmark)

    Zhou, Xiang; Ross, Lars; Lehn-Schiøler, Tue

    2006-01-01

    BACKGROUND: It is well known that under noisy conditions, viewing a speaker's articulatory movement aids the recognition of spoken words. Conventionally it is thought that the visual input disambiguates otherwise confusing auditory input. HYPOTHESIS: In contrast, we hypothesize that it is the temporal synchronicity of the visual input that aids parsing of the auditory stream. More specifically, we expected that purely temporal information, which does not convey information such as place of articulation, may facilitate word recognition. METHODS: To test this prediction we used temporal features of audio to generate an artificial talking-face video and measured word recognition performance on simple monosyllabic words. RESULTS: When presenting words together with the artificial video we find that word recognition is improved over purely auditory presentation. The effect is significant (p...

  19. Six axis force feedback input device

    Science.gov (United States)

    Ohm, Timothy (Inventor)

    1998-01-01

    The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.

  20. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    Directory of Open Access Journals (Sweden)

    Li Zhaoping

    Full Text Available From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.

  1. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    Science.gov (United States)

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.
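The race prediction mentioned in this abstract, where the redundant-target RT is predicted by whichever of the two single-feature detections finishes first, can be sketched with a small Monte Carlo simulation. The Gaussian RT distributions and the independent pairing are hypothetical simplifications for illustration, not the authors' fitted model.

```python
import random

def race_model_rt(rt_samples_a, rt_samples_b):
    """Predicted redundant-target RTs under an independent race:
    pair the two single-feature RT samples at random, take the minimum."""
    rt_b = rt_samples_b[:]           # copy so the caller's list is untouched
    random.shuffle(rt_b)
    return [min(a, b) for a, b in zip(rt_samples_a, rt_b)]

random.seed(0)
# Hypothetical single-feature RT distributions (ms): color (C) and orientation (O).
rt_c = [random.gauss(550, 60) for _ in range(10000)]
rt_o = [random.gauss(580, 70) for _ in range(10000)]
predicted_co = race_model_rt(rt_c, rt_o)
mean_pred = sum(predicted_co) / len(predicted_co)
# The race prediction is faster than either single-feature mean; an observed
# CO mean faster still than this prediction is the signature attributed to
# conjunctively (CO-)tuned cells in the abstract.
```

The interesting empirical quantity is thus the gap between observed redundant-target RTs and `mean_pred`, not `mean_pred` itself.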

  2. Coupling Visualization, Simulation, and Deep Learning for Ensemble Steering of Complex Energy Models: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bugbee, Bruce [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-09

    We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance the engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
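The reduced-form proxy idea can be sketched as fitting a cheap regression to a sample of expensive simulation runs and then querying the fit at interactive speed. Everything below is a toy stand-in (a quadratic "simulation" and a least-squares surrogate), not NREL's actual models or dataflow API.

```python
def expensive_simulation(x):
    """Stand-in for a slow energy-system simulation with one input parameter."""
    return 3.0 * x + 0.5 * x * x + 1.0

def fit_quadratic_surrogate(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the 3x3 normal equations,
    solved with Gaussian elimination (no external libraries needed)."""
    S = [sum(x ** k for x in xs) for k in range(5)]               # moments x^0..x^4
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    rhs = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    for i in range(3):                                            # forward elimination
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            rhs[j] -= f * rhs[i]
    coef = [0.0, 0.0, 0.0]
    for i in reversed(range(3)):                                  # back substitution
        coef[i] = (rhs[i] - sum(A[i][k] * coef[k] for k in range(i + 1, 3))) / A[i][i]
    return coef

xs = [i * 0.5 for i in range(11)]            # sampled input ensemble
ys = [expensive_simulation(x) for x in xs]   # "slow" runs done offline
a, b, c = fit_quadratic_surrogate(xs, ys)
fast_estimate = a + b * 2.0 + c * 4.0        # interactive-speed proxy at x = 2
```

Because the toy response is exactly quadratic, the surrogate reproduces the full simulation at unsampled inputs; in practice the proxy carries an approximation error that must be validated against fresh full-form runs.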

  3. Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us about How the Brain Might Represent Visual Information?

    Directory of Open Access Journals (Sweden)

    John K. Tsotsos

    2017-08-01

    Full Text Available Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches to the fixed resources of biological seeing systems; and, to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide.

  4. The non-lemniscal auditory cortex in ferrets: convergence of corticotectal inputs in the superior colliculus

    Directory of Open Access Journals (Sweden)

    Victoria M Bajo

    2010-05-01

    Full Text Available Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically-organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically-responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers.

  5. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  6. The Effects of Audiovisual Inputs on Solving the Cocktail Party Problem in the Human Brain: An fMRI Study.

    Science.gov (United States)

    Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence

    2017-09-25

    At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification

    Science.gov (United States)

    Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade

    2013-05-01

    As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input streams they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn't require extended listening, as visual "snapshots" are useful but auditory sounds only exist over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses information infrastructure and data representation concerns required with their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.
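The core of any sonification pipeline is a mapping from data values to auditory parameters. A minimal sketch (the linear value-to-pitch mapping and the sensor stream below are hypothetical choices, not the paper's framework) shows how an anomaly in a stream becomes an outlying pitch:

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map data samples linearly onto a pitch range (Hz) so anomalies stand
    out as outlying pitches; a real system would also map timbre, pan, etc."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0          # guard against a constant stream
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

stream = [0.1, 0.12, 0.11, 0.95, 0.13]   # hypothetical sensor stream with a spike
pitches = sonify(stream)
# The anomalous sample 0.95 maps to the top of the pitch range (880 Hz),
# while the baseline samples cluster near the bottom.
```

In a distributed SOA setting, a mapping like this would run as a service on each stream, with the analyst listening to the merged audio rather than watching every dashboard.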

  8. Visualization of Uncertain Contour Trees

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Contour trees can represent the topology of large volume data sets in a relatively compact, discrete data structure. However, the resulting trees often contain many thousands of nodes; thus, many graph drawing techniques fail to produce satisfactory results. Therefore, several visualization methods were proposed recently for the visualization of contour trees. Unfortunately, none of these techniques is able to handle uncertain contour trees although any uncertainty of the volume data inevitably results in partially uncertain contour trees. In this work, we visualize uncertain contour trees by combining the contour trees of two morphologically filtered versions of a volume data set, which represent the range of uncertainty. These two contour trees are combined and visualized within a single image such that a range of potential contour trees is represented by the resulting visualization. Thus...

  9. The effect of early visual deprivation on the neural bases of multisensory processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Load-Dependent Increases in Delay-Period Alpha-Band Power Track the Gating of Task-Irrelevant Inputs to Working Memory

    Directory of Open Access Journals (Sweden)

    Andrew J. Heinz

    2017-05-01

    Full Text Available Studies exploring the role of neural oscillations in cognition have revealed sustained increases in alpha-band power (ABP during the delay period of verbal and visual working memory (VWM tasks. There have been various proposals regarding the functional significance of such increases, including the inhibition of task-irrelevant cortical areas as well as the active retention of information in VWM. The present study examines the role of delay-period ABP in mediating the effects of interference arising from on-going visual processing during a concurrent VWM task. Specifically, we reasoned that, if set-size dependent increases in ABP represent the gating out of on-going task-irrelevant visual inputs, they should be predictive with respect to some modulation in visual evoked potentials resulting from a task-irrelevant delay period probe stimulus. In order to investigate this possibility, we recorded the electroencephalogram while subjects performed a change detection task requiring the retention of two or four novel shapes. On a portion of trials, a novel, task-irrelevant bilateral checkerboard probe was presented mid-way through the delay. Analyses focused on examining correlations between set-size dependent increases in ABP and changes in the magnitude of the P1, N1 and P3a components of the probe-evoked response and how such increases might be related to behavior. Results revealed that increased delay-period ABP was associated with changes in the amplitude of the N1 and P3a event-related potential (ERP components, and with load-dependent changes in capacity when the probe was presented during the delay. We conclude that load-dependent increases in ABP likely play a role in supporting short-term retention by gating task-irrelevant sensory inputs and suppressing potential sources of disruptive interference.

  11. Postural response to predictable and nonpredictable visual flow in children and adults.

    Science.gov (United States)

    Schmuckler, Mark A

    2017-11-01

    Children's (3-5 years) and adults' postural reactions to different conditions of visual flow information varying in frequency content were examined using a moving room apparatus. Both groups experienced four conditions of visual input: low-frequency (0.20 Hz) visual oscillations, high-frequency (0.60 Hz) oscillations, multifrequency nonpredictable visual input, and no imposed visual information. Analyses of the frequency content of anterior-posterior (AP) sway revealed that postural reactions to the single-frequency conditions replicated previous findings; children were responsive to low- and high-frequency oscillations, whereas adults were responsive to low-frequency information. Extending previous work, AP sway in response to the nonpredictable condition revealed that both groups were responsive to the different components contained in the multifrequency visual information, although adults retained their frequency selectivity to low-frequency versus high-frequency content. These findings are discussed in relation to work examining feedback versus feedforward control of posture, and the reweighting of sensory inputs for postural control, as a function of development and task context. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Visual masking and the dynamics of human perception, cognition, and consciousness A century of progress, a contemporary synthesis, and future directions.

    Science.gov (United States)

    Ansorge, Ulrich; Francis, Gregory; Herzog, Michael H; Oğmen, Haluk

    2008-07-15

    The 1990s, the "decade of the brain," witnessed major advances in the study of visual perception, cognition, and consciousness. Impressive techniques in neurophysiology, neuroanatomy, neuropsychology, electrophysiology, psychophysics and brain-imaging were developed to address how the nervous system transforms and represents visual inputs. Many of these advances have dealt with the steady-state properties of processing. To complement this "steady-state approach," more recent research emphasized the importance of dynamic aspects of visual processing. Visual masking has been a paradigm of choice for more than a century when it comes to the study of dynamic vision. A recent workshop (http://lpsy.epfl.ch/VMworkshop/), held in Delmenhorst, Germany, brought together an international group of researchers to present state-of-the-art research on dynamic visual processing with a focus on visual masking. This special issue presents peer-reviewed contributions by the workshop participants and provides a contemporary synthesis of how visual masking can inform the dynamics of human perception, cognition, and consciousness.

  13. Prediction of visual saliency in video with deep CNNs

    Science.gov (United States)

    Chaabouni, Souad; Benois-Pineau, Jenny; Hadar, Ofer

    2016-09-01

    Prediction of visual saliency in images and video is a highly researched topic. Target applications include quality assessment of multimedia services in mobile context, video compression techniques, recognition of objects in video streams, etc. In the framework of mobile and egocentric perspectives, visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory. Nor is the central bias hypothesis respected. In this case, the top-down component of human visual attention becomes prevalent. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNN) have proven to be a powerful tool for prediction of salient areas in stills. In our work we also focus on sensitivity of the human visual system to residual motion in a video. A Deep CNN architecture is designed, where we incorporate input primary maps as color values of pixels and magnitude of local residual motion. Complementary contrast maps allow for a slight increase of accuracy compared to the use of color and residual motion only. The experiments show that the choice of the input features for the Deep CNN depends on the visual task: for interest in dynamic content, the 4K model with residual motion is more efficient, while for object recognition in egocentric video the pure spatial input is more appropriate.
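The residual-motion input map described in this abstract can be sketched in a few lines. As an assumption (the paper's exact estimator is not given here), residual motion is taken to be the per-pixel motion left over after subtracting a global mean-motion estimate of the camera/ego-motion; the 4-pixel flow field is purely illustrative.

```python
def residual_motion_magnitude(flow):
    """flow: list of (dx, dy) motion vectors, one per pixel.
    Subtract the global mean motion (a crude camera/ego-motion estimate)
    and return the per-pixel magnitude of the residual."""
    n = len(flow)
    mx = sum(dx for dx, _ in flow) / n
    my = sum(dy for _, dy in flow) / n
    return [((dx - mx) ** 2 + (dy - my) ** 2) ** 0.5 for dx, dy in flow]

# Hypothetical flow field: the camera pans right (+1, 0) everywhere, but one
# pixel belongs to an independently moving object (+3, 0).
flow = [(1.0, 0.0), (1.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
mags = residual_motion_magnitude(flow)
# Only the independently moving pixel keeps a large residual; this magnitude
# map would then be stacked with the color channels as CNN input planes.
```

Stacking such a map alongside the RGB planes is what turns the still-image saliency CNN into one sensitive to residual motion.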

  14. Transformation priming helps to disambiguate sudden changes of sensory inputs.

    Science.gov (United States)

    Pastukhov, Alexander; Vivian-Griffiths, Solveiga; Braun, Jochen

    2015-11-01

    Retinal input is riddled with abrupt transients due to self-motion, changes in illumination, object-motion, etc. Our visual system must correctly interpret each of these changes to keep visual perception consistent and sensitive. This poses an enormous challenge, as many transients are highly ambiguous in that they are consistent with many alternative physical transformations. Here we investigated inter-trial effects in three situations with sudden and ambiguous transients, each presenting two alternative appearances (rotation-reversing structure-from-motion, polarity-reversing shape-from-shading, and streaming-bouncing object collisions). In every situation, we observed priming of transformations as the outcome perceived in earlier trials tended to repeat in subsequent trials and this repetition was contingent on perceptual experience. The observed priming was specific to transformations and did not originate in priming of perceptual states preceding a transient. Moreover, transformation priming was independent of attention and specific to low level stimulus attributes. In summary, we show how "transformation priors" and experience-driven updating of such priors help to disambiguate sudden changes of sensory inputs. We discuss how dynamic transformation priors can be instantiated as "transition energies" in an "energy landscape" model of visual perception. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Do the Contents of Visual Working Memory Automatically Influence Attentional Selection During Visual Search?

    OpenAIRE

    Woodman, Geoffrey F.; Luck, Steven J.

    2007-01-01

    In many theories of cognition, researchers propose that working memory and perception operate interactively. For example, in previous studies researchers have suggested that sensory inputs matching the contents of working memory will have an automatic advantage in the competition for processing resources. The authors tested this hypothesis by requiring observers to perform a visual search task while concurrently maintaining object representations in visual working memory. The hypothesis that ...

  16. Visual cognition

    Science.gov (United States)

    Cavanagh, Patrick

    2011-01-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719

  17. Visual cognition.

    Science.gov (United States)

    Cavanagh, Patrick

    2011-07-01

    Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label "visual cognition" is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated part, of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Egocentric and allocentric alignment tasks are affected by otolith input.

    Science.gov (United States)

    Tarnutzer, Alexander A; Bockisch, Christopher J; Olasagasti, Itsaso; Straumann, Dominik

    2012-06-01

    Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, which is most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks and whether the modulation of variability is restricted to vision-dependent paradigms is unknown. In nine subjects we compared precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line (visual paradigm) in darkness. Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) Modulating variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that variability modulation is restricted to vision-dependent alignments. 2) Estimated body-longitudinal reflects the roll-dependent variability of perceived earth-vertical. Gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision significantly decreased with increasing head roll for both tasks. These findings propose that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate

  19. WebStruct and VisualStruct: web interfaces and visualization for Structure software implemented in a cluster environment

    Directory of Open Access Journals (Sweden)

    Jayashree B.

    2008-03-01

    Full Text Available Structure is a widely used software tool to investigate population genetic structure with multi-locus genotyping data. The software uses an iterative algorithm to group individuals into “K” clusters, representing possibly K genetically distinct subpopulations. The serial implementation of this programme is processor-intensive even with small datasets. We describe an implementation of the program within a parallel framework. Speedup was achieved by running different replicates and values of K on each node of the cluster. A web-based user-oriented GUI has been implemented in PHP, through which the user can specify input parameters for the programme. The number of processors to be used can be specified in the background command. A web-based visualization tool “Visualstruct”, written in PHP (HTML with embedded JavaScript), allows for the graphical display of population clusters output from Structure, where each individual may be visualized as a line segment with K colors defining its possible genomic composition with respect to the K genetic sub-populations. The advantage over available programs is in the increased number of individuals that can be visualized. The analyses of real datasets indicate a speedup of up to four, when comparing the speed of execution on clusters of eight processors with the speed of execution on one desktop. The software package is freely available to interested users upon request.

  20. WebStruct and VisualStruct: Web interfaces and visualization for Structure software implemented in a cluster environment.

    Science.gov (United States)

    Jayashree, B; Rajgopal, S; Hoisington, D; Prasanth, V P; Chandra, S

    2008-09-24

    Structure is a widely used software tool to investigate population genetic structure with multi-locus genotyping data. The software uses an iterative algorithm to group individuals into "K" clusters, representing possibly K genetically distinct subpopulations. The serial implementation of this programme is processor-intensive even with small datasets. We describe an implementation of the program within a parallel framework. Speedup was achieved by running different replicates and values of K on each node of the cluster. A web-based user-oriented GUI has been implemented in PHP, through which the user can specify input parameters for the programme. The number of processors to be used can be specified in the background command. A web-based visualization tool "Visualstruct", written in PHP (HTML with embedded JavaScript), allows for the graphical display of population clusters output from Structure, where each individual may be visualized as a line segment with K colors defining its possible genomic composition with respect to the K genetic sub-populations. The advantage over available programs is in the increased number of individuals that can be visualized. The analyses of real datasets indicate a speedup of up to four, when comparing the speed of execution on clusters of eight processors with the speed of execution on one desktop. The software package is freely available to interested users upon request.
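
    The parallelization described here is embarrassingly parallel: each (K, replicate) pair is an independent Structure run. Below is a minimal Python sketch of that job-farming idea. The `structure` binary and its `-K`, `-D` (seed), `-i`, and `-o` flags follow the Structure console documentation, but the exact flags may differ by version; `run_job` here only performs a dry run where a real deployment would call `subprocess.run(cmd, check=True)`.

    ```python
    from itertools import product
    from multiprocessing.dummy import Pool  # thread pool; enough for launching subprocesses

    def make_jobs(k_values, replicates, infile="genotypes.txt"):
        """One command line per (K, replicate) pair -- the unit of parallel work."""
        jobs = []
        for k, rep in product(k_values, range(replicates)):
            jobs.append(["structure", "-K", str(k), "-D", str(1000 + rep),
                         "-i", infile, "-o", f"out_K{k}_rep{rep}"])
        return jobs

    def run_job(cmd):
        # In a real cluster setting this would be subprocess.run(cmd, check=True);
        # here we just return the command string for a dry run.
        return " ".join(cmd)

    if __name__ == "__main__":
        jobs = make_jobs(k_values=range(1, 5), replicates=3)
        with Pool(4) as pool:          # one worker per available processor
            results = pool.map(run_job, jobs)
        print(len(results))            # prints 12: 4 K values x 3 replicates
    ```

    On a real cluster the pool workers would be replaced by scheduler submissions (one node per job), which is what yields the reported near-linear speedup.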

  1. The role of vestibular and support-tactile-proprioceptive inputs in visual-manual tracking

    Science.gov (United States)

    Kornilova, Ludmila; Naumov, Ivan; Glukhikh, Dmitriy; Khabarova, Ekaterina; Pavlova, Aleksandra; Ekimovskiy, Georgiy; Sagalovitch, Viktor; Smirnov, Yuriy; Kozlovskaya, Inesa

    Sensorimotor disorders in weightlessness are caused by changes in the functioning of gravity-dependent systems, first of all the vestibular and support systems. The question arises: what are the role and the specific contribution of support afferentation in the development of the observed disorders? To determine the role and effects of vestibular, support, tactile and proprioceptive afferentation on characteristics of visual-manual tracking (VMT), we conducted a comparative analysis of the data obtained after prolonged spaceflight and in a model of weightlessness, horizontal “dry” immersion. Altogether we examined 16 Russian cosmonauts before and after prolonged spaceflights (129-215 days) and 30 subjects who stayed in an immersion bath for 5-7 days, to evaluate the state of the vestibular function (VF) using videooculography and characteristics of the visual-manual tracking (VMT) using electrooculography and a joystick with biological visual feedback. Evaluation of the VF has shown that both after immersion and after prolonged spaceflight there was a significant decrease of the static torsional otolith-cervical-ocular reflex (OCOR) and a simultaneous significant increase of the dynamic vestibular-cervical-ocular reactions (VCOR), with a negative correlation between parameters of the otolith and canal reactions, as well as significant changes in accuracy of perception of the subjective visual vertical which correlated with changes in OCOR. Analysis of the VMT has shown that significant disorders of visual tracking (VT) occurred from the beginning of the immersion up to days 3-4, while in cosmonauts similar but much more pronounced oculomotor disorders and significant changes from baseline were observed up to day R+9 postflight. Significant changes of manual tracking (MT) were revealed only for gain and occurred on days 1 and 3 of immersion, while after spaceflight such changes were observed up to day R+5 postflight. We found correlation between characteristics

  2. A deep learning / neuroevolution hybrid for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshj

    2017-01-01

    This paper presents a deep learning / neuroevolution hybrid approach called DLNE, which allows FPS bots to learn to aim and shoot based only on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. The results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks trained through evolution.
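
    The DLNE architecture itself is not reproduced here, but the division of labor it describes (a fixed vision front-end producing compact features, with evolution tuning only the small action network on top) can be illustrated with a toy numpy sketch. Everything below is invented for illustration: a random projection stands in for the trained CNN, a hidden target mapping stands in for in-game fitness, and a simple (1+1) evolution strategy stands in for the paper's neuroevolution method.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Stand-in for the deep-learning half: a fixed projection that maps
    # "raw pixels" to a compact feature vector (the real system used a trained CNN).
    W_vision = rng.standard_normal((8, 64)) / 8.0
    def features(pixels):
        return np.tanh(W_vision @ pixels)

    # Toy task: the evolved network should map features to 2 "aim" outputs that
    # match a hidden target mapping (a proxy for in-game fitness).
    W_target = rng.standard_normal((2, 8))
    pixels = rng.standard_normal((64, 100))          # 100 sample frames
    F = features(pixels)                             # 8 x 100 feature batch
    def fitness(W):                                  # negative mean squared error
        return -np.mean((W @ F - W_target @ F) ** 2)

    # (1+1) evolution strategy on the small action network only
    W = np.zeros((2, 8))
    start = fitness(W)
    for _ in range(500):
        cand = W + 0.1 * rng.standard_normal(W.shape)
        if fitness(cand) >= fitness(W):              # keep mutations that help
            W = cand
    print(fitness(W) > start)                        # True: evolution improved the policy
    ```

    The key design point mirrored here is that evolution never touches the high-dimensional vision weights; it searches only the low-dimensional feature-to-action mapping, which keeps the search space tractable.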

  3. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    Science.gov (United States)

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Making Memories: The Development of Long-Term Visual Knowledge in Children with Visual Agnosia

    Directory of Open Access Journals (Sweden)

    Tiziana Metitieri

    2013-01-01

    Full Text Available There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  5. Making memories: the development of long-term visual knowledge in children with visual agnosia.

    Science.gov (United States)

    Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  6. Visualization rhetoric: framing effects in narrative visualization.

    Science.gov (United States)

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  7. Rapid and reversible recruitment of early visual cortex for touch.

    Directory of Open Access Journals (Sweden)

    Lotfi B Merabet

    2008-08-01

    Full Text Available The loss of vision has been associated with enhanced performance in non-visual tasks such as tactile discrimination and sound localization. Current evidence suggests that these functional gains are linked to the recruitment of the occipital visual cortex for non-visual processing, but the neurophysiological mechanisms underlying these crossmodal changes remain uncertain. One possible explanation is that visual deprivation is associated with an unmasking of non-visual input into visual cortex. We investigated the effect of sudden, complete and prolonged visual deprivation (five days) in normally sighted adult individuals while they were immersed in an intensive tactile training program. Following the five-day period, blindfolded subjects performed better on a Braille character discrimination task. In the blindfold group, serial fMRI scans revealed an increase in BOLD signal within the occipital cortex in response to tactile stimulation after five days of complete visual deprivation. This increase in signal was no longer present 24 hours after blindfold removal. Finally, reversible disruption of occipital cortex function on the fifth day (by repetitive transcranial magnetic stimulation; rTMS) impaired Braille character recognition ability in the blindfold group but not in non-blindfolded controls. This disruptive effect was no longer evident once the blindfold had been removed for 24 hours. Overall, our findings suggest that sudden and complete visual deprivation in normally sighted individuals can lead to profound, but rapidly reversible, neuroplastic changes by which the occipital cortex becomes engaged in processing of non-visual information. The speed and dynamic nature of the observed changes suggests that normally inhibited or masked functions in the sighted are revealed by visual loss. The unmasking of pre-existing connections and shifts in connectivity represent rapid, early plastic changes, which presumably can lead, if sustained and

  8. Impact of environmental inputs on reverse-engineering approach to network structures.

    Science.gov (United States)

    Wu, Jianhua; Sinfield, James L; Buchanan-Wollaston, Vicky; Feng, Jianfeng

    2009-12-04

    Uncovering complex network structures from a biological system is one of the main topics in systems biology. The network structures can be inferred by dynamical Bayesian networks or Granger causality, but neither technique has seriously taken into account the impact of environmental inputs. With consideration of the natural rhythmic dynamics of biological data, we propose a systems biology approach to reveal the impact of environmental inputs on network structures. We first represent the environmental inputs by a harmonic oscillator and combine them with Granger causality to identify environmental inputs and then uncover the causal network structures. We also generalize it to multiple harmonic oscillators to represent various exogenous influences. This systems approach is extensively tested with toy models and successfully applied to a real biological network of microarray data of the flowering genes of the model plant Arabidopsis thaliana. The aim is to identify those genes that are directly affected by the presence of sunlight and to uncover the interactive network structures associated with flowering metabolism. We demonstrate that environmental inputs are crucial for correctly inferring network structures. The harmonic causal method proves to be a powerful technique to detect environmental inputs and uncover network structures, especially when the biological data exhibit periodic oscillations.
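
    The core idea, not the authors' exact implementation, can be sketched with plain numpy: augment a standard Granger regression with sine/cosine regressors representing a known environmental rhythm (the harmonic oscillator), then F-test whether lags of a candidate input still improve prediction of the target once the rhythm is accounted for. The lag order, rhythm period, and toy data below are all invented for illustration.

    ```python
    import numpy as np

    def granger_f(y, x, period, p=1):
        """F-statistic: do lags of x improve prediction of y once lags of y and a
        harmonic term (sin/cos at the given period) are accounted for?"""
        n = len(y)
        t = np.arange(p, n)
        # Common regressors: intercept, y lags, harmonic oscillator terms
        base = [np.ones(len(t)),
                *(y[t - l] for l in range(1, p + 1)),
                np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
        full = base + [x[t - l] for l in range(1, p + 1)]
        def rss(cols):
            X = np.column_stack(cols)
            beta, *_ = np.linalg.lstsq(X, y[t], rcond=None)
            r = y[t] - X @ beta
            return float(r @ r)
        rss_r, rss_f = rss(base), rss(full)
        q, k = p, len(full)                   # restrictions tested, full-model params
        return ((rss_r - rss_f) / q) / (rss_f / (len(t) - k))

    # Synthetic system: x drives y, on top of a periodic environmental input
    rng = np.random.default_rng(0)
    n = 400
    x = rng.standard_normal(n)
    season = np.sin(2 * np.pi * np.arange(n) / 24)   # e.g. a daily light cycle
    y = np.empty(n); y[0] = 0.0
    for i in range(1, n):
        y[i] = 0.4 * y[i-1] + 0.8 * x[i-1] + season[i] + 0.3 * rng.standard_normal()
    print(granger_f(y, x, period=24) > 10.0)   # True: strong causal link -> large F
    ```

    Without the sin/cos columns, the shared rhythm would inflate apparent coupling between otherwise unrelated rhythmic series, which is the failure mode the abstract argues against.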

  9. Conceptual Design of GRIG (GUI Based RETRAN Input Generator)

    International Nuclear Information System (INIS)

    Lee, Gyung Jin; Hwang, Su Hyun; Hong, Soon Joon; Lee, Byung Chul; Jang, Chan Su; Um, Kil Sup

    2007-01-01

    For the development of a high performance methodology using an advanced transient analysis code, it is essential to generate the basic input of the transient analysis code under rigorous QA procedures. There are various types of operating NPPs (Nuclear Power Plants) in Korea, such as Westinghouse plants, KSNP (Korea Standard Nuclear Power Plant), APR1400 (Advanced Power Reactor), etc., so it is difficult to systematically generate and manage transient analysis code inputs that reflect the inherent characteristics of each type of NPP. To minimize user faults and manpower investment, and to generate the basic inputs of the transient analysis code effectively and accurately for all domestic NPPs, a program is needed that can automatically generate, from the NPP design material, basic input that can be directly applied to the transient analysis. ViRRE (Visual RETRAN Running Environment), developed by KEPCO (Korea Electric Power Corporation) and KAERI (Korea Atomic Energy Research Institute), provides a convenient working environment for Kori Unit 1/2. ViRRE shows the calculated results through an on-line display, but its capability is limited to the convenient execution of RETRAN, so it cannot be used as an input generator. ViSA (Visual System Analyzer), developed by KAERI, is an NPA (Nuclear Plant Analyzer) using the RETRAN and MARS codes as its thermal-hydraulic engine. ViSA contains both pre-processing and post-processing functions, but in pre-processing only the trip data cards and boundary conditions can be changed through the GUI, based on a pre-prepared text input, so its input generation capability is very limited. SNAP (Symbolic Nuclear Analysis Package), developed by Applied Programming Technology, Inc. and the NRC (Nuclear Regulatory Commission), provides an efficient working environment for the use of nuclear safety analysis codes such as RELAP5 and TRAC-M.
SNAP covers wide aspects of thermal-hydraulic analysis from model creation through data analysis

  10. Time-sharing visual and auditory tracking tasks

    Science.gov (United States)

    Tsang, Pamela S.; Vidulich, Michael A.

    1987-01-01

    An experiment is described which examined the benefits of distributing the input demands of two tracking tasks as a function of task integrality. Visual and auditory compensatory tracking tasks were utilized. Results indicate that presenting the two tracking signals in two input modalities did not improve time-sharing efficiency. This was attributed to the difficulty insensitivity phenomenon.

  11. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    Science.gov (United States)

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Visual updating across saccades by working memory integration

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing, by shifting externally-stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  13. Visual Perceptual Learning and its Specificity and Transfer: A New Perspective

    Directory of Open Access Journals (Sweden)

    Cong Yu

    2011-05-01

    Full Text Available Visual perceptual learning is known to be location and orientation specific, and is thus assumed to reflect the neuronal plasticity in the early visual cortex. However, in recent studies we created “Double training” and “TPE” procedures to demonstrate that these “fundamental” specificities of perceptual learning are in some sense artifacts and that learning can completely transfer to a new location or orientation. We proposed a rule-based learning theory to reinterpret perceptual learning and its specificity and transfer: A high-level decision unit learns the rules of performing a visual task through training. However, the learned rules cannot be applied to a new location or orientation automatically because the decision unit cannot functionally connect to new visual inputs with sufficient strength because these inputs are unattended or even suppressed during training. It is double training and TPE training that reactivate these new inputs, so that the functional connections can be strengthened to enable rule application and learning transfer. Currently we are investigating the properties of perceptual learning free from the bogus specificities, and the results provide some preliminary but very interesting insights into how training reshapes the functional connections between the high-level decision units and sensory inputs in the brain.

  14. Representing Scientific Communities by Data Visualization (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    These lectures present research that investigates the representation of communities, and the way to foster their understanding by different audiences. Communities are complex multidimensional entities intrinsically difficult to represent synthetically. The way to represent them is likely to differ depending on the audience considered: governing entities trying to make decisions for the future of the community, the general public trying to understand the nature of the community, and the members of the community themselves. This work considers two types of communities as examples: a scientific organization and an arising domain: the EPFL institutional community composed of faculty members and researchers and, at a world wide level, the emerging community of Digital Humanities researchers. For both cases, the research is organised as a process going from graphical research to actual materialization as physical artefacts (posters, maps, etc.), possibly extended using digital devices (augmented reality applications). T...

  15. The rapid distraction of attentional resources toward the source of incongruent stimulus input during multisensory conflict.

    Science.gov (United States)

    Donohue, Sarah E; Todisco, Alexandra E; Woldorff, Marty G

    2013-04-01

    Neuroimaging work on multisensory conflict suggests that the relevant modality receives enhanced processing in the face of incongruency. However, the degree of stimulus processing in the irrelevant modality and the temporal cascade of the attentional modulations in either the relevant or irrelevant modalities are unknown. Here, we employed an audiovisual conflict paradigm with a sensory probe in the task-irrelevant modality (vision) to gauge the attentional allocation to that modality. ERPs were recorded as participants attended to and discriminated spoken auditory letters while ignoring simultaneous bilateral visual letter stimuli that were either fully congruent, fully incongruent, or partially incongruent (one side incongruent, one congruent) with the auditory stimulation. Half of the audiovisual letter stimuli were followed 500-700 msec later by a bilateral visual probe stimulus. As expected, ERPs to the audiovisual stimuli showed an incongruency ERP effect (fully incongruent versus fully congruent) of an enhanced, centrally distributed, negative-polarity wave starting ∼250 msec. More critically here, the sensory ERP components to the visual probes were larger when they followed fully incongruent versus fully congruent multisensory stimuli, with these enhancements greatest on fully incongruent trials with the slowest RTs. In addition, on the slowest-response partially incongruent trials, the P2 sensory component to the visual probes was larger contralateral to the preceding incongruent visual stimulus. These data suggest that, in response to conflicting multisensory stimulus input, the initial cognitive effect is a capture of attention by the incongruent irrelevant-modality input, pulling neural processing resources toward that modality, resulting in rapid enhancement, rather than rapid suppression, of that input.

  16. Slow changing postural cues cancel visual field dependence on self-tilt detection.

    Science.gov (United States)

    Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L

    2015-01-01

    Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05°/s) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. A new chance-constrained DEA model with birandom input and output data

    OpenAIRE

    Tavana, M.; Shiraz, R. K.; Hatami-Marbini, A.

    2013-01-01

    The purpose of conventional Data Envelopment Analysis (DEA) is to evaluate the performance of a set of firms or Decision-Making Units using deterministic input and output data. However, the input and output data in the real-life performance evaluation problems are often stochastic. The stochastic input and output data in DEA can be represented with random variables. Several methods have been proposed to deal with the random input and output data in DEA. In this paper, we propose a new chance-...
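
    This record is cut off before the proposed model, but the deterministic baseline it generalizes is well known: the input-oriented CCR envelopment model, a linear program solved once per decision-making unit. The sketch below uses SciPy's `linprog` (assumed available) with made-up single-input, single-output data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0.
        min theta  s.t.  X @ lam <= theta * X[:, j0],  Y @ lam >= Y[:, j0],  lam >= 0
        X: inputs (m x n), Y: outputs (s x n) for n decision-making units."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]                   # variables: [theta, lam_1..lam_n]
        A_in  = np.c_[-X[:, [j0]], X]                 # X lam - theta * x0 <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y]           # y0 - Y lam <= 0
        res = linprog(c, A_ub=np.r_[A_in, A_out],
                      b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.x[0]

    X = np.array([[2.0, 4.0, 8.0]])   # one input, three DMUs
    Y = np.array([[2.0, 2.0, 2.0]])   # one output
    print([round(ccr_efficiency(X, Y, j), 2) for j in range(3)])  # [1.0, 0.5, 0.25]
    ```

    Chance-constrained and birandom variants replace the deterministic inequality constraints with probabilistic ones (requiring each constraint to hold with a given confidence level), which is the direction this paper takes.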

  18. The Effects of Type and Quantity of Input on Iranian EFL Learners’ Oral Language Proficiency

    Directory of Open Access Journals (Sweden)

    Zahra Hassanzadeh

    2014-05-01

    Full Text Available In the literature on foreign language learning, a number of studies have stressed the role of learning context and learning opportunities in learners’ language input. The present study had two main goals: on the one hand, to examine the different types of input to which Iranian grade four high school EFL learners are exposed; on the other hand, to investigate the possible relationship between the types and quantity of input and Iranian EFL learners’ oral proficiency. It was hypothesized that EFL learners who have access to more input would show better oral proficiency than those who do not. Instruments used in the present study for data collection included a PET test, a researcher-made questionnaire, an oral language proficiency test and a face-to-face interview. Data were gathered from 50 Iranian female grade four high school foreign language learners, selected from among 120 students whose scores on the PET test were within +1 SD of the mean. The results of the Spearman rank-order correlation test for the types of input and oral language proficiency scores showed that the participants’ oral proficiency scores significantly correlated with the four intended sources of input: spoken (rho = 0.416, sig = 0.003), written (rho = 0.364, sig = 0.009), aural (rho = 0.343, sig = 0.015) and visual or audio-visual (rho = 0.47, sig = 0.00). The Spearman rank-order correlation test for the quantity of input and oral language proficiency scores also showed a significant relationship (rho = 0.543, sig = 0.00). The findings showed that EFL learners’ oral proficiency is significantly correlated with efficient and effective input. They may also suggest an answer to the question of why most Iranian English learners fail to speak English fluently, namely a lack of effective input. This may emphasize the importance of the types and quantity of
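
    The Spearman rank-order statistic reported above is simply the Pearson correlation of the two rank vectors. A minimal numpy sketch with invented toy data (and no tie correction, unlike full statistical packages):

    ```python
    import numpy as np

    def spearman_rho(a, b):
        """Spearman rank correlation: Pearson correlation of the rank vectors.
        (Simplified: assumes no tied values.)"""
        a, b = np.asarray(a, float), np.asarray(b, float)
        ra = a.argsort().argsort()   # rank of each element (0-based; offset cancels)
        rb = b.argsort().argsort()
        return float(np.corrcoef(ra, rb)[0, 1])

    # Toy data: oral proficiency rises monotonically with exposure to input
    exposure    = [3, 1, 4, 2, 5]
    proficiency = [30, 12, 41, 25, 58]
    print(round(spearman_rho(exposure, proficiency), 3))   # 1.0: perfectly monotone link
    ```

    Because only ranks enter the computation, the statistic suits ordinal questionnaire data like the input-quantity scores used in this study.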

  19. The neural circuitry of visual artistic production and appreciation: A proposition

    Directory of Open Access Journals (Sweden)

    Ambar Chakravarty

    2012-01-01

    Full Text Available The nondominant inferior parietal lobule is probably a major "store house" of artistic creativity. The ventromedial prefrontal lobe (VMPFL) is supposed to be involved in creative cognition and the dorsolateral prefrontal lobe (DLPFL) in creative output. The conceptual ventral and dorsal visual system pathways likely represent the inferior and superior longitudinal fasciculi. During artistic production, conceptualization is conceived in the VMPFL and the executive part is operated through the DLPFL. The latter transfers the concept to the visual brain through the superior longitudinal fasciculus (SLF), relaying on its path to the parietal cortex. The conceptualization at VMPFL is influenced by activity from the anterior temporal lobe through the uncinate fasciculus and limbic system pathways. The final visual image formed in the visual brain is subsequently transferred back to the DLPFL through the SLF and then handed over to the motor cortex for execution. During art appreciation, the image at the visual brain is transferred to the frontal lobe through the SLF and there it is matched with emotional and memory inputs from the anterior temporal lobe transmitted through the uncinate fasciculus. Beauty is perceived at the VMPFL and transferred through the uncinate fasciculus to the hippocampo-amygdaloid complex in the anterior temporal lobe. The limbic system (Papez circuit) is activated and the emotion of appreciation is evoked. It is postulated that in practice the entire circuitry is activated simultaneously.

  20. The neural circuitry of visual artistic production and appreciation: A proposition.

    Science.gov (United States)

    Chakravarty, Ambar

    2012-04-01

    The nondominant inferior parietal lobule is probably a major "store house" of artistic creativity. The ventromedial prefrontal lobe (VMPFL) is supposed to be involved in creative cognition and the dorsolateral prefrontal lobe (DLPFL) in creative output. The conceptual ventral and dorsal visual system pathways likely represent the inferior and superior longitudinal fasciculi. During artistic production, conceptualization is conceived in the VMPFL and the executive part is operated through the DLPFL. The latter transfers the concept to the visual brain through the superior longitudinal fasciculus (SLF), relaying on its path to the parietal cortex. The conceptualization at VMPFL is influenced by activity from the anterior temporal lobe through the uncinate fasciculus and limbic system pathways. The final visual image formed in the visual brain is subsequently transferred back to the DLPFL through the SLF and then handed over to the motor cortex for execution. During art appreciation, the image at the visual brain is transferred to the frontal lobe through the SLF and there it is matched with emotional and memory inputs from the anterior temporal lobe transmitted through the uncinate fasciculus. Beauty is perceived at the VMPFL and transferred through the uncinate fasciculus to the hippocampo-amygdaloid complex in the anterior temporal lobe. The limbic system (Papez circuit) is activated and emotion of appreciation is evoked. It is postulated that in practice the entire circuitry is activated simultaneously.

  1. GRAVE: An Interactive Geometry Construction and Visualization Software System for the TORT Nuclear Radiation Transport Code

    International Nuclear Information System (INIS)

    Blakeman, E.D.

    2000-01-01

    A software system, GRAVE (Geometry Rendering and Visual Editor), has been developed at the Oak Ridge National Laboratory (ORNL) to perform interactive visualization and development of models used as input to the TORT three-dimensional discrete ordinates radiation transport code. Three-dimensional and two-dimensional visualization displays are included. Display capabilities include image rotation, zoom, translation, wire-frame and translucent display, geometry cuts and slices, and display of individual component bodies and material zones. The geometry can be interactively edited and saved in TORT input file format. This system is an advancement over the current, non-interactive, two-dimensional display software. GRAVE is programmed in the Java programming language and can be implemented on a variety of computer platforms. Three-dimensional visualization is enabled through the Visualization Toolkit (VTK), a freeware C++ software library developed for geometric and data visual display. Future plans include an extension of the system to read inputs using binary zone maps and combinatorial geometry models containing curved surfaces, such as those used for Monte Carlo code inputs. GRAVE will also be extended to geometry visualization/editing for the DORT two-dimensional transport code and will be integrated into a single GUI-based system for all of the ORNL discrete ordinates transport codes.

  2. V-MitoSNP: visualization of human mitochondrial SNPs

    Directory of Open Access Journals (Sweden)

    Tsui Ke-Hung

    2006-08-01

    Full Text Available Abstract Background Mitochondrial single nucleotide polymorphisms (mtSNPs) constitute important data when trying to shed some light on human diseases and cancers. Unfortunately, providing relevant mtSNP genotyping information in mtDNA databases in a neatly organized and transparent visual manner still remains a challenge. Amongst the many methods reported for SNP genotyping, determining restriction fragment length polymorphisms (RFLPs) is still one of the most convenient and cost-saving methods. In this study, we prepared a visualization of the mtDNA genome that integrates the RFLP genotyping information with mitochondria-related cancers and diseases in a user-friendly, intuitive and interactive manner. The inherent problem associated with mtDNA sequences in BLAST of the NCBI database was also solved. Description: V-MitoSNP provides complete mtSNP information for four different kinds of input: (1) color-coded visual input by selecting genes of interest on the genome graph, (2) keyword search by locus, disease and mtSNP rs# ID, (3) visualized input of a nucleotide range by clicking the selected region of the mtDNA sequence, and (4) sequence input for mtBLAST. The V-MitoSNP output provides 500 bp (base pairs) of flanking sequence for each SNP coupled with the RFLP enzyme and the corresponding natural or mismatched primer sets. The output format enables users to see the SNP genotype pattern of the RFLP by virtual electrophoresis of each mtSNP. The rate of successful design of enzymes and primers for RFLPs in all mtSNPs was 99.1%. The RFLP information was validated by actual agarose electrophoresis and showed successful results for all mtSNPs tested. The mtBLAST function in V-MitoSNP provides the gene information within the input sequence rather than providing the complete mitochondrial chromosome as in the NCBI BLAST database. All mtSNPs with rs number entries in NCBI are integrated in the corresponding SNP in V-MitoSNP. 
Conclusion: V-MitoSNP is a web

  3. Music alters visual perception.

    Directory of Open Access Journals (Sweden)

    Jacob Jolij

    Full Text Available BACKGROUND: Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. The perception of emotional stimuli, especially, is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world, but also on how we feel. In this study, we further investigated the relation between mood and perception. METHODS AND FINDINGS: We had observers perform a difficult stimulus detection task, in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood. CONCLUSIONS: As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.

  4. Exploring the potential of neurophysiological measures for user-adaptive visualization

    OpenAIRE

    Tak, S.; Brouwer, A.M.; Toet, A.; Erp, J.B.F. van

    2013-01-01

    User-adaptive visualization aims to adapt visualized information to the needs and characteristics of the individual user. Current approaches deploy user personality factors, user behavior and preferences, and visual scanning behavior to achieve this goal. We argue that neurophysiological data provide valuable additional input for user-adaptive visualization systems since they contain a wealth of objective information about user characteristics. The combination of neurophysiological data with ...

  5. Neural pathways for visual speech perception

    Directory of Open Access Journals (Sweden)

    Lynne E Bernstein

    2014-12-01

    Full Text Available This paper examines the questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

  6. Input data for inferring species distributions in Kyphosidae world-wide

    Directory of Open Access Journals (Sweden)

    Steen Wilhelm Knudsen

    2016-09-01

    Full Text Available Input data files for inferring the relationships within the family Kyphosidae, as presented in Knudsen and Clements (2016) [1], are here provided together with the resulting topologies, to allow the reader to explore the topologies in detail. The input data files comprise seven nexus files with sequence alignments of mtDNA and nDNA markers for performing Bayesian analysis. A matrix of recoded character states inferred from the morphology examined in museum specimens representing Dichistiidae, Girellidae, Kyphosidae, Microcanthidae and Scorpididae is also provided, and can be used for performing a parsimony analysis to infer the relationships among these perciform families. The nucleotide input data files comprise both multiple and single representatives of the various species to allow for inference of the relationships among the species in Kyphosidae and between the families closely related to Kyphosidae. The ‘.xml’ files with various constrained relationships among the families potentially closely related to Kyphosidae are also provided to allow the reader to rerun and explore the results from the stepping-stone analysis. The resulting topologies are supplied in newick file format together with the input data files for Bayesian analysis and the ‘.xml’ files. Re-running the input data files in the appropriate software will enable the reader to examine the log files and tree files themselves. Keywords: Sea chub, Drummer, Kyphosus, Scorpis, Girella

  7. FLUTAN 2.0. Input specifications

    International Nuclear Information System (INIS)

    Willerding, G.; Baumann, W.

    1996-05-01

    FLUTAN is a highly vectorized computer code for 3D fluid-dynamic and thermal-hydraulic analyses in Cartesian or cylindrical coordinates. It is related to the family of COMMIX codes originally developed at Argonne National Laboratory, USA, and particularly to COMMIX-1A and COMMIX-1B, which were made available to FZK in the frame of cooperation contracts within the fast reactor safety field. FLUTAN 2.0 is an improved version of the FLUTAN code released in 1992. It offers some additional innovations, e.g. the QUICK-LECUSSO-FRAM techniques for reducing numerical diffusion in the k-ε turbulence model equations; a more sophisticated wall model for specifying a mass flow outside the surface walls together with its flow path and its associated inlet and outlet flow temperatures; and a revised and upgraded pressure boundary condition to fully include the outlet cells in the solution process of the conservation equations. Last but not least, a so-called visualization option based on VISART standards has been provided. This report contains detailed input instructions, presents formulations of the various model options, and explains how to use the code by means of comprehensive sample input. (orig.)

  8. Modelling the shape hierarchy for visually guided grasping

    Directory of Open Access Journals (Sweden)

    Omid eRezai

    2014-10-01

    Full Text Available The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modelled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e. distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. However (in contrast with superquadrics), further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
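The Isomap dimension reduction mentioned above has a standard three-step recipe: build a k-nearest-neighbor graph, compute geodesic (shortest-path) distances on it, and embed them with classical MDS. A generic sketch of that recipe, with assumed toy data rather than the paper's depth-derivative inputs:

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=2):
    """Minimal Isomap: kNN graph -> geodesic distances -> classical MDS."""
    n = len(X)
    # pairwise Euclidean distances
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # keep only k-nearest-neighbor edges, symmetrized
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    # geodesic distances via Floyd-Warshall
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # classical MDS on the geodesic distance matrix
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

For points sampled along a curve, the embedding distances approximate arc length rather than straight-line distance, which is why Isomap parameters vary smoothly between similar shapes.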

  9. Hydrogen Generation Rate Model Calculation Input Data

    International Nuclear Information System (INIS)

    KUFAHL, M.A.

    2000-01-01

    This report documents the procedures and techniques utilized in the collection and analysis of analyte input data values in support of the flammable gas hazard safety analyses. This document represents the analyses of data current at the time of its writing and does not account for data available since then.

  10. Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.

    Science.gov (United States)

    Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul

    2018-01-08

    Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Enhanced learning of natural visual sequences in newborn chicks.

    Science.gov (United States)

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  12. Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory

    Science.gov (United States)

    Coco, Moreno I.; Keller, Frank; Malcolm, George L.

    2016-01-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…

  13. INPUT-OUTPUT STRUCTURE OF LINEAR-DIFFERENTIAL ALGEBRAIC SYSTEMS

    NARCIS (Netherlands)

    KUIJPER, M; SCHUMACHER, JM

    Systems of linear differential and algebraic equations occur in various ways, for instance, as a result of automated modeling procedures and in problems involving algebraic constraints, such as zero dynamics and exact model matching. Differential/algebraic systems may represent an input-output

  14. The Application of Visual Illusion in the Visual Communication Design

    Science.gov (United States)

    Xin, Tao; You Ye, Han

    2018-03-01

    With the development of our national reform, opening up and modernization, science and technology have also developed rapidly and have been applied in every walk of life; the development of the visual illusion industry is reflected in the widespread use of advanced technology within it. Ultimately, visual illusion is a phenomenon that should be analyzed from the angles of physics and philosophy. The widespread application of visual illusion not only improves picture quality, but can also maximize people’s sensory engagement with visual communication design works, expand people’s horizons and promote the diversity of visual communication design works.

  15. Development of Visual CINDER Code with Visual C#.NET

    International Nuclear Information System (INIS)

    Kim, Oyeon

    2016-01-01

    The CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community as well (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is just frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionality for radiation shielding and prevention studies.

  16. Development of Visual CINDER Code with Visual C#.NET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Oyeon [Institute for Modeling and Simulation Convergence, Daegu (Korea, Republic of)

    2016-10-15

    The CINDER code, CINDER'90 or CINDER2008, which is integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community as well (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is just frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies in steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also showed that the wrapper could make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionality for radiation shielding and prevention studies.

  17. Simulation of a Multidimensional Input Quantum Perceptron

    Science.gov (United States)

    Yamamoto, Alexandre Y.; Sundqvist, Kyle M.; Li, Peng; Harris, H. Rusty

    2018-06-01

    In this work, we demonstrate the improved data separation capabilities of the Multidimensional Input Quantum Perceptron (MDIQP), a fundamental cell for the construction of more complex Quantum Artificial Neural Networks (QANNs). This is done by using input controlled alterations of ancillary qubits in combination with phase estimation and learning algorithms. The MDIQP is capable of processing quantum information and classifying multidimensional data that may not be linearly separable, extending the capabilities of the classical perceptron. With this powerful component, we get much closer to the achievement of a feedforward multilayer QANN, which would be able to represent and classify arbitrary sets of data (both quantum and classical).

  18. Communication between mother and her visually impaired child

    OpenAIRE

    Kolarič, Mojca

    2016-01-01

    Communication between mother and child has a significant impact on the development of children's language skills. Blindness or visual impairment limits access to information from the environment, which may have a negative impact on the development of communication between mother and child. In my master's thesis I focused on how communication between mother and child changes due to visual impairment. In the theoretical part I introduced the importance of visual input in communication and...

  19. Generating descriptive visual words and visual phrases for large-scale image applications.

    Science.gov (United States)

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    The Bag-of-visual-Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
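The core of the visual-phrase idea, mining word pairs that co-occur across many images, can be sketched as a single counting pass over quantized images. The visual-word IDs and the support threshold below are illustrative assumptions, not the paper's exact selection criterion:

```python
from collections import Counter
from itertools import combinations

def mine_visual_phrases(images, min_support=2):
    """images: list of lists of visual-word IDs (one list per image).
    Returns unordered word pairs co-occurring in >= min_support images."""
    pair_counts = Counter()
    for words in images:
        # each unordered pair of distinct visual words present in the image
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_support}

# hypothetical quantized images: each inner list is one image's visual words
images = [[1, 2, 3], [1, 2, 4], [2, 3, 1]]
phrases = mine_visual_phrases(images, min_support=2)
```

Pairs that survive the support threshold play the role of DVP candidates; rare pairs are discarded as noise.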

  20. Self-Structured Organizing Single-Input CMAC Control for Robot Manipulator

    Directory of Open Access Journals (Sweden)

    ThanhQuyen Ngo

    2011-09-01

    Full Text Available This paper presents a self-structured organizing single-input control system based on a differentiable cerebellar model articulation controller (CMAC) for an n-link robot manipulator to achieve high-precision position tracking. In the proposed scheme, the single-input CMAC controller is solely used to control the plant, so the input space dimension of the CMAC can be simplified and no conventional controller is needed. The structure of the single-input CMAC will also be self-organized; that is, the layers of the single-input CMAC will grow or prune systematically and their receptive functions can be automatically adjusted. The online tuning laws of the single-input CMAC parameters are derived using the gradient-descent learning method, and a discrete-type Lyapunov function is applied to determine the learning rates of the proposed control system so that the stability of the system can be guaranteed. Simulation results for a robot manipulator are provided to verify the effectiveness of the proposed control methodology.
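The gradient-descent tuning of receptive-field weights that underlies a CMAC-style controller can be sketched on a toy function-approximation problem. The Gaussian receptive fields, their spacing, the learning rate, and the sine target below are all illustrative assumptions, not the paper's configuration:

```python
import math
import random

# Gaussian receptive fields tiling the input range [0, 1]
CENTERS = [i / 10 for i in range(11)]
WIDTH = 0.1

def phi(x):
    """Activations of all receptive fields at input x."""
    return [math.exp(-((x - c) / WIDTH) ** 2) for c in CENTERS]

def predict(w, x):
    """Network output: weighted sum of receptive-field activations."""
    return sum(wi * pi for wi, pi in zip(w, phi(x)))

def train(target, epochs=3000, eta=0.2, seed=0):
    """Online gradient descent (LMS rule): each weight moves in
    proportion to its receptive field's activation at the sample x."""
    rng = random.Random(seed)
    w = [0.0] * len(CENTERS)
    for _ in range(epochs):
        x = rng.random()
        err = target(x) - predict(w, x)
        w = [wi + eta * err * pi for wi, pi in zip(w, phi(x))]
    return w
```

Because each input activates only a few overlapping fields, the update is local, which is the property that makes CMAC-style learning fast; the self-organizing growth and pruning of layers described in the abstract is not modeled here.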

  1. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    Science.gov (United States)

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed when both performing skilled movements, as well as understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor in core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin

    2012-12-01

    The well-known power wall resulting in multi-cores requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed, for example, in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated in fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.

  3. WORM: A general-purpose input deck specification language

    International Nuclear Information System (INIS)

    Jones, T.

    1999-01-01

    Using computer codes to perform criticality safety calculations has become common practice in the industry. The vast majority of these codes use simple text-based input decks to represent the geometry, materials, and other parameters that describe the problem. However, the data specified in input files are usually processed results themselves. For example, input decks tend to require the geometry specification in linear dimensions and materials in atom or weight fractions, while the parameter of interest might be mass or concentration. The calculations needed to convert from the item of interest to the required parameter in the input deck are usually performed separately and then incorporated into the input deck. This process of calculating, editing, and renaming files to perform a simple parameter study is tedious at best. In addition, most computer codes require dimensions to be specified in centimeters, while drawings or other materials used to create the input decks might be in other units. This also requires additional calculation or conversion prior to composition of the input deck. These additional calculations, while extremely simple, introduce a source for error in both the calculations and transcriptions. To overcome these difficulties, WORM (Write One, Run Many) was created. It is an easy-to-use programming language to describe input decks and can be used with any computer code that uses standard text files for input. WORM is available, via the Internet, at worm.lanl.gov. A user's guide, tutorials, example models, and other WORM-related materials are also available at this Web site. Questions regarding WORM should be directed to worm@lanl.gov.
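
    The workflow the abstract describes (compute the quantity of interest, convert units, write a deck, repeat for each parameter value) can be sketched generically. The deck format, density value, and names below are illustrative placeholders, not WORM syntax:

```python
import math
from string import Template

# Hypothetical one-line deck format; most criticality codes want the radius in cm.
DECK = Template("sphere 1 $radius  ! radius in cm\nmaterial 1 metal\n")

def radius_cm_from_mass(mass_g, density_g_cm3=18.7):
    """Convert the quantity of interest (sphere mass in grams) into the parameter
    the deck actually requires (radius in cm); the density is illustrative only."""
    volume_cm3 = mass_g / density_g_cm3
    return (3.0 * volume_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

# "Write one, run many": generate one deck per point of the parameter study.
decks = {}
for mass_g in (1000.0, 2000.0, 4000.0):
    radius = radius_cm_from_mass(mass_g)
    decks[f"sphere_{int(mass_g)}g.inp"] = DECK.substitute(radius=f"{radius:.4f}")
```

    Automating the conversion in one place removes the transcription errors the abstract warns about, since the hand calculation is never copied between files.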

  4. We have yet to see the "visual argument"

    OpenAIRE

    Popa, O.E.

    2016-01-01

    In this paper, I defend two skeptical claims regarding current research on visual arguments and I explain how these claims reflect upon past and future research. The first claim is that qualifying an argument as being visual amounts to a category mistake; the second claim is that past analyses of visual arguments fail at both ends of the “production line”, in that the input is not visual and the output is not an argument. Based on the developed critique, I discuss how the study of images in co...

  5. 2011 IEEE Visualization Contest Winner: Visualizing Unsteady Vortical Behavior of a Centrifugal Pump

    KAUST Repository

    Otto, Mathias; Kuhn, Alexander; Engelke, Wito; Theisel, Holger

    2012-01-01

    In the 2011 IEEE Visualization Contest, the dataset represented a high-resolution simulation of a centrifugal pump operating below optimal speed. The goal was to find suitable visualization techniques to identify regions of rotating stall.

  6. Quality of early parent input predicts child vocabulary 3 years later.

    Science.gov (United States)

    Cartmill, Erica A; Armstrong, Benjamin F; Gleitman, Lila R; Goldin-Meadow, Susan; Medina, Tamara N; Trueswell, John C

    2013-07-09

    Children vary greatly in the number of words they know when they enter school, a major factor influencing subsequent school and workplace success. This variability is partially explained by the differential quantity of parental speech to preschoolers. However, the contexts in which young learners hear new words are also likely to vary in referential transparency; that is, in how clearly word meaning can be inferred from the immediate extralinguistic context, an aspect of input quality. To examine this aspect, we asked 218 adult participants to guess 50 parents' words from (muted) videos of their interactions with their 14- to 18-mo-old children. We found systematic differences in how easily individual parents' words could be identified purely from this socio-visual context. Differences in this kind of input quality correlated with the size of the children's vocabulary 3 y later, even after controlling for differences in input quantity. Although input quantity differed as a function of socioeconomic status, input quality (as here measured) did not, suggesting that the quality of nonverbal cues to word meaning that parents offer to their children is an individual matter, widely distributed across the population of parents.

  7. STATIC AND DYNAMIC POSTURE CONTROL IN POSTLINGUAL COCHLEAR IMPLANTED PATIENTS: Effects of dual-tasking, visual and auditory inputs suppression

    Directory of Open Access Journals (Sweden)

    BERNARD DEMANZE Laurence

    2014-01-01

    Posture control is based on central integration of multisensory inputs, and on internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of the body's position, which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most daily-life conditions, leading therefore to competition for attentional resources in dual-tasking, and increased risk of falls, particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of post-lingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static and dynamic conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the cochlear implant activated (ON) or not (OFF). Results showed that the CI patients had significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on the trunk, while the controls showed a whole-body rigidification strategy. Hearing (prosthesis ON) as well as dual-tasking did not really improve the dynamic postural performance of the CI patients. We conclude that CI patients become strongly visually dependent, mainly in challenging postural conditions.

  8. Visual Education

    DEFF Research Database (Denmark)

    Buhl, Mie; Flensborg, Ingelise

    2010-01-01

    The intrinsic breadth of various types of images creates new possibilities and challenges for visual education. The digital media have moved the boundaries between images and other kinds of modalities (e.g. writing, speech and sound) and have augmented the possibilities for integrating the functi... ...to emerge in the interlocutory space of a global visual repertoire and diverse local interpretations. The two perspectives represent challenges for future visual education which require visual competences, not only within the arts but also within the subjects of natural sciences, social sciences, languages...

  9. Standing postural reaction to visual and proprioceptive stimulation in chronic acquired demyelinating polyneuropathy.

    Science.gov (United States)

    Provost, Clement P; Tasseel-Ponche, Sophie; Lozeron, Pierre; Piccinini, Giulia; Quintaine, Victorine; Arnulf, Bertrand; Kubis, Nathalie; Yelnik, Alain P

    2018-02-28

    To investigate the weight of visual and proprioceptive inputs, measured indirectly in standing position control, in patients with chronic acquired demyelinating polyneuropathy (CADP). Prospective case study. Twenty-five patients with CADP and 25 healthy controls. Posture was recorded on a double force platform. Stimulations were optokinetic (60°/s) for visual input and vibration (50 Hz) for proprioceptive input. Visual stimulation involved 4 tests (upward, downward, rightward and leftward) and proprioceptive stimulation 2 tests (triceps surae and tibialis anterior). A composite score, previously published and slightly modified, was used for the recorded postural signals from the different stimulations. Despite their sensitivity deficits, patients with CADP were more sensitive to proprioceptive stimuli than were healthy controls (mean composite score 13.9 (standard deviation (SD) 4.8) vs 18.4 (SD 4.8), p = 0.002). As expected, they were also more sensitive to visual stimuli (mean composite score 10.5 (SD 8.7) vs 22.9 (SD 7.5), p < 0.0001). These results encourage balance rehabilitation of patients with CADP, aimed at promoting the use of proprioceptive information, thereby reducing too-early development of visual compensation while proprioception is still available.

  10. The effect of early visual deprivation on the neural bases of multisensory processing

    OpenAIRE

    Guerreiro, Maria J. S.; Putzar, Lisa; Röder, Brigitte

    2015-01-01

    Animal studies have shown that congenital visual deprivation reduces the ability of neurons to integrate cross-modal inputs. Guerreiro et al. reveal that human patients who suffer transient congenital visual deprivation because of cataracts lack multisensory integration in auditory and multisensory areas as adults, and suppress visual processing during audio-visual stimulation.

  11. Decoding visual object categories from temporal correlations of ECoG signals.

    Science.gov (United States)

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
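
    The feature construction the study describes — pairwise temporal correlations between electrodes as decoder input — can be sketched on synthetic data. Everything below, including the toy two-electrode signal generator and the nearest-centroid decoder, is an illustrative stand-in, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_features(trial):
    """Upper-triangular pairwise correlations of a (n_electrodes, n_samples) trial."""
    c = np.corrcoef(trial)
    return c[np.triu_indices_from(c, k=1)]

def make_trial(phase):
    """Two noisy 10 Hz channels with identical power; only their relative
    timing (phase) differs between the two categories."""
    t = np.linspace(0.0, 1.0, 200)
    e1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
    e2 = np.sin(2 * np.pi * 10 * t + phase) + 0.3 * rng.standard_normal(t.size)
    return np.stack([e1, e2])

X = np.array([correlation_features(make_trial(p))
              for p in [0.0] * 20 + [np.pi] * 20])
y = np.array([0] * 20 + [1] * 20)

# Trivial nearest-centroid decoder on the correlation features.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.abs(X[:, None, :] - centroids[None]).sum(-1).argmin(axis=1)
accuracy = (pred == y).mean()
```

    Because the two categories differ only in relative timing, a power-based feature would be blind to them here, which is the intuition behind the paper's comparison.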

  12. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    Science.gov (United States)

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  13. Priming and the guidance by visual and categorical templates in visual search

    Directory of Open Access Journals (Sweden)

    Anna Wilschut

    2014-02-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity towards the target feature, i.e. the extent to which observers searched selectively among items of the cued versus uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  14. Priming and the guidance by visual and categorical templates in visual search.

    Science.gov (United States)

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  15. Network Visualization Project (NVP)

    Science.gov (United States)

    2016-07-01

    NVP is an interface supporting improved network analysis and network communication visualization. NVP consists of 2 parts: a back-end that provides data in JavaScript Object Notation (JSON) format, and a front-end application that takes this JSON as input. This interaction of the user with the back-end

  16. S3-1: The Serial Dependence of Visual Perception

    Directory of Open Access Journals (Sweden)

    David Whitney

    2012-10-01

    In our moment-to-moment perceptual experience, visual scenes can change, but objects rarely spontaneously come into or out of existence. The visual system may therefore delicately balance the need to optimize sensitivity to image changes (e.g., by adapting to changes in color, orientation, object identity, etc.) with the desire to represent the temporal continuity of objects—the likelihood that objects perceived at this moment tend to exist in subsequent moments. One way that the visual system may promote such stability is through the introduction of serial dependence to visual perception: by biasing the current percept toward what was seen at previous moments, the brain could compensate for variability in visual input that might otherwise disrupt perceptual continuity. Here, in two sets of experiments, we tested for serial dependence in visual perception of orientation and facial expression. We found that on a given trial, a subject's perception of the orientation of a grating reflected not only the currently viewed stimulus, but also a systematic attraction toward the orientations of the previously viewed stimuli. We found the same serial dependence in the perception of facial expression. This perceptual attraction extended over several trials and seconds, and displayed clear tuning to the difference (in orientation or facial expression) between the sequential stimuli. Furthermore, serial dependence in object perception was spatially specific and selective to the attended object within a scene. Several control experiments showed that the perceptual serial dependence we report cannot be explained by effects of priming, known hysteresis effects, visual short-term memory, or expectation. Our results reveal a systematic influence of recent visual experiences on perception at any given moment: visual percepts, even of unambiguous stimuli, are attracted toward what was previously seen. We propose that such serial dependence helps to maintain

  17. Anatomical Inputs From the Sensory and Value Structures to the Tail of the Rat Striatum

    Directory of Open Access Journals (Sweden)

    Haiyan Jiang

    2018-05-01

    The caudal region of the rodent striatum, called the tail of the striatum (TS), is a relatively small area but might have a distinct function from other striatal subregions. Recent primate studies showed that this part of the striatum has a unique function in encoding long-term value memory of visual objects for habitual behavior. This function might be due to its specific connectivity. We identified inputs to the rat TS and compared those with inputs to the dorsomedial striatum (DMS) in the same animals. The TS directly received anatomical inputs from both sensory structures and value-coding regions, but the DMS did not. First, inputs from the sensory cortex and sensory thalamus to the TS were found; visual, auditory, somatosensory and gustatory cortex and thalamus projected to the TS but not to the DMS. Second, two value systems innervated the TS; dopamine and serotonin neurons in the lateral part of the substantia nigra pars compacta (SNc) and dorsal raphe nucleus, respectively, projected to the TS. The DMS received inputs from a separate group of dopamine neurons in the medial part of the SNc. In addition, learning-related regions of the limbic system innervated the TS; the temporal areas and the basolateral amygdala selectively innervated the TS, but not the DMS. Our data showed that both sensory and value-processing structures innervated the TS, suggesting its plausible role in value-guided sensory-motor association for habitual behavior.

  18. Parametric embedding for class visualization.

    Science.gov (United States)

    Iwata, Tomoharu; Saito, Kazumi; Ueda, Naonori; Stromsten, Sean; Griffiths, Thomas L; Tenenbaum, Joshua B

    2007-09-01

    We propose a new method, parametric embedding (PE), that embeds objects with the class structure into a low-dimensional visualization space. PE takes as input a set of class conditional probabilities for given data points and tries to preserve the structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semisupervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of Web pages, semisupervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, latent Dirichlet allocation.
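
    The objective the abstract states — minimize a sum of Kullback-Leibler divergences under an equal-covariance Gaussian-mixture assumption in the embedding space — can be sketched with plain gradient descent. This is a toy reconstruction from the stated objective, not the authors' optimization procedure:

```python
import numpy as np

def parametric_embedding(P, dim=2, lr=0.1, iters=500, seed=0):
    """Embed points z_n and class centers m_c by minimizing
    sum_n KL(p(.|x_n) || q(.|z_n)), where q(c|z) is proportional to
    exp(-0.5 * ||z - m_c||^2)  (Gaussian mixture, equal spherical covariances).

    P: (n_points, n_classes) class-conditional probabilities p(c|x_n).
    """
    rng = np.random.default_rng(seed)
    n, k = P.shape
    Z = 0.01 * rng.standard_normal((n, dim))   # point embeddings
    M = 0.01 * rng.standard_normal((k, dim))   # class-center embeddings
    for _ in range(iters):
        d2 = ((Z[:, None, :] - M[None, :, :]) ** 2).sum(-1)   # (n, k)
        logq = -0.5 * d2
        logq -= logq.max(axis=1, keepdims=True)               # numerical safety
        Q = np.exp(logq)
        Q /= Q.sum(axis=1, keepdims=True)                     # q(c|z_n)
        W = P - Q                                             # (n, k)
        diff = Z[:, None, :] - M[None, :, :]                  # (n, k, dim)
        Z -= lr * (W[:, :, None] * diff).sum(axis=1)          # grad wrt z_n
        M += lr * (W[:, :, None] * diff).sum(axis=0)          # grad wrt m_c (opposite sign)
    return Z, M

# Toy input: 30 points, 3 classes, soft class-conditional probabilities.
P = np.full((30, 3), 0.1)
P[np.arange(30), np.arange(30) % 3] = 0.8
Z, M = parametric_embedding(P)
nearest = ((Z[:, None] - M[None]) ** 2).sum(-1).argmin(axis=1)
```

    After optimization, each point sits nearest to the center of its dominant class, so the 2D layout reflects the classifier's posterior structure rather than raw feature distances.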

  19. Reinforcement learning on slow features of high-dimensional input streams.

    Directory of Open Access Journals (Sweden)

    Robert Legenstein

    Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can directly be applied to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time that is comparable to the time needed when an explicit highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
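
    The first stage of the proposed system, slow feature analysis, has a simple linear form: find unit-variance projections whose temporal derivative has minimal variance. A minimal linear sketch follows (the paper uses a hierarchical nonlinear SFA network, which this does not reproduce):

```python
import numpy as np

def linear_sfa(X, n_components=1):
    """Projection extracting the slowest-varying linear features of X
    (n_samples, n_dims): solve the generalized eigenproblem A w = lam B w,
    with A the covariance of temporal differences and B the signal
    covariance; the smallest eigenvalues correspond to the slowest features."""
    Xc = X - X.mean(axis=0)
    dX = np.diff(Xc, axis=0)
    A = dX.T @ dX / len(dX)
    B = Xc.T @ Xc / len(Xc)
    evals, U = np.linalg.eigh(B)
    S = U / np.sqrt(evals)                 # whitening: S.T @ B @ S = I
    _, V = np.linalg.eigh(S.T @ A @ S)     # eigh returns ascending eigenvalues
    return S @ V[:, :n_components]

# Demo: a slow sinusoid and fast noise, linearly mixed into two channels.
rng = np.random.default_rng(1)
t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 500.0)
fast = rng.standard_normal(t.size)
X = np.column_stack([slow, fast]) @ np.array([[1.0, 0.5], [0.7, -0.3]]).T
recovered = abs(np.corrcoef((X @ linear_sfa(X))[:, 0], slow)[0, 1])
```

    The slow source is recovered up to sign and scale, which is exactly the kind of compact state representation the second (reward-trained) stage then learns on.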

  20. Using NJOY to Create MCNP ACE Files and Visualize Nuclear Data

    Energy Technology Data Exchange (ETDEWEB)

    Kahler, Albert Comstock [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-14

    We provide lecture materials that describe the input requirements to create various MCNP ACE files (Fast, Thermal, Dosimetry, Photo-nuclear and Photo-atomic) with the NJOY Nuclear Data Processing code system. Input instructions to visualize nuclear data with NJOY are also provided.

  1. Improving Design Communication: Advanced Visualization

    National Research Council Canada - National Science Library

    Adeoye, Blessing

    2001-01-01

    .... While design professionals may use similar visual modes (lines, text, graphic symbols, etc.) to represent and communicate concepts in complex drawing tasks, similar visual modes may be used ambiguously across disciplines...

  2. Evidence for optimal integration of visual feature representations across saccades

    NARCIS (Netherlands)

    Oostwoud Wijdenes, L.; Marshall, L.; Bays, P.M.

    2015-01-01

    We explore the visual world through saccadic eye movements, but saccades also present a challenge to visual processing by shifting externally stable objects from one retinal location to another. The brain could solve this problem in two ways: by overwriting preceding input and starting afresh with

  3. An efficient visual saliency detection model based on Ripplet transform

    Indian Academy of Sciences (India)

    A Diana Andrushia

    human visual attention models is still not well investigated. ... Ripplet transform; visual saliency model; Receiver Operating Characteristics (ROC); ... proposed method has the same resolution as that of an input ... regions are obtained, which are independent of their sizes. ... impact than those far away from the attention.

  4. Representing climate change futures: a critique on the use of images for visual communication

    Energy Technology Data Exchange (ETDEWEB)

    Nicholson-Cole, Sophie A. [School of Environmental Sciences, University of East Anglia, Norwich NR4 7TJ, (United Kingdom)

    2005-05-15

    How people perceive their role and the responsibilities of others in determining the outcomes of climate change is of great importance for policy-making, adaptation and climate change mitigation. However, for many people, climate change is a remote problem and not one of personal concern. Meaningful visualisations depicting climate change futures could help to bridge the gap between what may seem an abstract concept and everyday experience, making clearer its local and individual relevance. Computer aided visualisation has great potential as a means to interest and engage different groups in society. However, the way in which information is represented affects an individual's interpretation and uptake, and how they see their present choices affecting their future and that of others. The empirical content of this paper summarises the results of an exploratory qualitative study, consisting of 30 semi-structured interviews investigating people's visual conceptions and feelings about climate change. The emphasis of the inquiry is focussed on eliciting people's spontaneous visualisations of climate change and their feelings of involvement with the issue. The insights gained from the described empirical work set the scene for further research, which will employ the use of a range of images and visualisations for evaluation. (Author)

  5. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
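
    The paper's conclusion for the FIR case can be checked in a few lines: with an impulse applied at the start of the observation interval, a noiseless FIR channel's output is exactly its coefficient vector, so identification is immediate. The channel coefficients below are illustrative:

```python
import numpy as np

h_true = np.array([0.9, -0.4, 0.2, 0.05])   # unknown FIR channel (illustrative values)
N = 16                                      # length of the observation interval
u = np.zeros(N)
u[0] = 1.0                                  # impulse input at the start
y = np.convolve(u, h_true)[:N]              # noiseless channel output y[t] = sum_k h[k] u[t-k]
h_est = y[:h_true.size]                     # the output *is* the impulse response
```

    With noise, one would average repeated impulse responses or solve a least-squares problem, but the impulse still excites every tap within the shortest possible window.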

  6. The visual attention network untangled

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Donner, T.H.

    2011-01-01

    Goals are represented in prefrontal cortex and modulate sensory processing in visual cortex. A new study combines TMS, fMRI and EEG to understand how feedback improves retention of behaviorally relevant visual information.

  7. Preparation and documentation of a CATHENA input file for Darlington NGS

    International Nuclear Information System (INIS)

    1989-03-01

    A CATHENA input model has been developed and documented for the heat transport system of the Darlington Nuclear Generating Station. CATHENA, an advanced two-fluid thermalhydraulic computer code, has been designed for analysis of postulated loss-of-coolant accidents (LOCA) and upset conditions in the CANDU system. This report describes the Darlington input model (or idealization), and gives representative results for a simulation of a small break at an inlet header.

  8. Modification of visual function by early visual experience.

    Science.gov (United States)

    Blakemore, C

    1976-07-01

    Physiological experiments, involving recording from the visual cortex in young kittens and monkeys, have given new insight into human developmental disorders. In the visual cortex of normal cats and monkeys most neurones are selectively sensitive to the orientation of moving edges and they receive very similar signals from both eyes. Even in very young kittens without visual experience, most neurones are binocularly driven and a small proportion of them are genuinely orientation selective. There is no passive maturation of the system in the absence of visual experience, but even very brief exposure to patterned images produces rapid emergence of the adult organization. These results are compared to observations on humans who have "recovered" from early blindness. Covering one eye in a kitten or a monkey, during a sensitive period early in life, produces a virtually complete loss of input from that eye in the cortex. These results can be correlated with the production of "stimulus deprivation amblyopia" in infants who have had one eye patched. Induction of a strabismus causes a loss of binocularity in the visual cortex, and in humans it leads to a loss of stereoscopic vision and binocular fusion. Exposing kittens to lines of one orientation modifies the preferred orientations of cortical cells and there is an analogous "meridional amblyopia" in astigmatic humans. The existence of a sensitive period in human vision is discussed, as well as the possibility of designing remedial and preventive treatments for human developmental disorders.

  9. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    Science.gov (United States)

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

    The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second

  10. The impact of images in a retelling task: Design of an illustrated children's story based on Visual Grammar

    Directory of Open Access Journals (Sweden)

    Carola Alvarado

    2016-06-01

    A common way to measure and stimulate child narrative development is through the retelling of a story (Owen, 2006). In Chile, retelling tasks are designed by considering the structure of the verbal input, at both the lexical-syntactic and textual levels, and by setting levels of complexity among them (Pavez & Coloma, 2005). The research project FONDECYT 1130420 produced a storybook, as the input for a retelling task, emphasizing not only the verbal but also the visual narrative. We therefore set a double objective: to design the narrative images of a story, enhancing its meaning through a visual grammar, and to observe how these elements were reflected in the subjects' oral retellings. For this purpose, along with the Story Grammar (Glenn & Stein, 1979) for the textual structure analysis, we used the Grammar of Visual Design (Kress & Van Leeuwen, 1996) as the analytical tool for preparing the narrative images. The story was piloted in a sample of 20 kindergarten children from a semiprivate school in the Region of Valparaiso. Their oral narratives were videotaped, transcribed and analyzed. All of the narrative images were mentioned in more than 60% of the retold versions. In addition, three images were referred to in 90% of the children's narratives; these represented key moments of the story or complexities of the narrative input such as simultaneous actions. These results highlight the importance of the visual elements of a story for the task of child retelling.

  11. A normalization model suggests that attention changes the weighting of inputs between visual areas.

    Science.gov (United States)

    Ruff, Douglas A; Cohen, Marlene R

    2017-05-16

    Models of divisive normalization can explain the trial-averaged responses of neurons in sensory, association, and motor areas under a wide range of conditions, including how visual attention changes the gains of neurons in visual cortex. Attention, like other modulatory processes, is also associated with changes in the extent to which pairs of neurons share trial-to-trial variability. We showed recently that in addition to decreasing correlations between similarly tuned neurons within the same visual area, attention increases correlations between neurons in primary visual cortex (V1) and the middle temporal area (MT) and that an extension of a classic normalization model can account for this correlation increase. One of the benefits of having a descriptive model that can account for many physiological observations is that it can be used to probe the mechanisms underlying processes such as attention. Here, we use electrical microstimulation in V1 paired with recording in MT to provide causal evidence that the relationship between V1 and MT activity is nonlinear and is well described by divisive normalization. We then use the normalization model and recording and microstimulation experiments to show that the attention dependence of V1-MT correlations is better explained by a mechanism in which attention changes the weights of connections between V1 and MT than by a mechanism that modulates responses in either area. Our study shows that normalization can explain interactions between neurons in different areas and provides a framework for using multiarea recording and stimulation to probe the neural mechanisms underlying neuronal computations.
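    The abstract does not spell out the extended normalization model, but the core idea can be sketched numerically: an MT neuron's response is a weighted sum of V1 inputs divided by a normalization pool, and attention scales the V1-to-MT weights rather than the responses in either area. All rates, weights, and the attention gain below are hypothetical illustration values, not the study's fitted parameters.

```python
import numpy as np

def mt_response(v1_rates, weights, attention_gain=1.0, sigma=1.0):
    """Toy divisive-normalization model of an MT neuron pooling V1 inputs.

    v1_rates: firing rates of the V1 input population
    weights:  connection weights from each V1 neuron to the MT neuron
    attention_gain scales the weights, mimicking a mechanism in which
    attention changes the weighting of inputs between areas.
    """
    drive = attention_gain * np.dot(weights, v1_rates)  # weighted excitatory drive
    norm = sigma + np.sum(v1_rates)                     # divisive normalization pool
    return drive / norm

rates = np.array([10.0, 20.0, 5.0])
w = np.array([0.5, 0.3, 0.2])
print(mt_response(rates, w))                        # unattended response
print(mt_response(rates, w, attention_gain=1.5))    # attended: weights scaled up
```

    Because the gain multiplies the weights inside the same normalization, the attended/unattended response ratio here is exactly the gain factor; distinguishing this from a response-gain change is what the paired microstimulation and recording experiments are for.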

  12. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    Science.gov (United States)

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° ( P stimulation. Variability and optokinetic bias were correlated ( R 2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic
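    The weighted multisensory integration the abstract invokes is standard inverse-variance (maximum-likelihood) cue fusion: each cue is weighted by its reliability, so a noisier otolith signal at large roll angles is down-weighted relative to the visual cue. The tilt estimates and variances below are made-up numbers for illustration, not the study's data; note that, as the abstract observes, the Bayesian-combined variance is always lower than that of each source cue.

```python
def combine_cues(estimates, variances):
    """Inverse-variance (reliability-weighted) fusion of sensory cues.

    Each cue contributes in proportion to its reliability (1/variance);
    returns the fused estimate and its (reduced) variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(wt * e for wt, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

# Hypothetical verticality estimates (deg): vestibular says 0 deg with high
# variance (large roll tilt); the optokinetic stimulus biases vision to 10 deg.
fused, fused_var = combine_cues([0.0, 10.0], [4.0, 1.0])
print(fused, fused_var)
```

    With these numbers the fused estimate is pulled strongly toward the visual cue, which is the qualitative pattern reported: visual input is weighted more when vestibular input becomes less reliable.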

  13. Assessment of visual disability using the WHO disability assessment scale (WHO-DAS-II): role of gender.

    Science.gov (United States)

    Badr, H E; Mourad, H

    2009-10-01

    To study the role of gender in coping with disability among young visually impaired students attending two schools for the blind. The 36-item, interviewer-administered Arabic translation of the WHO Disability Assessment Schedule (WHODAS II) was used. It evaluates six domains of everyday living over the preceding 30 days: understanding and communicating, getting around, self-care, getting along with people, household activities, and participation in society. Face-to-face interviews were conducted with 200 students, who represented the target population of the study. Binary logistic regression analysis of the scores of the six domains revealed that in all domains except getting along with people and coping with school activities, females faced significantly more difficulties in coping with daily life activities than did their male counterparts. Increasing age significantly increased difficulties in coping with school activities. Genetic causes of blindness were associated with increased difficulties. Females face more difficulties in coping with visual disability. Genetic counselling is needed to decrease the prevalence of visual disability. Girls with blindness need additional inputs to help them cope with blindness. Early intervention facilitates dealing with the school activities of the visually impaired.

  14. Toward a visual cognitive system using active top-down saccadic control

    NARCIS (Netherlands)

    LaCroix, J.; Postma, E.; van den Herik, J.; Murre, J.

    2008-01-01

    The saccadic selection of relevant visual input for preferential processing allows the efficient use of computational resources. Based on saccadic active human vision, we aim to develop a plausible saccade-based visual cognitive system for a humanoid robot. This paper presents two initial steps

  15. Designing a view for visually representing information coherence in a document set

    CSIR Research Space (South Africa)

    Engelbrecht, L

    2015-11-01

    During the development of the National Indigenous Knowledge Management System (NIKMAS), a system to capture and preserve indigenous knowledge, it was discovered that there was sometimes coherence between the descriptions of the knowledge. The paper proposes that a visual representation...

  16. Standing postural reaction to visual and proprioceptive stimulation in chronic acquired demyelinating polyneuropathy

    Directory of Open Access Journals (Sweden)

    Clement P. Provost

    2018-01-01

    Objective: To investigate the weight of visual and proprioceptive inputs, measured indirectly through standing position control, in patients with chronic acquired demyelinating polyneuropathy (CADP). Design: Prospective case study. Subjects: Twenty-five patients with CADP and 25 healthy controls. Methods: Posture was recorded on a double force platform. Stimulations were optokinetic (60°/s) for visual input and vibration (50 Hz) for proprioceptive input. Visual stimulation involved 4 tests (upward, downward, rightward and leftward) and proprioceptive stimulation 2 tests (triceps surae and tibialis anterior). A composite score, previously published and slightly modified, was used for the postural signals recorded under the different stimulations. Results: Despite their sensitivity deficits, patients with CADP were more sensitive to proprioceptive stimuli than were healthy controls (mean composite score 13.9 (standard deviation (SD) 4.8) vs 18.4 (SD 4.8); p = 0.002). As expected, they were also more sensitive to visual stimuli (mean composite score 10.5 (SD 8.7) vs 22.9 (SD 7.5); p < 0.0001). Conclusion: These results encourage balance rehabilitation of patients with CADP aimed at promoting the use of proprioceptive information, thereby limiting the premature development of visual compensation while proprioception is still available.

  17. Lymphoma diagnosis in histopathology using a multi-stage visual learning approach

    Science.gov (United States)

    Codella, Noel; Moradi, Mehdi; Matasar, Matt; Syeda-Mahmood, Tanveer; Smith, John R.

    2016-03-01

    This work evaluates the performance of a multi-stage image enhancement, segmentation, and classification approach for lymphoma recognition in hematoxylin and eosin (H and E) stained histopathology slides of excised human lymph node tissue. In the first stage, the original histology slide undergoes various image enhancement and segmentation operations, creating an additional 5 images for every slide. These new images emphasize unique aspects of the original slide, including dominant staining, staining segmentations, non-cellular groupings, and cellular groupings. For the resulting 6 total images, a collection of visual features are extracted from 3 different spatial configurations. Visual features include the first fully connected layer (4096 dimensions) of the Caffe convolutional neural network trained from ImageNet data. In total, over 200 resultant visual descriptors are extracted for each slide. Non-linear SVMs are trained over each of the over 200 descriptors, which are then input to a forward stepwise ensemble selection that optimizes a late fusion sum of logistically normalized model outputs using local hill climbing. The approach is evaluated on a public NIH dataset containing 374 images representing 3 lymphoma conditions: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). Results demonstrate a 38.4% reduction in residual error over the current state of the art on this dataset.
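    The forward stepwise ensemble selection step can be sketched as a greedy local hill climb over per-model scores. The sketch below is a simplified binary-classification stand-in (the paper's task is three-class and its base models are SVMs with logistic normalization); the toy scores and labels are invented for illustration.

```python
import numpy as np

def stepwise_ensemble(scores, labels):
    """Greedy forward selection (with replacement) of models whose averaged
    probability outputs maximize accuracy -- a local hill climb over a
    late-fusion ensemble.

    scores: (n_models, n_samples) array of probability-like model outputs
    labels: (n_samples,) array of 0/1 ground truth
    """
    chosen, best_acc = [], 0.0
    improved = True
    while improved:
        improved = False
        for m in range(scores.shape[0]):
            trial = chosen + [m]
            fused = scores[trial].mean(axis=0)            # late-fusion average
            acc = float(np.mean((fused > 0.5) == labels))
            if acc > best_acc:                            # climb only uphill
                best_acc, best_m, improved = acc, m, True
        if improved:
            chosen.append(best_m)
    return chosen, best_acc

scores = np.array([[0.9, 0.2, 0.8, 0.1],    # a strong model
                   [0.6, 0.7, 0.4, 0.3]])   # a weaker model
labels = np.array([1, 0, 1, 0])
print(stepwise_ensemble(scores, labels))
```

    Selection with replacement lets a strong model receive extra weight in the fused sum; the climb stops at the first plateau, which is what makes it a local rather than exhaustive search.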

  18. Allocentric but not egocentric visual memory difficulties in adults with ADHD may represent cognitive inefficiency.

    Science.gov (United States)

    Brown, Franklin C; Roth, Robert M; Katz, Lynda J

    2015-08-30

    Attention Deficit Hyperactivity Disorder (ADHD) has often been conceptualized as arising from executive dysfunctions (e.g., inattention, defective inhibition). However, recent studies based on reaction-time and processing-speed abnormalities suggest that cognitive inefficiency may underlie many ADHD symptoms. This study explored whether a non-timed measure of cognitive inefficiency would also be abnormal. A sample of 23 ADHD subjects was compared with 23 controls on a test that included both egocentric and allocentric visual memory subtests. A factor analysis was used to determine which cognitive variables contributed to allocentric visual memory. The ADHD sample performed significantly lower on the allocentric but not the egocentric conditions. Allocentric visual memory was not associated with timed, working memory, visual perception, or mental rotation variables. The paper concludes that these results support a cognitive-inefficiency explanation for some ADHD symptoms and discusses future research directions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. How cortical neurons help us see: visual recognition in the human brain

    OpenAIRE

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex.

  20. Real-Time Facial Segmentation and Performance Capture from RGB Input

    OpenAIRE

    Saito, Shunsuke; Li, Tianye; Li, Hao

    2016-01-01

    We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking ...

  1. How cortical neurons help us see: visual recognition in the human brain

    Science.gov (United States)

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  2. On the Influence of Input Data Quality to Flood Damage Estimation: The Performance of the INSYDE Model

    Directory of Open Access Journals (Sweden)

    Daniela Molinari

    2017-09-01

    The IN-depth SYnthetic Model for Flood Damage Estimation (INSYDE) is a model for the estimation of flood damage to residential buildings at the micro-scale. This study investigates the sensitivity of INSYDE to the accuracy of input data. Starting from the knowledge of input parameters at the scale of individual buildings for a case study, the level of detail of the input data is progressively downgraded until a representative value is defined for all inputs at the census block scale. The analysis reveals that two conditions are required to limit the errors in damage estimation: the representativeness of the representative values with respect to the micro-scale values, and local knowledge of the footprint area of the buildings, the latter being the main extensive variable adopted by INSYDE. This result allows the usability of the model to be extended to the meso-scale, including in different countries, depending on the availability of aggregated building data.

  3. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
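    The reweighting account of perceptual learning can be caricatured with a delta rule that leaves the sensory representation fixed and adjusts only the readout weights from sensory channels to the decision. The channel responses, target, and learning rate below are invented for illustration and are not any specific published model's parameters.

```python
def reweight_step(weights, channels, target, lr=0.05):
    """One learning step under a reweighting model of perceptual learning:
    the channel responses are fixed, and only the readout weights to the
    decision variable move toward the target (a schematic delta rule).
    """
    decision = sum(wt * c for wt, c in zip(weights, channels))
    error = target - decision
    return [wt + lr * error * c for wt, c in zip(weights, channels)]

# Repeated exposure to a stimulus carried mostly by channel 0:
w = [0.0, 0.0]
for _ in range(100):
    w = reweight_step(w, [1.0, 0.1], target=1.0)
print(w)  # channel 0 ends up weighted far more strongly than channel 1
```

    Because learning lives entirely in the readout, improvements transfer only insofar as a new task reads from the same channels, which is one way this framework accounts for patterns of learning and transfer.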

  4. Sensitivity to the visual field origin of natural image patches in human low-level visual cortex

    Directory of Open Access Journals (Sweden)

    Damien J. Mannion

    2015-06-01

    Asymmetries in the response to visual patterns in the upper and lower visual fields (above and below the centre of gaze) have been associated with ecological factors relating to the structure of typical visual environments. Here, we investigated whether the content of the upper and lower visual field representations in low-level regions of human visual cortex is specialised for the visual patterns that arise from the upper and lower visual fields in natural images. We presented image patches, drawn from above or below the centre of gaze of an observer navigating a natural environment, to either the upper or lower visual fields of human participants (n = 7) while we used functional magnetic resonance imaging (fMRI) to measure the magnitude of evoked activity in visual areas V1, V2, and V3. We found a significant interaction between the presentation location (upper or lower visual field) and the image patch source location (above or below fixation): the responses to lower visual field presentation were significantly greater for image patches sourced from below than above fixation, while the responses in the upper visual field were not significantly different for image patches sourced from above and below fixation. This finding demonstrates an association between the representation of the lower visual field in human visual cortex and the structure of the visual input that is likely to be encountered below the centre of gaze.

  5. Central Cross-Talk in Task Switching : Evidence from Manipulating Input-Output Modality Compatibility

    Science.gov (United States)

    Stephan, Denise Nadine; Koch, Iring

    2010-01-01

    Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory-vocal vs. visual-manual) and…

  6. Characterization of Visual Scanning Patterns in Air Traffic Control.

    Science.gov (United States)

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.
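    The idea of filtering complex scanpaths into simpler, more manageable forms can be illustrated with a minimal simplification: collapsing runs of consecutive fixations on the same area of interest (AOI) so only transitions remain. The AOI labels below are hypothetical, and the paper's actual filtering concepts and intensities are considerably more elaborate.

```python
def simplify_scanpath(fixations):
    """Collapse runs of consecutive fixations on the same AOI, so a scanpath
    like A A A B B A becomes A B A -- keeping only the transition structure."""
    simplified = []
    for aoi in fixations:
        if not simplified or simplified[-1] != aoi:
            simplified.append(aoi)
    return simplified

# Hypothetical fixation sequence over aircraft AOIs:
print(simplify_scanpath(["AC1", "AC1", "AC2", "AC2", "AC1", "AC3"]))
```

    Reductions like this are what make it feasible to map recorded scanpaths onto the simple linguistic descriptions controllers give of their own scanning strategies.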

  7. VIP : A Visual Editor and Compiler for v-Promela

    OpenAIRE

    Kamel, Moataz; Leue, Stefan

    2001-01-01

    We describe the Visual Interface to Promela (VIP) tool that we have recently implemented. VIP supports the visual editing and maintenance of v-Promela models. v-Promela is a visual, object-oriented extension to Promela, the input language to the Spin model checker. We introduce the v-Promela notation as supported by the VIP editor, discuss Promela code generation, and describe the process of property validation for the resulting models. Our discussion centers around two case studies, a call p...

  8. Learning Structure of Sensory Inputs with Synaptic Plasticity Leads to Interference

    Directory of Open Access Journals (Sweden)

    Joseph eChrol-Cannon

    2015-08-01

    Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance in classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs, in contrast to the complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve performance on the sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience.
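    A pair-based STDP rule of the kind analyzed here can be sketched as follows; the amplitudes and time constant are generic textbook values, not the paper's parameters. Alternating the order of pre- and postsynaptic spikes shows the interference mechanism in miniature: the potentiation produced by one input pattern is undone by the depression produced by the next.

```python
import math

def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0, w_max=1.0):
    """Pair-based STDP weight update. dt = t_post - t_pre in ms:
    pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses, with exponential dependence on the spike-time difference.
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(w_max, max(0.0, w))   # clip to [0, w_max]

w = 0.5
w_after_a = stdp_update(w, dt=5.0)            # input pattern A: potentiation
w_after_b = stdp_update(w_after_a, dt=-5.0)   # pattern B reverses the change
print(w_after_a, w_after_b)
```

    After the second pattern the weight sits below its starting value: little trace of pattern A's input-specific change survives, which is the retention failure the paper attributes to consecutive-sample interference.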

  9. Functional MRI of the visual cortex and visual testing in patients with previous optic neuritis

    DEFF Research Database (Denmark)

    Langkilde, Annika Reynberg; Frederiksen, J.L.; Rostrup, Egill

    2002-01-01

    The volume of cortical activation as detected by functional magnetic resonance imaging (fMRI) in the visual cortex has previously been shown to be reduced following optic neuritis (ON). In order to understand the cause of this change, we studied the cortical activation following ON, both the size of the activated area and the signal change, and compared the results with the results of neuroophthalmological testing. We studied nine patients with previous acute ON, with 10 healthy persons serving as controls, using fMRI with visual stimulation. In addition to a reduced activated volume, patients showed ... to both the results of the contrast sensitivity test and to the Snellen visual acuity. Our results indicate that fMRI is a useful method for the study of ON, even in cases where the visual acuity is severely impaired. The reduction in activated volume could be explained as a reduced neuronal input ...

  10. Using Technology to Support Visual Learning Strategies

    Science.gov (United States)

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  11. Computer systems and methods for visualizing data

    Science.gov (United States)

    Stolte, Chris; Hanrahan, Patrick

    2013-01-29

    A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
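    The query step described by this record (retrieving the measure aggregated at the dimension-hierarchy levels named in the specification) can be sketched in plain Python. The year/quarter hierarchy and sales measure below are invented examples, not from the patent.

```python
from collections import defaultdict

def query(dataset, levels, measure):
    """Aggregate a measure over the given levels of a dimension hierarchy.

    dataset: list of row dicts; levels: ordered hierarchy level names
    (e.g. ["year"] or ["year", "quarter"]); measure: numeric field name.
    Returns {level-value tuple: summed measure}, ready to populate one
    component of the visual plot per level.
    """
    totals = defaultdict(float)
    for row in dataset:
        key = tuple(row[level] for level in levels)
        totals[key] += row[measure]
    return dict(totals)

sales = [
    {"year": 2012, "quarter": "Q1", "sales": 100.0},
    {"year": 2012, "quarter": "Q2", "sales": 150.0},
    {"year": 2013, "quarter": "Q1", "sales": 120.0},
]
print(query(sales, ["year"], "sales"))             # first (coarser) level
print(query(sales, ["year", "quarter"], "sales"))  # second (finer) level
```

    Each call corresponds to one component of the plot: the coarser grouping drives the first component, the finer grouping the second.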

  12. Designing the visualization of information

    CSIR Research Space (South Africa)

    Engelbrecht, L

    2015-04-01

    The construction of an artifact to visually represent information is usually required by Information Visualization research projects. The end product of design science research is also an artifact and therefore it can be argued that design science...

  13. Input-profile-based software failure probability quantification for safety signal generation systems

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Lim, Ho Gon; Lee, Ho Jung; Kim, Man Cheol; Jang, Seung Cheol

    2009-01-01

    Approaches to software failure probability estimation are mainly based on the results of testing. Test cases represent the inputs that are encountered in actual use. The test inputs for a safety-critical application such as a reactor protection system (RPS) of a nuclear power plant are the inputs which cause the activation of a protective action such as a reactor trip. A digital system treats inputs from instrumentation sensors as discrete digital values by using an analog-to-digital converter. The input profile must be determined in consideration of these characteristics for effective software failure probability quantification. Another important characteristic of software testing is that we do not have to repeat the test for the same input value, since the software response is deterministic for each specific digital input. With these considerations, we propose an effective software testing method for quantifying the failure probability. As an example application, the input profile of the digital RPS is developed based on typical plant data. The proposed method is expected to provide a simple but realistic means of quantifying the software failure probability based on the input profile and system dynamics.
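    Because the software response is deterministic for each discrete digital input, each input needs to be tested only once, and the failure probability reduces to the profile-weighted fraction of failing inputs. The sketch below uses a made-up four-value input profile and pass/fail oracle, not the RPS data from the report.

```python
def software_failure_probability(input_profile, passes):
    """Sum the occurrence probabilities of the discrete inputs whose single
    (deterministic) test run failed.

    input_profile: maps each discrete digitized input value to its
    probability of occurring in operation (probabilities sum to 1).
    passes: oracle returning True if the software handles that input.
    """
    return sum(p for inp, p in input_profile.items() if not passes(inp))

# Hypothetical profile over discretized sensor readings 0..3:
profile = {0: 0.70, 1: 0.20, 2: 0.08, 3: 0.02}
passes = lambda reading: reading != 3   # suppose the software mishandles reading 3
print(software_failure_probability(profile, passes))
```

    The profile matters as much as the test outcomes: a failure on a rare input contributes little to the estimate, which is why the report builds the profile from typical plant data before quantifying.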

  14. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  15. Static and dynamic posture control in postlingual cochlear implanted patients: effects of dual-tasking, visual and auditory inputs suppression.

    Science.gov (United States)

    Bernard-Demanze, Laurence; Léonard, Jacques; Dumitrescu, Michel; Meller, Renaud; Magnan, Jacques; Lacour, Michel

    2013-01-01

    Posture control is based on central integration of multisensory inputs, and on an internal representation of body orientation in space. This multisensory feedback regulates posture control and continuously updates the internal model of the body's position, which in turn forwards motor commands adapted to the environmental context and constraints. The peripheral localization of the vestibular system, close to the cochlea, makes vestibular damage possible following cochlear implant (CI) surgery. Impaired vestibular function in CI patients, if any, may have a strong impact on posture stability. The simple postural task of quiet standing is generally paired with cognitive activity in most daily life conditions, leading therefore to competition for attentional resources in dual-tasking, and an increased risk of falls, particularly in patients with impaired vestibular function. This study was aimed at evaluating the effects of postlingual cochlear implantation on posture control in adult deaf patients. Possible impairment of vestibular function was assessed by comparing the postural performance of patients to that of age-matched healthy subjects during a simple postural task performed in static (stable platform) and dynamic (platform in translation) conditions, and during dual-tasking with a visual or auditory memory task. Postural tests were done in eyes open (EO) and eyes closed (EC) conditions, with the CI activated (ON) or not (OFF). Results showed that the postural performance of the CI patients strongly differed from the controls, mainly in the EC condition. The CI patients showed significantly reduced limits of stability and increased postural instability in static conditions. In dynamic conditions, they spent considerably more energy to maintain equilibrium, and their head was stabilized neither in space nor on the trunk: they behaved dynamically without vision like an inverted pendulum, while the controls showed a whole-body rigidification strategy. Hearing (prosthesis on) as well

  16. Computing all hybridization networks for multiple binary phylogenetic input trees.

    Science.gov (United States)

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species that are based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior is interspecific hybridization events recombining genes of different species. An important approach to analyze such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks, including their computation and visualization. Hybroscale is freely available(1) and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  17. Postdictive modulation of visual orientation.

    Science.gov (United States)

    Kawabe, Takahiro

    2012-01-01

    The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented with a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation was still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  18. Distinct GABAergic targets of feedforward and feedback connections between lower and higher areas of rat visual cortex.

    Science.gov (United States)

    Gonchar, Yuri; Burkhalter, Andreas

    2003-11-26

    Processing of visual information is performed in different cortical areas that are interconnected by feedforward (FF) and feedback (FB) pathways. Although FF and FB inputs are excitatory, their influences on pyramidal neurons also depend on the outputs of GABAergic neurons, which receive FF and FB inputs. Rat visual cortex contains at least three different families of GABAergic neurons that express parvalbumin (PV), calretinin (CR), and somatostatin (SOM) (Gonchar and Burkhalter, 1997). To examine whether pathway-specific inhibition (Shao and Burkhalter, 1996) is attributable to distinct connections with GABAergic neurons, we traced FF and FB inputs to PV, CR, and SOM neurons in layers 1-2/3 of area 17 and the secondary lateromedial area in rat visual cortex. We found that in layer 2/3 maximally 2% of FF and FB inputs go to CR and SOM neurons. This contrasts with 12-13% of FF and FB inputs onto layer 2/3 PV neurons. Unlike inputs to layer 2/3, connections to layer 1, which contains CR but lacks SOM and PV somata, are pathway-specific: 21% of FB inputs go to CR neurons, whereas FF inputs to layer 1 and its CR neurons are absent. These findings suggest that FF and FB influences on layer 2/3 pyramidal neurons mainly involve disynaptic connections via PV neurons that control the spike outputs to axons and proximal dendrites. Unlike FF input, FB input in addition makes a disynaptic link via CR neurons, which may influence the excitability of distal pyramidal cell dendrites in layer 1.

  19. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    Science.gov (United States)

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We

  20. The Effect of Visual Variability on the Learning of Academic Concepts.

    Science.gov (United States)

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  1. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ‘ba’, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ‘ba’ stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ‘ba’ in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling

  2. Sensitivity analysis of complex models: Coping with dynamic and static inputs

    International Nuclear Information System (INIS)

    Anstett-Collin, F.; Goffart, J.; Mara, T.; Denis-Vidal, L.

    2015-01-01

    In this paper, we address the issue of conducting a sensitivity analysis of complex models with both static and dynamic uncertain inputs. While several approaches have been proposed to compute the sensitivity indices of the static inputs (i.e. parameters), that of the dynamic inputs (i.e. stochastic fields) has rarely been addressed. For this purpose, we first treat each dynamic input as a Gaussian process. Then, the truncated Karhunen–Loève expansion of each dynamic input is performed. Such an expansion allows one to generate independent Gaussian processes from a finite number of independent random variables. Given that a dynamic input is represented by a finite number of random variables, its variance-based sensitivity index is defined as the sensitivity index of this group of variables. Besides, an efficient sampling-based strategy is described to estimate the first-order indices of all the input factors using only two input samples. The approach is applied to a building energy model in order to assess the impact of the uncertainties of the material properties (static inputs) and the weather data (dynamic inputs) on the energy performance of a real low-energy-consumption house. - Highlights: • Sensitivity analysis of models with uncertain static and dynamic inputs is performed. • Karhunen–Loève (KL) decomposition of the spatio-temporal inputs is performed. • The influence of the dynamic inputs is studied through the modes of the KL expansion. • The proposed approach is applied to a building energy model. • The impact of weather data and material properties on the performance of a real house is given
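
    The truncated Karhunen–Loève idea above can be sketched numerically: discretize the covariance of the process on a grid, keep the leading eigenpairs, and drive them with a small number of independent standard normal variables. The sketch below is only an illustration under assumed choices (a squared-exponential covariance and made-up names `kl_basis`, `sample_process`); the paper's actual inputs and kernel will differ.

```python
import numpy as np

# Illustrative sketch (not the paper's code): truncated Karhunen-Loeve
# expansion of a zero-mean Gaussian process with an assumed
# squared-exponential covariance, discretized on a time grid.

def kl_basis(t, length_scale=0.2, n_modes=5):
    """Leading eigenpairs of the discretized covariance matrix."""
    C = np.exp(-0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2)
    vals, vecs = np.linalg.eigh(C)            # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:n_modes]    # keep the largest modes
    return vals[idx], vecs[:, idx]

def sample_process(t, rng, n_modes=5):
    """One realization of the process from n_modes independent N(0,1) draws."""
    vals, vecs = kl_basis(t, n_modes=n_modes)
    xi = rng.standard_normal(n_modes)         # the finite set of random inputs
    return vecs @ (np.sqrt(vals) * xi)

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
path = sample_process(t, rng)
```

    With each dynamic input reduced to the few variables ξ, standard variance-based indices can then be computed for that group of variables, as the abstract describes.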

  3. Cholinergic induction of input-specific late-phase LTP via localized Ca2+ release in the visual cortex.

    Science.gov (United States)

    Cho, Kwang-Hyun; Jang, Hyun-Jong; Jo, Yang-Hyeok; Singer, Wolf; Rhie, Duck-Joo

    2012-03-28

    Acetylcholine facilitates long-term potentiation (LTP) and long-term depression (LTD), substrates of learning, memory, and sensory processing, in which acetylcholine also plays a crucial role. Ca(2+) ions serve as a canonical regulator of LTP/LTD but little is known about the effect of acetylcholine on intracellular Ca(2+) dynamics. Here, we investigated dendritic Ca(2+) dynamics evoked by synaptic stimulation and the resulting LTP/LTD in layer 2/3 pyramidal neurons of the rat visual cortex. Under muscarinic stimulation, single-shock electrical stimulation (SES) inducing ∼20 mV EPSP, applied via a glass electrode located ∼10 μm from the basal dendrite, evoked NMDA receptor-dependent fast Ca(2+) transients and the subsequent Ca(2+) release from the inositol 1,4,5-trisphosphate (IP(3))-sensitive stores. These secondary dendritic Ca(2+) transients were highly localized within 10 μm from the center (SD = 5.0 μm). The dendritic release of Ca(2+) was a prerequisite for input-specific muscarinic LTP (LTPm). Without the secondary Ca(2+) release, only muscarinic LTD (LTDm) was induced. D(-)-2-amino-5-phosphonopentanoic acid and intracellular heparin blocked LTPm as well as dendritic Ca(2+) release. A single burst consisting of 3 EPSPs with weak stimulus intensities instead of the SES also induced secondary Ca(2+) release and LTPm. LTPm and LTDm were protein synthesis-dependent. Furthermore, LTPm was confined to specific dendritic compartments and not inducible in distal apical dendrites. Thus, cholinergic activation selectively facilitated compartment-specific induction of late-phase LTP through IP(3)-dependent Ca(2+) release.

  4. Visual BOLD Response in Late Blind Subjects with Argus II Retinal Prosthesis.

    Directory of Open Access Journals (Sweden)

    E Castaldi

    2016-10-01

    Retinal prosthesis technologies require that the visual system downstream of the retinal circuitry be capable of transmitting and elaborating visual signals. We studied the capability of plastic remodeling in late blind subjects implanted with the Argus II Retinal Prosthesis with psychophysics and functional MRI (fMRI). After surgery, six out of seven retinitis pigmentosa (RP) blind subjects were able to detect high-contrast stimuli using the prosthetic implant. However, direction discrimination to contrast-modulated stimuli remained at chance level in all of them. No subject showed any improvement of contrast sensitivity in either eye when not using the Argus II. Before the implant, the Blood Oxygenation Level Dependent (BOLD) activity in V1 and the lateral geniculate nucleus (LGN) was very weak or absent. Surprisingly, after prolonged use of Argus II, BOLD responses to visual input were enhanced. This is, to our knowledge, the first study tracking the neural changes of visual areas in patients after retinal implant, revealing a capacity to respond to restored visual input even after years of deprivation.

  5. Visual memory errors in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    Visual hallucinations seem to be more prevalent in low light, and hallucinators tend to be more prone to false-positive errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants, and if so, whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task in which color and black-and-white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black-and-white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, but those who present with visual hallucinations also appear to be more heavily reliant on bottom-up sensory input and impaired in spatial ability.

  6. Visual training paired with electrical stimulation of the basal forebrain improves orientation-selective visual acuity in the rat.

    Science.gov (United States)

    Kang, Jun Il; Groleau, Marianne; Dotigny, Florence; Giguère, Hugo; Vaucher, Elvire

    2014-07-01

    The cholinergic afferents from the basal forebrain to the primary visual cortex play a key role in visual attention and cortical plasticity. These afferent fibers modulate acute and long-term responses of visual neurons to specific stimuli. The present study evaluates whether this cholinergic modulation of visual neurons results in changes in cortical activity and visual perception. Awake adult rats were exposed repeatedly for 2 weeks to an orientation-specific grating, with or without coupling this visual stimulation to electrical stimulation of the basal forebrain. The visual acuity, as measured using a visual water maze before and after the exposure to the orientation-specific grating, was increased in the group of trained rats with simultaneous basal forebrain/visual stimulation. The increase in visual acuity was not observed when visual training or basal forebrain stimulation was performed separately, or when cholinergic fibers were selectively lesioned prior to the visual stimulation. The visual evoked potentials showed a long-lasting increase in cortical reactivity of the primary visual cortex after coupled visual/cholinergic stimulation, as well as c-Fos immunoreactivity in both pyramidal neurons and GABAergic interneurons. These findings demonstrate that, when coupled with visual training, the cholinergic system improves visual performance for the trained orientation, probably through enhancement of attentional processes and cortical plasticity in V1 related to the ratio of excitatory/inhibitory inputs. This study opens the possibility of establishing efficient rehabilitation strategies for facilitating visual capacity.

  7. Modality of Input and Vocabulary Acquisition

    Directory of Open Access Journals (Sweden)

    Tetyana Sydorenko

    2010-06-01

    This study examines the effect of input modality (video, audio, and captions, i.e., on-screen text in the same language as the audio) on (a) the learning of written and aural word forms, (b) overall vocabulary gains, (c) attention to input, and (d) the vocabulary learning strategies of beginning L2 learners. Twenty-six second-semester learners of Russian participated in this study. Group one (N = 8) saw video with audio and captions (VAC); group two (N = 9) saw video with audio (VA); group three (N = 9) saw video with captions (VC). All participants completed written and aural vocabulary tests and a final questionnaire. The results indicate that groups with captions (VAC and VC) scored higher on written than on aural recognition of word forms, while the reverse applied to the VA group. The VAC group learned more word meanings than the VA group. Results from the questionnaire suggest that learners paid most attention to captions, followed by video and audio, and acquired most words by associating them with visual images. The pedagogical implications of this study are that captioned video tends to aid recognition of written word forms and the learning of word meaning, while non-captioned video tends to improve listening comprehension as it facilitates recognition of aural word forms.

  8. Applying Pragmatics Principles for Interaction with Visual Analytics.

    Science.gov (United States)

    Hoque, Enamul; Setlur, Vidya; Tory, Melanie; Dykeman, Isaac

    2018-01-01

    Interactive visual data analysis is most productive when users can focus on answering the questions they have about their data, rather than focusing on how to operate the interface to the analysis tool. One viable approach to engaging users in interactive conversations with their data is a natural language interface to visualizations. These interfaces have the potential to be both more expressive and more accessible than other interaction paradigms. We explore how principles from language pragmatics can be applied to the flow of visual analytical conversations, using natural language as an input modality. We evaluate the effectiveness of pragmatics support in our system Evizeon, and present design considerations for conversation interfaces to visual analytics tools.

  9. Ground motion input in seismic evaluation studies

    International Nuclear Information System (INIS)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high-frequencies, levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to those levels noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western U.S. type motions for seismic evaluation of Eastern U.S. nuclear power plants

  10. The SSVEP-Based BCI Text Input System Using Entropy Encoding Algorithm

    Directory of Open Access Journals (Sweden)

    Yeou-Jiunn Chen

    2015-01-01

    Amyotrophic lateral sclerosis (ALS), also known as motor neuron disease (MND), is a neurodegenerative disease with various causes. It is characterized by muscle spasticity, rapidly progressive weakness due to muscle atrophy, and difficulty in speaking, swallowing, and breathing. Beyond their physical impairments, severely disabled patients commonly face communication problems. Steady-state visually evoked potential based brain-computer interfaces (BCIs), which apply visual stimuli, are well suited to serve as communication interfaces for patients with neuromuscular impairments. In this study, an entropy encoding algorithm is proposed to encode the letters of the multilevel selection interface for BCI text input systems. According to the appearance frequency of each letter, the entropy encoding algorithm constructs a variable-length tree for the letter arrangement of the multilevel selection interface. Then, Gaussian mixture models are applied to recognize the electrical activity of the brain. According to the recognition results, the multilevel selection interface guides the subject to spell and type words. The experimental results showed that the proposed approach outperforms the baseline system, which does not consider the appearance frequency of each letter. Hence, the proposed approach is able to ease text input for patients with neuromuscular impairments.
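
    A variable-length tree built from letter frequencies, as described above, is essentially a Huffman construction: frequent letters end up closer to the root and so require fewer selection steps. The sketch below is an illustrative stand-in, not the paper's implementation; the function name and toy frequencies are assumptions, and the paper's selection tree may be k-ary rather than binary.

```python
import heapq
from collections import Counter

# Illustrative Huffman construction over letter frequencies: repeatedly
# merge the two least frequent subtrees, prefixing their codes with 0/1.

def huffman_codes(freqs):
    """Map each symbol to a prefix-free bit string; frequent symbols get short codes."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

freqs = Counter("the quick brown fox jumps over the lazy dog the end")
codes = huffman_codes(freqs)
```

    In the interface setting, a short code means fewer levels of the selection hierarchy to traverse before a frequent letter can be typed.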

  11. Visual search, visual streams, and visual architectures.

    Science.gov (United States)

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  12. Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system

    KAUST Repository

    Hollt, Thomas

    2015-01-15

    We present a novel integrated visualization system that enables the interactive visual analysis of ensemble simulations and estimates of the sea surface height and other model variables that are used for storm surge prediction. Coastal inundation, caused by hurricanes and tropical storms, poses large risks for today's societies. High-fidelity numerical models of water levels driven by hurricane-force winds are required to predict these events, posing a challenging computational problem, and even though computational models continue to improve, uncertainties in storm surge forecasts are inevitable. Today, this uncertainty is often exposed to the user by running the simulation many times with different parameters or inputs following a Monte-Carlo framework in which uncertainties are represented as stochastic quantities. This results in multidimensional, multivariate and multivalued data, so-called ensemble data. While the resulting datasets are very comprehensive, they are also huge in size and thus hard to visualize and interpret. In this paper, we tackle this problem by means of an interactive and integrated visual analysis system. By harnessing the power of modern graphics processing units for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty or show the complete distribution of the simulations at user-defined positions over the complete time series of the prediction. We highlight the benefits of our system by presenting its application in a real-world scenario using a simulation of Hurricane Ike.

  13. Practice makes perfect: the neural substrates of tactile discrimination by Mah-Jong experts include the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Honda Manabu

    2006-12-01

    Background: It has yet to be determined whether visual-tactile cross-modal plasticity due to visual deprivation, particularly in the primary visual cortex (V1), is solely due to visual deprivation or if it is a result of long-term tactile training. Here we conducted an fMRI study with normally-sighted participants who had undergone long-term training on the tactile shape discrimination of the two-dimensional (2D) shapes on Mah-Jong tiles (Mah-Jong experts). Eight Mah-Jong experts and twelve healthy volunteers who were naïve to Mah-Jong performed a tactile shape matching task using Mah-Jong tiles with no visual input. Furthermore, seven of the eight experts performed a tactile shape matching task with unfamiliar 2D Braille characters. Results: When participants performed tactile discrimination of Mah-Jong tiles, the left lateral occipital cortex (LO) and V1 were activated in the well-trained subjects. In the naïve subjects, the LO was activated but V1 was not. Both the LO and V1 of the well-trained subjects were activated during the Braille tactile discrimination task. Conclusion: The activation of V1 in subjects trained in tactile discrimination may represent altered cross-modal responses as a result of long-term training.

  14. Visual computing scientific visualization and imaging systems

    CERN Document Server

    2014-01-01

    This volume aims to stimulate discussion on research involving the use of data and digital images as a means of analyzing and visualizing phenomena and experiments. The emphasis is not only on graphically representing data to enhance its visual analysis, but also on the imaging systems that contribute greatly to the comprehension of real cases. Scientific Visualization and Imaging Systems encompass multidisciplinary areas, with applications in many knowledge fields such as Engineering, Medicine, Material Science, Physics, Geology, and Geographic Information Systems, among others. This book is a selection of 13 revised and extended research papers presented at the International Conference on Advanced Computational Engineering and Experimenting (ACE-X) conferences 2010 (Paris), 2011 (Algarve), 2012 (Istanbul) and 2013 (Madrid). The examples were particularly chosen from materials research, medical applications, general concepts applied in simulations and image analysis and ot...

  15. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...
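
    One common way to realize a saliency map for a feedforward network is to take the magnitude of the output's gradient with respect to each input dimension: inputs whose small perturbations change the output most are the most salient. The sketch below assumes a tiny one-hidden-layer tanh network with random weights; it illustrates only the gradient computation, not the authors' exact formulation for the ill-posed neuroimaging case.

```python
import numpy as np

# Illustrative gradient-based saliency for a one-hidden-layer tanh network:
# saliency_i = |d output / d x_i|, obtained with one backward pass.

def forward(x, W1, b1, w2):
    h = np.tanh(W1 @ x + b1)
    return w2 @ h

def saliency(x, W1, b1, w2):
    """Magnitude of the output's derivative with respect to each input."""
    h = np.tanh(W1 @ x + b1)
    grad = W1.T @ (w2 * (1.0 - h ** 2))  # chain rule through tanh and w2
    return np.abs(grad)

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 6))  # hidden weights (4 units, 6 inputs)
b1 = rng.standard_normal(4)
w2 = rng.standard_normal(4)       # output weights
x = rng.standard_normal(6)
s = saliency(x, W1, b1, w2)
```

    Because the derivative is evaluated at a particular input, the map is input-dependent; averaging over a set of examples gives a field-wide picture of which inputs the network relies on.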

  16. Visualization of numerically simulated aerodynamic flow fields

    International Nuclear Information System (INIS)

    Hian, Q.L.; Damodaran, M.

    1991-01-01

    The focus of this paper is to describe the development and application of interactive integrated software to visualize numerically simulated aerodynamic flow fields, so as to enable the practitioner of computational fluid dynamics to diagnose the numerical simulation and to elucidate essential flow physics from it. The input to the software is the numerical database computed by a supercomputer, typically consisting of flow variables and computational grid geometry. This flow visualization system (FVS), written in the C language, is targeted at the Personal IRIS Workstations. In order to demonstrate the various visualization modules, the paper also describes the application of this software to visualize two- and three-dimensional flow fields past aerodynamic configurations which have been numerically simulated on the NEC-SXIA Supercomputer. 6 refs

  17. Chess Evolution Visualization.

    Science.gov (United States)

    Lu, Wei-Li; Wang, Yu-Shuen; Lin, Wen-Chieh

    2014-05-01

    We present a chess visualization to convey the changes in a game over successive generations. It contains a score chart, an evolution graph and a chess board, such that users can understand a game from global to local viewpoints. Unlike current graphical chess tools, which focus only on highlighting pieces that are under attack and require sequential investigation, our visualization shows potential outcomes after a piece is moved and indicates how much tactical advantage the player can have over the opponent. Users can first glance at the score chart to roughly obtain the growth and decline of advantages from both sides, and then examine the position relations and the piece placements, to know how the pieces are controlled and how the strategy works. To achieve this visualization, we compute the decision tree using artificial intelligence to analyze a game, in which each node represents a chess position and each edge connects two positions that are one-move different. We then merge nodes representing the same chess position, and shorten branches where nodes on them contain only two neighbors, in order to achieve readability. During the graph rendering, the nodes containing events such as draws, effective checks and checkmates, are highlighted because they show how a game is ended. As a result, our visualization helps players understand a chess game so that they can efficiently learn strategies and tactics. The presented results, evaluations, and the conducted user studies demonstrate the feasibility of our visualization design.
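    The node-merging step can be illustrated with a toy example (the position encodings below are hypothetical stand-ins for real chess positions): keying nodes by their position string turns a decision tree containing transpositions into a smaller, more readable graph:

    ```python
    from collections import defaultdict

    # Hypothetical move list: each edge links a position encoding to the
    # position after one move. The same position reached along two paths
    # ("transposition") becomes a single node, as in the node-merging step.
    edges = [
        ("start", "e4"), ("start", "d4"),
        ("e4", "e4 e5"), ("d4", "d4 d5"),
        ("e4 e5", "transposition"), ("d4 d5", "transposition"),
    ]

    children = defaultdict(set)
    for parent, child in edges:
        children[parent].add(child)  # keying by position string merges duplicates

    n_graph_nodes = len({n for edge in edges for n in edge})
    n_tree_nodes = 1 + len(edges)    # a pure tree would duplicate "transposition"
    print(n_graph_nodes, n_tree_nodes)
    ```

    Merging shrinks the seven tree nodes to six graph nodes here; on real games the reduction is what keeps the evolution graph readable.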

  18. Development of Visualization Tools for ZPPR-15 Analysis

    International Nuclear Information System (INIS)

    Lee, Min Jae; Kim, Sang Ji

    2014-01-01

    ZPPR-15 cores consist of various drawer masters that exhibit great heterogeneity. In order to build a proper homogenization strategy, the geometry of the drawer masters should be carefully analyzed with visualization. Additionally, visualization of the drawer masters and the core configuration is necessary for minimizing human error during input processing. For this purpose, visualization tools for ZPPR-15 analysis have been developed based on a Perl script. In the following section, the implementation of the visualization tools is described, and various visualization samples for both drawer masters and ZPPR-15 cores are demonstrated. Visualization tools for drawer masters and the core configuration were successfully developed for ZPPR-15 analysis. The visualization tools are expected to be useful for understanding the ZPPR-15 experiments and for finding deterministic models of ZPPR-15. It turned out that generating VTK files is straightforward, and the application of VTK files is powerful with the aid of the VISIT program

  19. The effect of combined sensory and semantic components on audio-visual speech perception in older adults

    Directory of Open Access Journals (Sweden)

    Corrina eMaguinness

    2011-12-01

    Full Text Available Previous studies have found that perception in older people benefits from multisensory over uni-sensory information. As normal speech recognition is affected by both the auditory input and the visual lip movements of the speaker, we investigated the efficiency of audio-visual integration in an older population by manipulating the relative reliability of the auditory and visual information in speech. We also investigated the role of the semantic context of the sentence to assess whether audio-visual integration is affected by top-down semantic processing. We presented participants with audio-visual sentences in which the visual component was either blurred or not blurred. We found that there was a greater cost in recall performance for semantically meaningless speech in the audio-visual blur condition compared to the audio-visual no-blur condition, and this effect was specific to the older group. Our findings have implications for understanding how aging affects efficient multisensory integration for the perception of speech, and suggest that multisensory inputs may benefit speech perception in older adults when the semantic content of the speech is unpredictable.

  20. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that the regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended to the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended to the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  1. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    Science.gov (United States)

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between the auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than those in retinally driven superficial SC layers and earlier than those in deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
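    Imaginary coherence, the analysis used above to rule out volume conduction, discards the zero-phase-lag component of coupling, since instantaneous (volume-conducted) signals contribute only to the real part of the coherency. A minimal sketch with synthetic signals (a scipy-based illustration, not the authors' pipeline):

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)

    # Two synthetic "LFPs": a shared 20 Hz rhythm with a 90-degree phase lag
    # (a genuinely lagged interaction) plus independent noise in each signal.
    sc_lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
    ic_lfp = np.sin(2 * np.pi * 20 * t - np.pi / 2) + 0.5 * rng.standard_normal(t.size)

    f, Pxy = csd(sc_lfp, ic_lfp, fs=fs, nperseg=1000)
    _, Pxx = welch(sc_lfp, fs=fs, nperseg=1000)
    _, Pyy = welch(ic_lfp, fs=fs, nperseg=1000)

    coherency = Pxy / np.sqrt(Pxx * Pyy)
    imag_coh = np.abs(coherency.imag)  # zero for purely zero-lag (volume-conducted) coupling

    peak_freq = f[np.argmax(imag_coh)]
    print(peak_freq)  # peaks near the lagged 20 Hz rhythm
    ```

    A zero-lag copy of one signal added to the other would raise ordinary coherence but leave the imaginary part untouched, which is why the measure is robust against volume conduction.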

  2. Visualization of the sequence of a couple splitting outside shop

    DEFF Research Database (Denmark)

    2015-01-01

    Visualization of the tracks of a couple walking together before splitting, after which one goes into a shop and the other waits outside. The visualization represents the sequence described in figure 7 in the publication 'Taking the temperature of pedestrian movement in public spaces'.

  3. A visual user interface program, EGSWIN, for EGS4

    International Nuclear Information System (INIS)

    Qiu Rui; Li Junli; Wu Zhen

    2005-01-01

    To overcome the inconvenience and difficulty in using the EGS4 code by novice users, a visual user interface program, called the EGSWIN system, has been developed by the Monte Carlo Research Center of Tsinghua University in China. EGSWIN allows users to run EGS4 for many applications without any user coding. A mixed-language programming technique with Visual C++ and Visual Fortran is used in order to embed both EGS4 and PEGS4 into EGSWIN. The system has the features of visual geometry input, geometry processing, visual definitions of source, scoring and computing parameters, and particle trajectories display. Comparison between the calculated results with EGS4 and EGSWIN, as well as with FLUKA and GEANT, has been made to validate EGSWIN. (author)

  4. Metabolic activity in striate and extrastriate cortex in the hooded rat: contralateral and ipsilateral eye input

    International Nuclear Information System (INIS)

    Thurlow, G.A.; Cooper, R.M.

    1988-01-01

    The extent of changes in glucose metabolism resulting from ipsilateral and contralateral eye activity in the posterior cortex of the hooded rat was demonstrated by means of the C-14 2-deoxyglucose autoradiographic technique. By stimulating one eye with square wave gratings and eliminating efferent activation from the other by means of enucleation or intraocular TTX injection, differences between ipsilaterally and contralaterally based visual activity in the two hemispheres were maximized. Carbon-14 levels in layer IV of autoradiographs of coronal sections were measured and combined across sections to form right and left matrices of posterior cortex metabolic activity. A difference matrix, formed by subtracting the metabolic activity matrix of the cortex contralateral to the stimulated eye from the ipsilateral, depressed matrix, emphasized those parts of the visual cortex that received monocular visual input. The demarcation of striate cortex by means of cholinesterase stain and the examination of autoradiographs from sections cut tangential to the cortical surface aided in the interpretation of the difference matrices. In striate cortex, differences were maximal in the medial monocular portion, and the lateral or binocular portion was shown to be divided metabolically into a far lateral contralaterally dominant strip along the cortical representation of the vertical meridian, and a more medial region of patches of more or less contralaterally dominant binocular input. Lateral peristriate differences were less than those of striate cortex, and regions of greater and lesser monocular input could be distinguished. We did not detect differences between the two hemispheres in either anterior or medial peristriate areas.

  5. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp

    2011-06-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).
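    The per-pixel 2x2 orientation tensor can be sketched as follows: during rasterization, each line segment adds the outer product of its unit direction to every pixel it covers. This is a simplified point-sampled rasterizer for illustration, not the paper's GPU implementation:

    ```python
    import numpy as np

    W = H = 64
    tensor = np.zeros((H, W, 2, 2))  # one 2x2 orientation tensor per pixel

    def splat_segment(p0, p1):
        """Point-sampled rasterization: add the outer product d d^T of the
        segment's unit direction d to every pixel the segment covers.
        The outer product makes d and -d equivalent, so the tensor encodes
        orientation rather than direction."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        d = p1 - p0
        n = max(int(np.ceil(np.abs(d).max())), 1)
        d_unit = d / np.linalg.norm(d)
        outer = np.outer(d_unit, d_unit)
        for s in np.linspace(0.0, 1.0, n + 1):
            x, y = p0 + s * d
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < W and 0 <= yi < H:
                tensor[yi, xi] += outer

    splat_segment((0, 0), (63, 63))   # diagonal line
    splat_segment((0, 32), (63, 32))  # horizontal line crossing it

    # Dominant local orientation = leading eigenvector of the per-pixel tensor;
    # at the crossing pixel both line orientations contribute.
    evals, evecs = np.linalg.eigh(tensor[32, 32])
    dominant = evecs[:, -1]
    ```

    The accumulated field is what the anisotropic diffusion step would then smooth along, producing the dense, coherent orientation visualization described in the abstract.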

  6. Visual coherence for large-scale line-plot visualizations

    KAUST Repository

    Muigg, Philipp; Hadwiger, Markus; Doleisch, Helmut; Grö ller, Eduard M.

    2011-01-01

    Displaying a large number of lines within a limited amount of screen space is a task that is common to many different classes of visualization techniques such as time-series visualizations, parallel coordinates, link-node diagrams, and phase-space diagrams. This paper addresses the challenging problems of cluttering and overdraw inherent to such visualizations. We generate a 2x2 tensor field during line rasterization that encodes the distribution of line orientations through each image pixel. Anisotropic diffusion of a noise texture is then used to generate a dense, coherent visualization of line orientation. In order to represent features of different scales, we employ a multi-resolution representation of the tensor field. The resulting technique can easily be applied to a wide variety of line-based visualizations. We demonstrate this for parallel coordinates, a time-series visualization, and a phase-space diagram. Furthermore, we demonstrate how to integrate a focus+context approach by incorporating a second tensor field. Our approach achieves interactive rendering performance for large data sets containing millions of data items, due to its image-based nature and ease of implementation on GPUs. Simulation results from computational fluid dynamics are used to evaluate the performance and usefulness of the proposed method. © 2011 The Author(s).

  7. Auto Draw from Excel Input Files

    Science.gov (United States)

    Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.

    2011-01-01

    The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag behind the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. More frequent updating of system diagrams can reduce confusion and errors, and is likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.
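    The general approach, regenerating a diagram automatically from tabular design data, can be sketched by converting spreadsheet rows into Graphviz DOT text. The column names and components below are hypothetical; the abstract does not describe the tool's actual format:

    ```python
    import csv, io

    # Hypothetical spreadsheet export: each row names a component and one
    # component it connects to (these column names are assumptions, not the
    # actual format used by the tool).
    sheet = io.StringIO(
        "component,connects_to\n"
        "PowerSupply,Controller\n"
        "Controller,MotorDriver\n"
        "Controller,Sensors\n"
    )

    def to_dot(rows):
        """Turn (component, connects_to) rows into Graphviz DOT text, so the
        diagram can be regenerated whenever the spreadsheet changes."""
        lines = ["digraph design {"]
        for row in csv.DictReader(rows):
            lines.append(f'  "{row["component"]}" -> "{row["connects_to"]}";')
        lines.append("}")
        return "\n".join(lines)

    dot = to_dot(sheet)
    print(dot)
    ```

    Rendering the emitted DOT with any Graphviz tool keeps the drawing in lockstep with the spreadsheet, which is the point the abstract makes about eliminating manual diagram updates.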

  8. Functional and structural comparison of visual lateralization in birds – similar but still different

    Science.gov (United States)

    Ströckens, Felix

    2014-01-01

    Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries vary essentially between both species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processes are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically stronger stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. This analysis supports the view that environmental stimulation affects the balance between hemisphere-specific processing through lateralized interactions of bottom-up and top-down systems.

  9. Functional and structural comparison of visual lateralization in birds – similar but still different

    Directory of Open Access Journals (Sweden)

    Martina eManns

    2014-03-01

    Full Text Available Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries vary essentially between both species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processes are influenced by asymmetric light stimulation: 1. Visuomotor dominance develops within the ontogenetically stronger stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. 2. Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioural asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible.
This analysis supports that environmental stimulation affects the balance between hemispheric-specific processing by lateralized interactions of bottom-up and top

  10. Design issues in the production of hyper-books and visual-books

    Directory of Open Access Journals (Sweden)

    Nadia Catenazzi

    1993-12-01

    Full Text Available This paper describes an ongoing research project in the area of electronic books. After a brief overview of the state of the art in this field, two new forms of electronic book are presented: hyper-books and visual-books. A flexible environment allows them to be produced in a semi-automatic way starting from different sources: electronic texts (as input for hyper-books) and paper books (as input for visual-books). The translation process is driven by the philosophy of preserving the book metaphor in order to guarantee that electronic information is presented in a familiar way. Another important feature of our research is that hyper-books and visual-books are conceived not as isolated objects but as entities within an electronic library, which inherits most of the features of a paper-based library but introduces a number of new properties resulting from its non-physical nature.

  11. An overview of 3D software visualization.

    Science.gov (United States)

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods, and development tools. We also survey some representative tools that support different tasks, such as software maintenance and comprehension, requirements validation, and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.

  12. Sensory Synergy as Environmental Input Integration

    Directory of Open Access Journals (Sweden)

    Fady eAlnajjar

    2015-01-01

    Full Text Available The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with 9 healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.

  13. Sensory synergy as environmental input integration.

    Science.gov (United States)

    Alnajjar, Fady; Itkonen, Matti; Berenz, Vincent; Tournier, Maxime; Nagai, Chikara; Shimoda, Shingo

    2014-01-01

    The development of a method to feed proper environmental inputs back to the central nervous system (CNS) remains one of the challenges in achieving natural movement when part of the body is replaced with an artificial device. Muscle synergies are widely accepted as a biologically plausible interpretation of the neural dynamics between the CNS and the muscular system. Yet the sensorineural dynamics of environmental feedback to the CNS has not been investigated in detail. In this study, we address this issue by exploring the concept of sensory synergy. In contrast to muscle synergy, we hypothesize that sensory synergy plays an essential role in integrating the overall environmental inputs to provide low-dimensional information to the CNS. We assume that sensor synergy and muscle synergy communicate using these low-dimensional signals. To examine our hypothesis, we conducted posture control experiments involving lateral disturbance with nine healthy participants. Proprioceptive information, represented by changes in muscle lengths, was estimated using the musculoskeletal model analysis software SIMM. The changes in muscle lengths were then used to compute sensory synergies. The experimental results indicate that the environmental inputs were translated into two-dimensional signals and used to move the upper limb to the desired position immediately after the lateral disturbance. Participants who showed high skill in posture control were found to be likely to have a strong correlation between sensory and muscle signaling, as well as high coordination between the utilized sensory synergies. These results suggest the importance of integrating environmental inputs into suitable low-dimensional signals before providing them to the CNS. This mechanism should be essential when designing the prosthesis' sensory system to make the controller simpler.
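    One common way to extract such low-dimensional synergies from muscle-length data is principal component analysis; the sketch below uses simulated data and PCA as a stand-in for the study's actual synergy-extraction method:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulated muscle-length changes: 200 samples x 10 muscles, generated
    # from 2 latent signals plus noise (an assumption for illustration).
    n_samples, n_muscles, n_syn = 200, 10, 2
    latent = rng.standard_normal((n_samples, n_syn))
    mixing = rng.standard_normal((n_syn, n_muscles))
    lengths = latent @ mixing + 0.05 * rng.standard_normal((n_samples, n_muscles))

    # PCA via SVD of the mean-centered data
    centered = lengths - lengths.mean(axis=0)
    U, sv, Vt = np.linalg.svd(centered, full_matrices=False)
    var_explained = sv**2 / np.sum(sv**2)

    synergies = Vt[:n_syn]                 # muscle weightings of the two synergies
    activations = centered @ synergies.T   # the low-dimensional signals
    print(round(float(var_explained[:n_syn].sum()), 3))
    ```

    If a couple of components explain most of the variance, the high-dimensional proprioceptive input is well summarized by a few synergy activations, mirroring the two-dimensional signals reported in the abstract.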

  14. Postdictive modulation of visual orientation.

    Directory of Open Access Journals (Sweden)

    Takahiro Kawabe

    Full Text Available The present study investigated how visual orientation is modulated by subsequent orientation inputs. Observers were presented a near-vertical Gabor patch as a target, followed by a left- or right-tilted second Gabor patch as a distracter in the spatial vicinity of the target. The task of the observers was to judge whether the target was right- or left-tilted (Experiment 1) or whether the target was vertical or not (Supplementary experiment). The judgment was biased toward the orientation of the distracter (the postdictive modulation of visual orientation). The judgment bias peaked when the target and distracter were temporally separated by 100 ms, indicating a specific temporal mechanism for this phenomenon. However, when the visibility of the distracter was reduced via backward masking, the judgment bias disappeared. On the other hand, the low-visibility distracter could still cause a simultaneous orientation contrast, indicating that the distracter orientation is still processed in the visual system (Experiment 2). Our results suggest that the postdictive modulation of visual orientation stems from spatiotemporal integration of visual orientation on the basis of a slow feature matching process.

  15. Functional connectivity of visual cortex in the blind follows retinotopic organization principles.

    Science.gov (United States)

    Striem-Amit, Ella; Ovadia-Caro, Smadar; Caramazza, Alfonso; Margulies, Daniel S; Villringer, Arno; Amedi, Amir

    2015-06-01

    Is visual input during critical periods of development crucial for the emergence of the fundamental topographical mapping of the visual cortex? And would this structure be retained throughout life-long blindness or would it fade as a result of plastic, use-based reorganization? We used functional connectivity magnetic resonance imaging based on intrinsic blood oxygen level-dependent fluctuations to investigate whether significant traces of topographical mapping of the visual scene in the form of retinotopic organization, could be found in congenitally blind adults. A group of 11 fully and congenitally blind subjects and 18 sighted controls were studied. The blind demonstrated an intact functional connectivity network structural organization of the three main retinotopic mapping axes: eccentricity (centre-periphery), laterality (left-right), and elevation (upper-lower) throughout the retinotopic cortex extending to high-level ventral and dorsal streams, including characteristic eccentricity biases in face- and house-selective areas. Functional connectivity-based topographic organization in the visual cortex was indistinguishable from the normally sighted retinotopic functional connectivity structure as indicated by clustering analysis, and was found even in participants who did not have a typical retinal development in utero (microphthalmics). While the internal structural organization of the visual cortex was strikingly similar, the blind exhibited profound differences in functional connectivity to other (non-visual) brain regions as compared to the sighted, which were specific to portions of V1. Central V1 was more connected to language areas but peripheral V1 to spatial attention and control networks. 
These findings suggest that current accounts of critical periods and experience-dependent development should be revisited even for primary sensory areas, in that the connectivity basis for visual cortex large-scale topographical organization can develop without any

  16. Activation of Visuomotor Systems during Visually Guided Movements: A Functional MRI Study

    Science.gov (United States)

    Ellermann, Jutta M.; Siegal, Joel D.; Strupp, John P.; Ebner, Timothy J.; Ugurbil, Kâmil

    1998-04-01

    The dorsal stream is a dominant visuomotor pathway that connects the striate and extrastriate cortices to posterior parietal areas. In turn, the posterior parietal areas send projections to the frontal primary motor and premotor areas. This cortical pathway is hypothesized to be involved in the transformation of a visual input into the appropriate motor output. In this study we used functional magnetic resonance imaging (fMRI) of the entire brain to determine the patterns of activation that occurred while subjects performed a visually guided motor task. In nine human subjects, fMRI data were acquired on a 4-T whole-body MR system equipped with a head gradient coil and a birdcage RF coil using a T2*-weighted EPI sequence. Functional activation was determined for three different tasks: (1) a visuomotor task consisting of moving a cursor on a screen with a joystick in relation to various targets, (2) a hand movement task consisting of moving the joystick without visual input, and (3) an eye movement task consisting of moving the eyes alone without visual input. Blood oxygenation level-dependent (BOLD) contrast-based activation maps of each subject were generated using period cross-correlation statistics. Subsequently, each subject's brain was normalized to Talairach coordinates, and the individual maps were compared on a pixel-by-pixel basis. Significantly activated pixels common to at least four out of six subjects were retained to construct the final functional image. The pattern of activation during visually guided movements was consistent with the flow of information from striate and extrastriate visual areas, to the posterior parietal complex, and then to frontal motor areas. The extensive activation of this network and the reproducibility among subjects is consistent with a role for the dorsal stream in transforming visual information into motor behavior. Also extensively activated were the medial and lateral cerebellar structures, implicating the cortico

  17. Two Types of Visual Objects

    Directory of Open Access Journals (Sweden)

    Skrzypulec Błażej

    2015-06-01

    Full Text Available While it is widely accepted that human vision represents objects, it is less clear which of the various philosophical notions of ‘object’ adequately characterizes visual objects. In this paper, I show that within contemporary cognitive psychology visual objects are characterized in two distinct, incompatible ways. On the one hand, models of visual organization describe visual objects in terms of combinations of features, in accordance with the philosophical bundle theories of objects. However, models of visual persistence apply a notion of visual objects that is more similar to that endorsed in philosophical substratum theories. Here I discuss arguments that might show either that only one of the above notions of visual objects is adequate in the context of human vision, or that the category of visual objects is not uniform and contains entities properly characterized by different philosophical conceptions.

  18. Screening important inputs in models with strong interaction properties

    International Nuclear Information System (INIS)

    Saltelli, Andrea; Campolongo, Francesca; Cariboni, Jessica

    2009-01-01

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.
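    As an illustrative sketch (not the authors' code) of the sampling design and the two estimators named above, the following computes Sobol' first-order and Jansen total-order indices for a toy additive model; the function name and the test model are hypothetical.

```python
import numpy as np

def sobol_jansen(f, k, n, rng):
    """Estimate first-order (Sobol') and total (Jansen) sensitivity
    indices for a model f with k inputs from n base samples."""
    A = rng.random((n, k))              # base sample matrix A
    B = rng.random((n, k))              # independent resample matrix B
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(k), np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # A with column i taken from B
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var        # Sobol' first-order
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen total
    return S1, ST

# Toy additive model y = x1 + 2*x2 with x ~ U(0,1): analytically S = (0.2, 0.8)
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
S1, ST = sobol_jansen(f, k=2, n=100_000, rng=np.random.default_rng(1))
```

    For this interaction-free model the first-order and total indices coincide; with strong interactions the two estimators diverge, which is exactly the regime the screening method above targets.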

  19. Screening important inputs in models with strong interaction properties

    Energy Technology Data Exchange (ETDEWEB)

    Saltelli, Andrea [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy); Campolongo, Francesca [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)], E-mail: francesca.campolongo@jrc.it; Cariboni, Jessica [European Commission, Joint Research Centre, 21020 Ispra, Varese (Italy)

    2009-07-15

    We introduce a new method for screening inputs in mathematical or computational models with large numbers of inputs. The method proposed here represents an improvement over the best available practice for this setting when dealing with models having strong interaction effects. When the sample size is sufficiently high the same design can also be used to obtain accurate quantitative estimates of the variance-based sensitivity measures: the same simulations can be used to obtain estimates of the variance-based measures according to the Sobol' and the Jansen formulas. Results demonstrate that Sobol' is more efficient for the computation of the first-order indices, while Jansen performs better for the computation of the total indices.

  20. Proprioceptive versus Visual Control in Autistic Children.

    Science.gov (United States)

    Masterton, B. A.; Biederman, G. B.

    1983-01-01

    The autistic children's presumed preference for proximal over distal sensory input was studied by requiring that "autistic," retarded, and "normal" children (7-15 years old) adapt to lateral displacement of the visual field. Only autistic Ss demonstrated transfer of adaptation to the nonadapted hand, indicating reliance on proprioception rather…

  1. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task

    Directory of Open Access Journals (Sweden)

    Marcel eMertes

    2014-09-01

    Full Text Available Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help acquiring these memories at newly discovered foraging locations where landmarks - salient objects in the vicinity of the goal location - can play an important role in guiding the animal’s homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system is still open. Recently, it could be shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee’s visual pathway provide information about such landmarks during a learning flight and might thus play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight and targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee’s visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that are useful for view-based homing.

  2. Academic Training Lectures | Representing Scientific Communities by Data Visualization | 14-15 March

    CERN Multimedia

    2016-01-01

    Please note that the next series of Academic Training Lectures will take place from 14 to 15 March 2016 and will be given by Dario Rodighiero (EPFL, Lausanne, Switzerland).   Representing Scientific Communities by Data Visualisation (1/2) Monday, 14 March 2016 from 11 a.m. to 12 p.m. https://indico.cern.ch/event/465533/ Representing Scientific Communities by Data Visualisation (2/2) Tuesday, 15 March 2016 from 11 a.m. to 12 p.m. https://indico.cern.ch/event/465534/ at CERN, IT Amphitheatre (31-3-004)  Description: These lectures present research that investigates the representation of communities, and the way to foster their understanding by different audiences. Communities are complex multidimensional entities intrinsically difficult to represent synthetically. The way to represent them is likely to differ depending on the audience considered: governi...

  3. Getting more from visual working memory: Retro-cues enhance retrieval and protect from visual interference.

    Science.gov (United States)

    Souza, Alessandra S; Rerko, Laura; Oberauer, Klaus

    2016-06-01

    Visual working memory (VWM) has a limited capacity. This limitation can be mitigated by the use of focused attention: if attention is drawn to the relevant working memory content before test, performance improves (the so-called retro-cue benefit). This study tests 2 explanations of the retro-cue benefit: (a) Focused attention protects memory representations from interference by visual input at test, and (b) focusing attention enhances retrieval. Across 6 experiments using color recognition and color reproduction tasks, we varied the amount of color interference at test, and the delay between a retrieval cue (i.e., the retro-cue) and the memory test. Retro-cue benefits were larger when the memory test introduced interfering visual stimuli, showing that the retro-cue effect is in part because of protection from visual interference. However, when visual interference was held constant, retro-cue benefits were still obtained whenever the retro-cue enabled retrieval of an object from VWM but delayed response selection. Our results show that accessible information in VWM might be lost in the processes of testing memory because of visual interference and incomplete retrieval. This is not an inevitable state of affairs, though: Focused attention can be used to get the most out of VWM. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.

  5. Adaptive Pulvinar Circuitry Supports Visual Cognition.

    Science.gov (United States)

    Bridge, Holly; Leopold, David A; Bourne, James A

    2016-02-01

    The pulvinar is the largest thalamic nucleus in primates and one of the most mysterious. Endeavors to understand its role in vision have focused on its abundant connections with the visual cortex. While its connectivity mapping in the cortex displays a broad topographic organization, its projections are also marked by considerable convergence and divergence. As a result, the pulvinar is often regarded as a central forebrain hub. Moreover, new evidence suggests that its comparatively modest input from structures such as the retina and superior colliculus may critically shape the functional organization of the visual cortex, particularly during early development. Here we review recent studies that cast fresh light on how the many convergent pathways through the pulvinar contribute to visual cognition. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  7. Flow visualization via partial differential equations

    NARCIS (Netherlands)

    Preusser, T.; Rumpf, M.; Telea, A.C.; Möller, T.; Hamann, B.; Russell, R.D.

    2009-01-01

    The visualization of stationary and time-dependent flow is an important and challenging topic in scientific visualization. Its aim is to represent transport phenomena governed by vector fields in an intuitively understandable way. In this paper, we review the use of methods based on partial

  8. TART input manual

    International Nuclear Information System (INIS)

    Kimlinger, J.R.; Plechaty, E.F.

    1982-01-01

    The TART code is a Monte Carlo neutron/photon transport code that runs only on the CRAY computer. All the input cards for the TART code are listed, and definitions for all input parameters are given. The execution and limitations of the code are described, and input for two sample problems is given

  9. Cortical and Subcortical Coordination of Visual Spatial Attention Revealed by Simultaneous EEG-fMRI Recording.

    Science.gov (United States)

    Green, Jessica J; Boehler, Carsten N; Roberts, Kenneth C; Chen, Ling-Chia; Krebs, Ruth M; Song, Allen W; Woldorff, Marty G

    2017-08-16

    Visual spatial attention has been studied in humans with both electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) individually. However, due to the intrinsic limitations of each of these methods used alone, our understanding of the systems-level mechanisms underlying attentional control remains limited. Here, we examined trial-to-trial covariations of concurrently recorded EEG and fMRI in a cued visual spatial attention task in humans, which allowed delineation of both the generators and modulators of the cue-triggered event-related oscillatory brain activity underlying attentional control function. The fMRI activity in visual cortical regions contralateral to the cued direction of attention covaried positively with occipital gamma-band EEG, consistent with activation of cortical regions representing attended locations in space. In contrast, fMRI activity in ipsilateral visual cortical regions covaried inversely with occipital alpha-band oscillations, consistent with attention-related suppression of the irrelevant hemispace. Moreover, the pulvinar nucleus of the thalamus covaried with both of these spatially specific, attention-related, oscillatory EEG modulations. Because the pulvinar's neuroanatomical geometry makes it unlikely to be a direct generator of the scalp-recorded EEG, these covariational patterns appear to reflect the pulvinar's role as a regulatory control structure, sending spatially specific signals to modulate visual cortex excitability proactively. Together, these combined EEG/fMRI results illuminate the dynamically interacting cortical and subcortical processes underlying spatial attention, providing important insight not realizable using either method alone. SIGNIFICANCE STATEMENT Noninvasive recordings of changes in the brain's blood flow using functional magnetic resonance imaging and electrical activity using electroencephalography in humans have individually shown that shifting attention to a location in space
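    The trial-to-trial covariation analysis at the heart of this record can be illustrated with a minimal sketch: for each brain region, single-trial fMRI amplitudes are correlated across trials with single-trial EEG band power. The function name and all data below are hypothetical stand-ins, not the study's pipeline.

```python
import numpy as np

def trial_covariation(fmri, eeg_power):
    """fmri: (n_regions, n_trials) single-trial amplitudes;
    eeg_power: (n_trials,) single-trial EEG band power.
    Returns one Pearson r per region."""
    x = (eeg_power - eeg_power.mean()) / eeg_power.std()
    y = fmri - fmri.mean(axis=1, keepdims=True)
    y /= y.std(axis=1, keepdims=True)
    return (y @ x) / len(x)

# Synthetic demo: one region covaries inversely with alpha power
# (cf. the ipsilateral alpha effect above), one is unrelated.
rng = np.random.default_rng(2)
alpha = rng.normal(size=200)
ipsi = -0.8 * alpha + rng.normal(0, 0.3, 200)
contra = rng.normal(size=200)
r = trial_covariation(np.vstack([ipsi, contra]), alpha)
```

    The sign of r distinguishes positive (gamma-like) from inverse (alpha-like) covariation patterns of the kind the abstract describes.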

  10. Differential effects of visual feedback on subjective visual vertical accuracy and precision.

    Directory of Open Access Journals (Sweden)

    Daniel Bjasch

    Full Text Available The brain constructs an internal estimate of the gravitational vertical by integrating multiple sensory signals. In darkness, systematic head-roll dependent errors in verticality estimates, as measured by the subjective visual vertical (SVV), occur. We hypothesized that visual feedback after each trial results in increased accuracy, as physiological adjustment errors (A-/E-effect) are likely based on central computational mechanisms, and investigated whether such improvements were related to adaptational shifts of perceived vertical or to a higher cognitive strategy. We asked 12 healthy human subjects to adjust a luminous arrow to vertical in various head-roll positions (0 to 120 deg right-ear down, in 15 deg steps). After each adjustment visual feedback was provided (lights on, display of the previous adjustment and of an earth-vertical cross). Control trials consisted of SVV adjustments without feedback. At the head-roll angles with the largest A-effect (90, 105, and 120 deg), errors were reduced significantly, while precision was not significantly (p>0.05) influenced. In seven subjects an additional session with two consecutive blocks (first with, then without visual feedback) was completed at 90, 105 and 120 deg head-roll. In these positions the error reduction produced by the preceding visual-feedback block remained significant over the consecutive 18-24 min post-feedback block, i.e., errors were still significantly (p<0.002) different from the control trials. Eleven out of 12 subjects reported having consciously added a bias to their perceived vertical based on the visual feedback in order to minimize errors. We conclude that improvements of SVV accuracy by visual feedback, which remained effective after removal of feedback for ≥18 min, resulted from a cognitive strategy rather than from adapting the internal estimate of the gravitational vertical.
    The mechanisms behind the SVV therefore remained stable, which is also supported by the fact that SVV precision - depending mostly on otolith input - was not affected by visual

  11. User manual of Visual Balan V. 1.0 Interactive code for water balances and recharge estimation

    International Nuclear Information System (INIS)

    Samper, J.; Huguet, L.; Ares, J.; Garcia, M. A.

    1999-01-01

    This document contains the Users Manual of Visual Balan V1.0, an updated version of Visual Balan V0.0 (Samper et al., 1997). Visual Balan V1.0 performs daily water balances in the soil, the unsaturated zone and the aquifer in a user-friendly environment which facilitates both the input data process and the postprocessing of results. The main inputs of the balance are rainfall and irrigation while the outputs are surface runoff, evapotranspiration, interception, inter flow and groundwater flow. The code evaluates all these components in a sequential manner by starting with rainfall and irrigation, which must be provided by the user, and continuing with interception, surface runoff, evapotranspiration, and potential recharge (water flux crossing the bottom of the soil). This potential recharge is the input to the unsaturated zone where water can flow horizontally as subsurface flow (inter flow) or vertically as percolation into the aquifer. (Author)
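    The sequential balance described above (rainfall in; interception, runoff and evapotranspiration out; excess passed downward as potential recharge) can be sketched as a daily soil-water bucket. The component names follow the abstract, but the parameterization below (loss fractions, field capacity, initial storage) is entirely hypothetical and not taken from the manual.

```python
def daily_balance(rain, interception_frac=0.1, runoff_frac=0.2,
                  et_daily=2.0, field_capacity=100.0):
    """rain: daily rainfall + irrigation (mm).
    Returns daily potential recharge (mm) leaving the soil bottom."""
    soil = 50.0                                # initial soil storage (mm)
    recharge = []
    for p in rain:
        p -= p * interception_frac             # interception loss
        p -= p * runoff_frac                   # surface runoff
        soil = max(soil + p - et_daily, 0.0)   # evapotranspiration
        excess = max(soil - field_capacity, 0.0)
        soil -= excess                         # storage capped at capacity
        recharge.append(excess)                # potential recharge
    return recharge
```

    In Visual Balan the potential recharge then feeds the unsaturated zone, where it is further split into interflow and percolation to the aquifer; that second partition is omitted here for brevity.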

  12. Visualization of Robotic Sensor Data with Augmented Reality

    OpenAIRE

    Thorstensen, Mathias Ciarlo

    2017-01-01

    To understand a robot's intent and behavior, a robot engineer must analyze data at the input and output, but also at all intermediary steps. This might require looking at a specific subset of the system, or a single data node in isolation. A range of different data formats can be used in the systems, and require visualization in different mediums; some are text based, and best visualized in a terminal, while other types must be presented graphically, in 2D or 3D. This often makes understandin...

  13. Creating Effective Data Visualizations - Lecture 1

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    In this course I aim to give an overview of data visualisation as a field, including many of the important theoretical groundings in data visualization. We will explore the different ways of representing visual information, and the strengths/weaknesses of those approaches. Using real-world case studies, I will demonstrate techniques and best practices for visualizing complex multi-dimensional data common to high energy physics and other fields.

  14. Creating Effective Data Visualizations - Lecture 2

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    In this course I aim to give an overview of data visualisation as a field, including many of the important theoretical groundings in data visualization. We will explore the different ways of representing visual information, and the strengths/weaknesses of those approaches. Using real-world case studies, I will demonstrate techniques and best practices for visualizing complex multi-dimensional data common to high energy physics and other fields.

  15. Spontaneously emerging cortical representations of visual attributes

    Science.gov (United States)

    Kenet, Tal; Bibitchkov, Dmitri; Tsodyks, Misha; Grinvald, Amiram; Arieli, Amos

    2003-10-01

    Spontaneous cortical activity-ongoing activity in the absence of intentional sensory input-has been studied extensively, using methods ranging from EEG (electroencephalography), through voltage sensitive dye imaging, down to recordings from single neurons. Ongoing cortical activity has been shown to play a critical role in development, and must also be essential for processing sensory perception, because it modulates stimulus-evoked activity, and is correlated with behaviour. Yet its role in the processing of external information and its relationship to internal representations of sensory attributes remains unknown. Using voltage sensitive dye imaging, we previously established a close link between ongoing activity in the visual cortex of anaesthetized cats and the spontaneous firing of a single neuron. Here we report that such activity encompasses a set of dynamically switching cortical states, many of which correspond closely to orientation maps. When such an orientation state emerged spontaneously, it spanned several hypercolumns and was often followed by a state corresponding to a proximal orientation. We suggest that dynamically switching cortical states could represent the brain's internal context, and therefore reflect or influence memory, perception and behaviour.
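    The core map-matching idea, comparing each spontaneous activity frame against the evoked orientation maps by spatial correlation and labeling it with the best-matching orientation, can be sketched as follows. The maps and frames here are synthetic toy patterns, not imaging data.

```python
import numpy as np

def best_matching_map(frame, maps):
    """frame: (n_pixels,) activity pattern;
    maps: dict orientation -> (n_pixels,) evoked orientation map.
    Returns the orientation whose map correlates best with the frame."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(maps, key=lambda ori: corr(frame, maps[ori]))

# Toy maps: sinusoidal patterns whose phase encodes orientation
rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 256)
maps = {ori: np.cos(x + np.deg2rad(2 * ori)) for ori in (0, 45, 90, 135)}
frame = maps[90] + rng.normal(0, 0.3, 256)   # noisy "spontaneous" frame
```

    A spontaneously emerging state is then simply a frame whose best-match correlation is high even though no stimulus was presented.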

  16. Unraveling The Connectome: Visualizing and Abstracting Large-Scale Connectomics Data

    KAUST Repository

    Al-Awami, Ali K.

    2017-04-30

    We explore visualization and abstraction approaches to represent neuronal data. Neuroscientists acquire electron microscopy volumes to reconstruct a complete wiring diagram of the neurons in the brain, called the connectome. This will be crucial to understanding brains and their development. However, the resulting data is complex and large, posing a big challenge to existing visualization techniques in terms of clarity and scalability. We describe solutions to tackle the problems of scalability and cluttered presentation. We first show how a query-guided interactive approach to visual exploration can reduce the clutter and help neuroscientists explore their data dynamically. We use a knowledge-based query algebra that facilitates the interactive creation of queries. This allows neuroscientists to pose domain-specific questions related to their research. Simple queries can be combined to form complex queries to answer more sophisticated questions. We then show how visual abstractions from 3D to 2D can significantly reduce the visual clutter and add clarity to the visualization so that scientists can focus more on the analysis. We abstract the topology of 3D neurons into a multi-scale, relative distance-preserving subway map visualization that allows scientists to interactively explore the morphological and connectivity features of neuronal cells. We then focus on the process of acquisition, where neuroscientists segment electron microscopy images to reconstruct neurons. The segmentation process of such data is tedious, time-intensive, and usually performed using a diverse set of tools. We present a novel web-based visualization system for tracking the state, progress, and evolution of segmentation data in neuroscience. Our multi-user system seamlessly integrates a diverse set of tools. Our system provides support for the management, provenance, accountability, and auditing of large-scale segmentations. Finally, we present a novel architecture to render very large

  17. The strength of attentional biases reduces as visual short-term memory load increases.

    Science.gov (United States)

    Shimi, A; Astle, D E

    2013-07-01

    Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas.

  18. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    Science.gov (United States)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
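    The two-dimensional mapping idea from this record can be sketched with plain PCA via NumPy's SVD; this is only one of several reduction techniques the project may use, and the function name and synthetic data are hypothetical.

```python
import numpy as np

def project_2d(X):
    """X: (n_samples, n_features). Returns (n_samples, 2) projection
    onto the two directions of greatest variance (PCA)."""
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # top-2 principal axes

# Synthetic 10-D data whose variance lives mostly in two latent directions
rng = np.random.default_rng(4)
latent = rng.normal(size=(300, 2)) * [5.0, 3.0]
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + rng.normal(0, 0.1, (300, 10))
Y = project_2d(X)                                 # now plottable in the plane
```

    Plotting Y colored by the network's output variable gives the kind of visual check on influential inputs that the abstract describes.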

  19. A Biophysical Neural Model To Describe Spatial Visual Attention

    International Nuclear Information System (INIS)

    Hugues, Etienne; Jose, Jorge V.

    2008-01-01

    Visual scenes carry enormous amounts of spatial and temporal information, which is transduced into neural spike trains. Psychophysical experiments indicate that only a small portion of a spatial image is consciously accessible. Electrophysiological experiments in behaving monkeys have revealed a number of modulations of the neural activity in a visual area known as V4 when the animal is paying attention directly towards a particular stimulus location. The nature of the attentional input to V4, however, remains unknown, as do the mechanisms responsible for these modulations. We use a biophysical neural network model of V4 to address these issues. We first constrain our model to reproduce the experimental results obtained for different external stimulus configurations in the absence of attention. To reproduce the known neuronal response variability, we found that the neurons should receive about equal, or balanced, levels of excitatory and inhibitory inputs, at levels as high as those found in in vivo conditions. Next we consider attentional inputs that can induce and reproduce the observed spiking modulations. We also elucidate the role played by the neural network in generating these modulations.

  20. Disruption of visual awareness during the attentional blink is reflected by selective disruption of late-stage neural processing

    Science.gov (United States)

    Harris, Joseph A.; McMahon, Alex R.; Woldorff, Marty G.

    2015-01-01

    Any information represented in the brain holds the potential to influence behavior. It is therefore of broad interest to determine the extent and quality of neural processing of stimulus input that occurs with and without awareness. The attentional blink is a useful tool for dissociating neural and behavioral measures of perceptual visual processing across conditions of awareness. The extent of higher-order visual information beyond basic sensory signaling that is processed during the attentional blink remains controversial. To determine what neural processing at the level of visual-object identification occurs in the absence of awareness, electrophysiological responses to images of faces and houses were recorded both within and outside of the attentional blink period during a rapid serial visual presentation (RSVP) stream. Electrophysiological results were sorted according to behavioral performance (correctly identified targets versus missed targets) within these blink and non-blink periods. An early index of face-specific processing (the N170, 140–220 ms post-stimulus) was observed regardless of whether the subject demonstrated awareness of the stimulus, whereas a later face-specific effect with the same topographic distribution (500–700 ms post-stimulus) was only seen for accurate behavioral discrimination of the stimulus content. The present findings suggest a multi-stage process of object-category processing, with only the later phase being associated with explicit visual awareness. PMID:23859644

  1. Visual bias in subjective assessments of automotive sounds

    DEFF Research Database (Denmark)

    Ellermeier, Wolfgang; Legarth, Søren Vase

    2006-01-01

    In order to evaluate how strong the influence of visual input on sound quality evaluation may be, a naive sample of 20 participants was asked to judge interior automotive sound recordings while simultaneously being exposed to pictures of cars. Twenty-two recordings of second-gear acceleration...

  2. How model and input uncertainty impact maize yield simulations in West Africa

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties, however, exist not only in the model design and model parameters, but also, and maybe even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models’ response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models’ ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of soil, climate and management information influences the simulated crop yields in both models. However, the differences between models are larger than those between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g. the investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  3. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This brochure describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow and is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  4. High Performance Interactive System Dynamics Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Bush, Brian W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Duckworth, Jonathan C [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-14

    This presentation describes a system dynamics (SD) simulation framework that supports an end-to-end analysis workflow and is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  5. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodologically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards methodologically sound estimations of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. From these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
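    The search procedure this record describes (sample many temporal redistributions of a fixed precipitation volume, route each through a rainfall-runoff model, and keep the distribution that yields the highest peak discharge) can be sketched as follows. This is a toy illustration only: the linear-reservoir runoff model and all parameter values are invented stand-ins for the authors' hydrological model.

    ```python
    import random

    def peak_discharge(precip_pattern, k=0.3):
        """Route an hourly precipitation pattern (mm) through a toy
        linear-reservoir runoff model and return the peak discharge."""
        storage, peak = 0.0, 0.0
        for p in precip_pattern:
            storage += p
            q = k * storage          # outflow proportional to storage
            storage -= q
            peak = max(peak, q)
        return peak

    def worst_case(total_rain_mm=100.0, duration_h=24, n_trials=10_000, seed=1):
        """Monte Carlo search: redistribute a fixed rain volume in time and
        keep the pattern that produces the highest peak discharge."""
        rng = random.Random(seed)
        best_peak, best_pattern = -1.0, None
        for _ in range(n_trials):
            weights = [rng.random() for _ in range(duration_h)]
            s = sum(weights)
            pattern = [total_rain_mm * w / s for w in weights]  # same volume, new timing
            q = peak_discharge(pattern)
            if q > best_peak:
                best_peak, best_pattern = q, pattern
        return best_peak, best_pattern
    ```

    Extending the sketch to spatio-temporal patterns would redistribute the volume over sub-catchments as well as hours.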

  6. Representing the Habsburg-Lorraine Dynasty in music, visual media and architecture, c. 1618-1918

    Czech Academy of Sciences Publication Activity Database

    Krummholz, Martin

    2015-01-01

    Roč. 27, 1/2 (2015), s. 32-33 ISSN 0862-612X Institutional support: RVO:68378033 Keywords : Habsburg-Lorraine dynasty * visual media * architecture * conference Subject RIV: AL - Art, Architecture, Cultural Heritage

  7. Visual Electricity Demonstrator

    Science.gov (United States)

    Lincoln, James

    2017-09-01

    The Visual Electricity Demonstrator (VED) is a linear diode array that serves as a dynamic alternative to an ammeter. A string of 48 red light-emitting diodes (LEDs) blink one after another to create the illusion of a moving current. Having the current represented visually builds an intuitive and qualitative understanding about what is happening in a circuit. In this article, I describe several activities for this device and explain how using this technology in the classroom can enhance the understanding and appreciation of physics.

  8. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from responding most strongly to visual input to responding most strongly to auditory input. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in the associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  9. Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.

    Science.gov (United States)

    Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel

    2015-08-15

    When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs consisted of gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21 Hz), the power of GBA (50-80 Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    Science.gov (United States)

    Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko

    2014-01-01

    The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. 
We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations.
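    The recruitment rule described above (a sufficiently large mismatch between the expected input and the current input triggers the allocation of a new category node) resembles adaptive-resonance-style prototype learning. A minimal sketch, assuming cosine similarity as the match measure and a simple incremental prototype update; this is not the authors' compartmental network:

    ```python
    import math

    def recruit_categories(inputs, threshold=0.8, lr=0.2):
        """Grow category prototypes on demand: if the best-matching prototype
        is too dissimilar from the input (large expectation mismatch), recruit
        a new category node; otherwise adapt the winner toward the input."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        prototypes = []
        for x in inputs:
            sims = [cosine(p, x) for p in prototypes]
            if not prototypes or max(sims) < threshold:
                prototypes.append(list(x))          # recruit a new (sub)category
            else:
                w = prototypes[sims.index(max(sims))]
                for i, xi in enumerate(x):          # move the winner toward the input
                    w[i] += lr * (xi - w[i])
        return prototypes
    ```

    Raising the threshold makes the mismatch criterion stricter, which splits coarse categories into finer subcategories.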

  11. Adaptive learning in a compartmental model of visual cortex - how feedback enables stable category learning and refinement

    Directory of Open Access Journals (Sweden)

    Georg Layher

    2014-12-01

    Full Text Available The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, but both belong to the category of felines. In other words, tigers and leopards are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of (sub-)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.

  12. Wearing weighted backpack dilates subjective visual duration: The role of functional linkage between weight experience and visual timing

    Directory of Open Access Journals (Sweden)

    Lina Jia

    2015-09-01

    Full Text Available Bodily state plays a critical role in our perception. In the present study, we asked whether and how the bodily experience of weight influences time perception. Participants judged durations of a picture (a backpack or a trolley bag) presented on the screen, while wearing backpacks of different weights or no backpack. The results showed that the subjective duration of the backpack picture was dilated when participants wore a medium-weighted backpack relative to an empty backpack or no backpack, regardless of the identity (e.g., color) of the visual backpack. However, the duration dilation was not manifested for the picture of the trolley bag. These findings suggest that weight experience modulates visual duration estimation through the linkage between the worn backpack and the to-be-estimated visual target. The congruent action affordance between the worn backpack and visual inputs plays a critical role in the functional linkage between inner experience and time perception. We interpret our findings within the framework of embodied time perception.

  13. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to Read in Computer Aided Design (CAD) Files

    International Nuclear Information System (INIS)

    Randolph Schwarz; Leland L. Carter; Alysia Schwarz

    2005-01-01

    Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry

  14. Cognitive performance in visual memory and attention are influenced by many factors

    DEFF Research Database (Denmark)

    Wilms, Inge Linda; Nielsen, Simon

    Visual perception serves as the basis for much of the higher level cognitive processing as well as human activity in general. Here we present normative estimates for the following components of visual perception: the visual perceptual threshold, the visual short-term memory capacity and the visual...... perceptual encoding/decoding speed (processing speed) of visual short-term memory based on an assessment of 94 healthy subjects aged 60-75. The estimates are presented at total sample level as well as at gender level. The estimates were modelled from input from a whole-report assessment based on A Theory...... speed of Visual Short-term Memory (VSTM) but not the capacity of VSTM nor the visual threshold. The estimates will be useful for future studies into the effects of various types of intervention and training on cognition in general and visual attention in particular. (...

  15. BlockLogo: Visualization of peptide and sequence motif conservation

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian

    2013-01-01

    BlockLogo is a web-server application for the visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs using calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, se...

  16. The case of the missing visual details: Occlusion and long-term visual memory.

    Science.gov (United States)

    Williams, Carrick C; Burkle, Kyle A

    2017-10-01

    To investigate the critical information in long-term visual memory representations of objects, we used occlusion to emphasize 1 type of information or another. By occluding 1 solid side of the object (e.g., top 50%) or by occluding 50% of the object with stripes (like a picket fence), we emphasized visible information about the object, processing the visible details in the former and the object's overall form in the latter. On a token discrimination test, surprisingly, memory for solid or stripe occluded objects at either encoding (Experiment 1) or test (Experiment 2) was the same. In contrast, when occluded objects matched at encoding and test (Experiment 3) or when the occlusion shifted, revealing the entire object piecemeal (Experiment 4), memory was better for solid compared with stripe occluded objects, indicating that objects are represented differently in long-term visual memory. Critically, we also found that when the task emphasized remembering exactly what was shown, memory performance in the more detailed solid occlusion condition exceeded that in the stripe condition (Experiment 5). However, when the task emphasized the whole object form, memory was better in the stripe condition (Experiment 6) than in the solid condition. We argue that long-term visual memory can represent objects flexibly, and task demands can interact with visual information, allowing the viewer to cope with changing real-world visual environments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    data from very little to very detailed information, and compare the models' abilities to represent the spatial variability and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data is less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data, it is difficult to capture the complexity and diversity in maize cropping systems.

  18. Input-output supervisor

    International Nuclear Information System (INIS)

    Dupuy, R.

    1970-01-01

    The input-output supervisor is the program which monitors the flow of information between core storage and the peripheral equipment of a computer. This work is composed of three parts: 1 - Study of a generalized input-output supervisor. With simple modifications, it resembles most input-output supervisors currently running on computers. 2 - Application of this theory to a magnetic drum. 3 - Hardware requirements for time-sharing. (author) [fr

  19. Response of the Black Sea methane budget to massive short-term submarine inputs of methane

    DEFF Research Database (Denmark)

    Schmale, O.; Haeckel, M.; McGinnis, D. F.

    2011-01-01

    A steady state box model was developed to estimate the methane input into the Black Sea water column at various water depths. Our model results reveal a total input of methane of 4.7 Tg yr(-1). The model predicts that the input of methane is largest at water depths between 600 and 700 m (7...% of the total input), suggesting that the dissociation of methane gas hydrates at water depths equivalent to their upper stability limit may represent an important source of methane into the water column. In addition we discuss the effects of massive short-term methane inputs (e. g. through eruptions of deep-water mud volcanoes or submarine landslides at intermediate water depths) on the water column methane distribution and the resulting methane emission to the atmosphere. Our non-steady state simulations predict that these inputs will be effectively buffered by intense microbial methane consumption...

  20. Visual dictionaries as intermediate features in the human brain

    Directory of Open Access Journals (Sweden)

    Kandan Ramakrishnan

    2015-01-01

    Full Text Available The human visual system is assumed to transform low-level visual features to object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both these computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and the models have been proven effective in automatic object and scene recognition. These models however differ in the computation of visual dictionaries and pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2 and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity compared with HMAX. Furthermore, visual dictionary representations by HMAX and BoW explain a significant amount of brain activity in higher areas which are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to represent neural responses in low- and intermediate-level visual areas of the brain more faithfully.
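    A Bag of Words model in this sense builds a visual dictionary by clustering local descriptors and then pools each image into a histogram of nearest dictionary entries. A toy sketch follows (plain k-means on raw 2-D points; real pipelines cluster SIFT-like descriptors and use far larger dictionaries):

    ```python
    import numpy as np

    def kmeans(X, k, rng, n_iter=50):
        """Plain Lloyd's k-means: the cluster centers form the 'visual dictionary'."""
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(0)
        return centers

    def bow_histogram(descriptors, centers):
        """Pool an image's local descriptors into a normalized histogram
        of nearest visual words (the BoW representation)."""
        d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        words = d.argmin(1)
        h = np.bincount(words, minlength=len(centers)).astype(float)
        return h / h.sum()
    ```

    The resulting fixed-length histogram is what a classifier (or, as in this record, a voxel-wise regression) consumes, regardless of how many descriptors each image contributes.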

  1. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    Science.gov (United States)

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Visual communication - Information and fidelity. [of images

    Science.gov (United States)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur; Reichenbach, Stephen E.

    1993-01-01

    This assessment of visual communication deals with image gathering, coding, and restoration as a whole rather than as separate and independent tasks. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image. Past applications of these criteria to the assessment of image coding and restoration have been limited to the link that connects the output of the image-gathering device to the input of the image-display device. By contrast, the approach presented in this paper explicitly includes the critical limiting factors that constrain image gathering and display. This extension leads to an end-to-end assessment theory of visual communication that combines optical design with digital processing.

  3. VQABQ: Visual Question Answering by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-03-19

    Taking an image and a question as input, our method outputs a text-based answer to the query question about the given image; this task is called Visual Question Answering (VQA). There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and outputs basic questions of the main given question. The second module takes the main question, the image, and these basic questions as input and outputs the text-based answer to the main question. We formulate the basic question generation problem as a LASSO optimization problem and propose a criterion for how to exploit these basic questions to help answer the main question. Our method is evaluated on the challenging VQA dataset and yields state-of-the-art accuracy, 60.34%, in the open-ended task.

  4. VQABQ: Visual Question Answering by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Taking an image and a question as input, our method outputs a text-based answer to the query question about the given image; this task is called Visual Question Answering (VQA). There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and outputs basic questions of the main given question. The second module takes the main question, the image, and these basic questions as input and outputs the text-based answer to the main question. We formulate the basic question generation problem as a LASSO optimization problem and propose a criterion for how to exploit these basic questions to help answer the main question. Our method is evaluated on the challenging VQA dataset and yields state-of-the-art accuracy, 60.34%, in the open-ended task.
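    The LASSO formulation in these records can be illustrated as follows: stack basic-question embeddings as columns of a matrix B and find a sparse weight vector w such that Bw approximates the main-question embedding q; nonzero weights select the relevant basic questions. The solver below is a generic ISTA (proximal gradient) implementation, and the embeddings in the test are random stand-ins; the authors' actual features and solver are not specified here.

    ```python
    import numpy as np

    def lasso_weights(B, q, lam=0.1, n_iter=500, step=None):
        """Solve min_w 0.5*||B w - q||^2 + lam*||w||_1 by ISTA
        (gradient descent plus soft-thresholding).
        B: (d, n) matrix whose columns are basic-question embeddings;
        q: (d,) embedding of the main question."""
        if step is None:
            step = 1.0 / np.linalg.norm(B, 2) ** 2   # 1 / Lipschitz constant
        w = np.zeros(B.shape[1])
        for _ in range(n_iter):
            grad = B.T @ (B @ w - q)                  # gradient of the quadratic term
            z = w - step * grad
            w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # L1 prox
        return w
    ```

    Increasing `lam` drives more weights exactly to zero, i.e., fewer basic questions are kept.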

  5. Enhancing Assisted Living Technology with Extended Visual Memory

    Directory of Open Access Journals (Sweden)

    Joo-Hwee Lim

    2011-05-01

    Full Text Available Human vision and memory are powerful cognitive faculties by which we understand the world. However, they are imperfect and, further, subject to deterioration with age. We propose a cognitive-inspired computational model, Extended Visual Memory (EVM), within the Computer-Aided Vision (CAV) framework, to assist humans in vision-related tasks. We exploit wearable sensors such as cameras, GPS and ambient computing facilities to complement a user's vision and memory functions by answering four types of queries central to visual activities, namely, Retrieval, Understanding, Navigation and Search. Learning of EVM relies on both frequency-based and attention-driven mechanisms to store view-based visual fragments (VF), which are abstracted into high-level visual schemas (VS), both in the visual long-term memory. During inference, the visual short-term memory plays a key role in visual similarity computation between the input (or its schematic representation) and VF, exemplified from VS when necessary. We present an assisted living scenario, termed EViMAL (Extended Visual Memory for Assisted Living), targeted at patients with mild dementia, to provide novel functions such as hazard-warning, visual reminders, object look-up and event review. We envisage that EVM has potential benefits in alleviating memory loss, improving recall precision and enhancing memory capacity through external support.

  6. Visualizing Mobility of Public Transportation System.

    Science.gov (United States)

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.

  7. Sensitivity Analysis of Input Parameters for a Dynamic Food Chain Model DYNACON

    International Nuclear Information System (INIS)

    Hwang, Won Tae; Lee, Geun Chang; Han, Moon Hee; Cho, Gyu Seong

    2000-01-01

    The sensitivity analysis of input parameters for the dynamic food chain model DYNACON was conducted as a function of deposition data for the long-lived radionuclides (137Cs, 90Sr). Also, the influence of input parameters on the short- and long-term contamination of selected foodstuffs (cereals, leafy vegetables, milk) was investigated. The input parameters were sampled using the LHS technique, and their sensitivity indices were represented as partial rank correlation coefficients (PRCC). The sensitivity index was strongly dependent on the contamination period as well as the deposition data. In case of deposition during the growing stages of plants, the input parameters associated with contamination by foliar absorption were relatively important in long-term as well as short-term contamination. They were also important in short-term contamination in case of deposition during the non-growing stages. In long-term contamination, the influence of input parameters associated with foliar absorption decreased, while the influence of input parameters associated with root uptake increased. These phenomena were more remarkable in case of deposition during the non-growing stages than the growing stages, and in case of 90Sr deposition than 137Cs deposition. In case of deposition during the growing stages of pasture, the input parameters associated with the characteristics of cattle, such as the feed-milk transfer factor and the daily intake rate of cattle, were relatively important in the contamination of milk.
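    The sampling and sensitivity machinery named in this record is generic: Latin hypercube sampling (LHS) stratifies each parameter's range so every stratum is sampled exactly once, and the PRCC of a parameter is the correlation between its rank-transformed values and the rank-transformed model output after regressing out all other parameters. A minimal sketch under those standard definitions (the example model in the test is invented, not DYNACON):

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_params, rng):
        """Stratified LHS on [0, 1): one sample per stratum for each parameter."""
        u = np.empty((n_samples, n_params))
        for j in range(n_params):
            u[:, j] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
        return u

    def prcc(X, y):
        """Partial rank correlation coefficient of each input column with y:
        rank-transform everything, regress out the other inputs, then
        correlate the residuals."""
        def ranks(a):
            r = np.empty(len(a))
            r[np.argsort(a)] = np.arange(len(a))
            return r
        Xr = np.column_stack([ranks(X[:, j]) for j in range(X.shape[1])])
        yr = ranks(y)
        out = []
        for j in range(X.shape[1]):
            others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
            bx, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
            by, *_ = np.linalg.lstsq(others, yr, rcond=None)
            rx, ry = Xr[:, j] - others @ bx, yr - others @ by
            out.append(float(np.corrcoef(rx, ry)[0, 1]))
        return out
    ```

    PRCC values near +1 or -1 indicate a strong monotone influence of that parameter on the output; values near 0 indicate little influence once the other parameters are accounted for.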

  8. Perceptual learning improves visual performance in juvenile amblyopia.

    Science.gov (United States)

    Li, Roger W; Young, Karen G; Hoenig, Pia; Levi, Dennis M

    2005-09-01

    To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
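    The parsing of improvement into equivalent input noise and efficiency has a standard algebraic form in the equivalent-noise (linear amplifier) framework, where squared threshold rises linearly with external noise variance, T^2 = (sigma_eq^2 + sigma_ext^2) / E. The study's position-averaging model may differ in detail, so this is an illustrative sketch only:

```python
def decompose_thresholds(t0_sq, tn_sq, sigma_ext_sq):
    """Recover an efficiency-like gain E and equivalent input noise
    sigma_eq^2 from squared thresholds measured at zero external noise
    (t0_sq) and at external noise variance sigma_ext_sq (tn_sq),
    assuming T^2 = (sigma_eq^2 + sigma_ext^2) / E."""
    E = sigma_ext_sq / (tn_sq - t0_sq)   # slope of T^2 vs noise gives E
    sigma_eq_sq = E * t0_sq              # intercept gives internal noise
    return E, sigma_eq_sq
```

Under this reading, learning that lowers the zero-noise threshold without changing the high-noise threshold shows up as reduced equivalent noise, while a uniform drop at all noise levels shows up as increased efficiency.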

  9. Perceptual Training Strongly Improves Visual Motion Perception in Schizophrenia

    Science.gov (United States)

    Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue

    2011-01-01

    Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…

  10. Universities’ visual image and Internet communication

    OpenAIRE

    Okushova Gulnafist; Stakhovskaya Yuliya; Sharaev Pavel

    2016-01-01

    Universities of the 21st century are built on digital walls and on the Internet foundation. Their "real virtuality" (in M. Castells' term) is represented by information and communication flows that reflect various areas: education, research, culture, leisure, and others. The visual image of a university is the bridge that connects its physical and digital reality and identifies it within the information flow on the Internet. Visual image identification on the Internet and the function that the visual...

  11. Visual monitoring of reproduction in dairy herds

    DEFF Research Database (Denmark)

    Thysen, Iver; Enevoldsen, Carsten

    1994-01-01

    Two complementary approaches to produce visual information from reproduction records are described and exemplified. The Event Display shows all reproductive events, over a year, for all cows in a herd, by symbols placed in an array with columns representing calendar weeks and rows representing individual cows. The Reproduction Monitor consists of graphs of insemination and pregnancy rates evaluated weekly with a Bayesian technique. These visual monitoring tools are well suited to explore temporal variation in reproductive performance, and they provide a quick overview of herd performance…

  12. Effects of periodontal afferent inputs on corticomotor excitability in humans

    DEFF Research Database (Denmark)

    Zhang, Y; Boudreau, S; Wang, M

    2010-01-01

    … for the first dorsal interosseous (FDI) as an internal control. Burning pain intensity and mechanical sensitivity ratings to a von Frey filament applied to the application site were recorded on an electronic visual analogue scale (VAS). All subjects reported a decreased mechanical sensitivity (ANOVA: P = 0…) …-injection for the LA (ANOVAs: P > 0.22) or capsaicin (ANOVAs: P > 0.16) sessions. These findings suggest that a transient loss or perturbation in periodontal afferent input to the brain from a single incisor is insufficient to cause changes in corticomotor excitability of the face MI, as measured by TMS in humans.

  13. User manual of Visual Balan V. 1.0: interactive code for water balances and recharge estimation; Manual del usuario del programa Visual Balan V. 1.0. Codigo interactivo para la realizacion de balances hidrologicos y la estimacion de la recarga

    Energy Technology Data Exchange (ETDEWEB)

    Samper, J.; Huguer, L.; Ares, J.; Garcia, M. A. [Universidad de La Coruna (Spain)

    1999-07-01

    This document contains the User's Manual of Visual Balan V1.0, an updated version of Visual Balan V0.0 (Samper et al., 1997). Visual Balan V1.0 performs daily water balances in the soil, the unsaturated zone and the aquifer in a user-friendly environment which facilitates both the input data process and the postprocessing of results. The main inputs of the balance are rainfall and irrigation, while the outputs are surface runoff, evapotranspiration, interception, interflow and groundwater flow. The code evaluates all these components in a sequential manner, starting with rainfall and irrigation, which must be provided by the user, and continuing with interception, surface runoff, evapotranspiration, and potential recharge (the water flux crossing the bottom of the soil). This potential recharge is the input to the unsaturated zone, where water can flow horizontally as subsurface flow (interflow) or vertically as percolation into the aquifer. (Author)
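    The sequential evaluation described above can be sketched as a daily bucket model. The partitioning fractions and the storage rule below are invented for illustration and are not Visual Balan's actual formulations:

```python
def daily_water_balance(rain, irrigation, et_potential, soil, soil_max,
                        interception_frac=0.05, runoff_frac=0.10,
                        interflow_frac=0.30):
    """One day of a sequential water balance: precipitation -> interception
    -> surface runoff -> evapotranspiration -> potential recharge, which the
    unsaturated zone splits into interflow and percolation (all coefficients
    hypothetical)."""
    gross = rain + irrigation
    interception = interception_frac * gross
    net = gross - interception
    runoff = runoff_frac * net
    soil += net - runoff                     # infiltration into the soil store
    et = min(et_potential, soil)             # actual evapotranspiration
    soil -= et
    recharge = max(0.0, soil - soil_max)     # potential recharge leaves the soil
    soil = min(soil, soil_max)
    interflow = interflow_frac * recharge    # horizontal subsurface flow
    percolation = recharge - interflow       # vertical flow to the aquifer
    return soil, {"interception": interception, "runoff": runoff, "et": et,
                  "interflow": interflow, "percolation": percolation}
```

Mass closure holds by construction: each day's inputs equal the summed outputs plus the change in soil storage, which is the property a balance code of this kind must preserve.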

  14. Speakers of different languages process the visual world differently.

    Science.gov (United States)

    Chabal, Sarah; Marian, Viorica

    2015-06-01

    Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed. (c) 2015 APA, all rights reserved.

  15. Protein expression of MEF2C during the critical period for visual development in vervet monkeys

    OpenAIRE

    Bernad, Daniel M; Lachance, Pascal E; Chaudhuri, Avijit

    2008-01-01

    During the early development of the visual cortex, there is a critical period when neuronal connections are highly sensitive to changes in visual input. Deprivation of visual stimuli during the critical period elicits robust anatomical and physiological rearrangements in the monkey visual cortex and serves as an excellent model for activity-dependent neuroplasticity. DNA microarray experiments were previously performed in our lab to analyze gene expression patterns in area V1 of vervet monkey...

  16. Recurrent V1-V2 interaction in early visual boundary processing.

    Science.gov (United States)

    Neumann, H; Sepp, W

    1999-11-01

    A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours.
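    The loop the model describes, feedforward matching against contour templates followed by a modulatory top-down gain on consistent measurements, can be caricatured numerically. The template, gain value, and unit count below are invented:

```python
import numpy as np

def recurrent_step(v1, templates, gain=0.5):
    """One feedforward/feedback cycle: a V2-like stage matches V1 activity
    against contour-shape templates; the result feeds back as a
    multiplicative gain on the consistent V1 measurements, followed by
    normalization."""
    v2 = np.maximum(0.0, templates @ v1)   # V2: template matching
    feedback = templates.T @ v2            # top-down prediction
    v1 = v1 * (1.0 + gain * feedback)      # modulatory enhancement
    return v1 / np.linalg.norm(v1), v2

# one "collinear contour" template over 4 orientation-selective units;
# units 0-1 lie on the contour, units 2-3 carry unrelated activity
templates = np.array([[1.0, 1.0, 0.0, 0.0]]) / np.sqrt(2.0)
v1 = np.array([0.6, 0.6, 0.5, 0.5])
v1 = v1 / np.linalg.norm(v1)
for _ in range(5):
    v1, v2 = recurrent_step(v1, templates)
```

After a few cycles the contour-consistent units dominate the normalized activity, which is the gain-control effect the model attributes to the descending pathway.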

  17. Comparing Jupiter and Saturn: dimensionless input rates from plasma sources within the magnetosphere

    Directory of Open Access Journals (Sweden)

    V. M. Vasyliūnas

    2008-06-01

    The quantitative significance for a planetary magnetosphere of plasma sources associated with a moon of the planet can be assessed only by expressing the plasma mass input rate in dimensionless form, as the ratio of the actual mass input to some reference value. Traditionally, the solar wind mass flux through an area equal to the cross-section of the magnetosphere has been used. Here I identify another reference value of mass input, independent of the solar wind and constructed from planetary parameters alone, which can be shown to represent a mass input sufficiently large to prevent corotation already at the source location. The source rate from Enceladus at Saturn has been reported to be an order of magnitude smaller (in absolute numbers) than that from Io at Jupiter. Both reference values, however, are also smaller at Saturn than at Jupiter, by factors ~40 to 60; expressed in dimensionless form, the estimated mass input from Enceladus may be larger than that from Io by factors ~4 to 6. The magnetosphere of Saturn may thus, despite a lower mass input in kg s⁻¹, intrinsically be more heavily mass-loaded than the magnetosphere of Jupiter.

  19. Graph-based clustering and data visualization algorithms

    CERN Document Server

    Vathy-Fogarassy, Ágnes

    2013-01-01

    This work presents a data visualization technique that combines graph-based topology representation and dimensionality reduction methods to visualize the intrinsic data structure in a low-dimensional vector space. The application of graphs in clustering and visualization has several advantages. A graph of important edges (where edges characterize relations and weights represent similarities or distances) provides a compact representation of the entire complex data set. This text describes clustering and visualization methods that are able to utilize information hidden in these graphs, based on

  20. Visual working memory is more tolerant than visual long-term memory.

    Science.gov (United States)

    Schurgin, Mark W; Flombaum, Jonathan I

    2018-05-07

    Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power: identifying changes, not sameness. There are good reasons to expect that VLTM is more tolerant; functionally, recognition over the long-term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for thinking that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experiment trial, participants saw two objects, memory for one tested immediately (VWM) and later for the other (VLTM). VWM performance was better than VLTM and remained robust despite the introduction of image and object variability. In contrast, VLTM performance suffered linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. SEISMIC RISK CARTOGRAPHIC VISUALIZATION FOR CRISIS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Nina I. Frolova

    2014-01-01

    Earthquake loss estimations before future events and following strong earthquakes in emergency mode, and their corresponding visualization, are extremely important for proper decisions on preventive measures and effective response in order to save lives and property. The paper addresses the methodological issues of seismic risk and vulnerability assessment and mapping with GIS technology. Requirements for simulation models and databases used at different levels, as well as ways of visualization oriented toward Emergency Management Agencies and federal and local authorities, are discussed. Examples of mapping at different levels (global, country, regional and urban) are given, and the influence of input data uncertainties on the reliability of loss computations is analyzed.

  2. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    Directory of Open Access Journals (Sweden)

    Marie-Eve Laramée

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  3. Congenital Anophthalmia and Binocular Neonatal Enucleation Differently Affect the Proteome of Primary and Secondary Visual Cortices in Mice.

    Science.gov (United States)

    Laramée, Marie-Eve; Smolders, Katrien; Hu, Tjing-Tjing; Bronchti, Gilles; Boire, Denis; Arckens, Lutgarde

    2016-01-01

    In blind individuals, visually deprived occipital areas are activated by non-visual stimuli. The extent of this cross-modal activation depends on the age at onset of blindness. Cross-modal inputs have access to several anatomical pathways to reactivate deprived visual areas. Ectopic cross-modal subcortical connections have been shown in anophthalmic animals but not in animals deprived of sight at a later age. Direct and indirect cross-modal cortical connections toward visual areas could also be involved, yet the number of neurons implicated is similar between blind mice and sighted controls. Changes at the axon terminal, dendritic spine or synaptic level are therefore expected upon loss of visual inputs. Here, the proteome of V1, V2M and V2L from P0-enucleated, anophthalmic and sighted mice, sharing a common genetic background (C57BL/6J x ZRDCT/An), was investigated by 2-D DIGE and Western analyses to identify molecular adaptations to enucleation and/or anophthalmia. Few proteins were differentially expressed in enucleated or anophthalmic mice in comparison to sighted mice. The loss of sight affected three pathways: metabolism, synaptic transmission and morphogenesis. Most changes were detected in V1, followed by V2M. Overall, cross-modal adaptations could be promoted in both models of early blindness but not through the exact same molecular strategy. A lower metabolic activity observed in visual areas of blind mice suggests that even if cross-modal inputs reactivate visual areas, they could remain suboptimally processed.

  4. Modification to the Monte Carlo N-Particle (MCNP) Visual Editor (MCNPVised) to read in Computer Aided Design (CAD) files

    International Nuclear Information System (INIS)

    Schwarz, Randy A.; Carter, Leeland L.

    2004-01-01

    Monte Carlo N-Particle Transport Code (MCNP) (Reference 1) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle (References 2 to 11) is recognized internationally as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant enhanced the capabilities of the MCNP Visual Editor to allow it to read in a 2D Computer Aided Design (CAD) file, allowing the user to modify and view the 2D CAD file and then electronically generate a valid MCNP input geometry with a user specified axial extent

  5. The impact of visual gaze direction on auditory object tracking.

    Science.gov (United States)

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.

  6. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Manchester visual query language

    Science.gov (United States)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

    We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database, in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
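    The abstract's set-in/set-out composition can be illustrated with a tiny pipeline. The stage functions and feature records here are hypothetical stand-ins, not actual MVQL syntax:

```python
def pipeline(objects, *stages):
    """Apply each query 'line' in turn; every stage maps a set of objects
    to another, usually smaller, set, as each MVQL line does."""
    for stage in stages:
        objects = stage(objects)
    return objects

# toy image-feature records standing in for stored feature-analysis data
features = [{"type": "edge", "score": s} for s in (0.9, 0.4, 0.7, 0.2)]

result = pipeline(
    features,
    lambda objs: [o for o in objs if o["score"] > 0.5],    # select by evidence
    lambda objs: sorted(objs, key=lambda o: -o["score"]),  # rank, high first
)
```

The Hough-group operator described in the abstract fits the same shape: it consumes a set of candidate objects plus a voting rule and emits a ranked set of high-scoring bins.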

  8. Collective form generation through visual participatory representation

    DEFF Research Database (Denmark)

    Day, Dennis; Sharma, Nishant; Punekar, Ravi

    2012-01-01

    In order to inspire and inform designers with users' data from participatory research, it may be important to represent data in a visual format that is easily understandable to the designers. For a case study in vehicle design, the paper outlines visual representation of data and the use...

  9. The role of pulvinar in the transmission of information in the visual hierarchy.

    Science.gov (United States)

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in the visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast or no contrast dependence of the response at the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input whose width codes for the contrast. This input is applied to the first area. The ratio of output activity for different contrast values is analyzed at the last level to assess contrast sensitivity and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections from and to a parallel pulvinar-like structure
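    The transmission problem the abstract raises can be reproduced with a toy cascade: repeatedly passing activity through a saturating non-linearity drives the responses to different contrasts toward the same fixed point, so contrast sensitivity collapses at the top of the hierarchy. The gain value and sigmoid below are arbitrary choices, not the paper's exact model, which integrates spatial input patterns:

```python
import numpy as np

def propagate(contrast_drive, levels=10, gain=4.0):
    """Pass an input level through a ten-stage hierarchy with a sigmoidal
    transfer at each stage, returning the response at every level."""
    r = contrast_drive
    responses = []
    for _ in range(levels):
        r = 1.0 / (1.0 + np.exp(-gain * r))  # non-linear transmission
        responses.append(r)
    return responses

low = propagate(0.3)   # low-contrast drive
high = propagate(0.7)  # high-contrast drive
```

The first stage still separates the two contrasts clearly, but by the tenth stage both responses have converged, illustrating why an external shortcut such as the pulvinar is an attractive way to preserve contrast information.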

  10. Visual and intelligent transients and accidents analyzer based on thermal-hydraulic system code

    International Nuclear Information System (INIS)

    Meng Lin; Rui Hu; Yun Su; Ronghua Zhang; Yanhua Yang

    2005-01-01

    Full text of publication follows: Many thermal-hydraulic system codes were developed in the past twenty years, such as RELAP5, RETRAN, ATHLET, etc. Because of their general and advanced features in thermal-hydraulic computation, they are widely used in the world to analyze transients and accidents. But most of these original thermal-hydraulic system codes share the following disadvantages. Firstly, because models are built through input decks, the input files are complex and non-figurative, and the style of input decks varies between users and models. Secondly, results are presented as off-line data files, which is not convenient for analysts, who pay more attention to the trends and changes of dynamic parameters. Thirdly, these original thermal-hydraulic system codes offer few interfaces with other programs, which restricts their extension. The subject of this paper is to develop a powerful analyzer based on these thermal-hydraulic system codes to analyze transients and accidents more simply, accurately and rapidly. Firstly, modeling is visual and intelligent: users build the thermal-hydraulic system model from component objects according to their needs, without having to face raw input decks. The style of the input decks created automatically by the analyzer is unified and can easily be understood by other people. Secondly, parameters of interest to the analyst can be displayed, or even changed, dynamically. Thirdly, the analyzer provides an interface between the thermal-hydraulic system code and other programs, so that parallel computation between them becomes possible. In conclusion, through visual and intelligent methods, the analyzer based on general and advanced thermal-hydraulic system codes can be used to analyze transients and accidents more effectively. The main purpose of this paper is to present developmental activities, assessment and application results of the visual and intelligent

  11. Visualization and Analysis-Oriented Reconstruction of Material Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Childs, Henry R.

    2010-03-05

    Reconstructing boundaries along material interfaces from volume fractions is a difficult problem, especially because the under-resolved nature of the input data allows for many correct interpretations. Worse, algorithms widely accepted as appropriate for simulation are inappropriate for visualization. In this paper, we describe a new algorithm that is specifically intended for reconstructing material interfaces for visualization and analysis requirements. The algorithm performs well with respect to memory footprint and execution time, has desirable properties in various accuracy metrics, and also produces smooth surfaces with few artifacts, even when faced with more than two materials per cell.

  12. Encoding model of temporal processing in human visual cortex.

    Science.gov (United States)

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominately with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Different from the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations in millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
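    A drastically simplified sketch of the two-channel idea, with hypothetical impulse responses: the sustained channel tracks the stimulus, the transient channel rectifies its temporal derivative. The actual model's channel dynamics are richer and are followed by a hemodynamic stage:

```python
import numpy as np

def two_channel_prediction(stim, w_sustained, w_transient):
    """Predict a neural timecourse as a weighted sum of a sustained channel
    (follows the stimulus) and a transient channel (responds to onsets and
    offsets via a rectified temporal derivative)."""
    sustained = stim.astype(float)
    transient = np.abs(np.diff(stim, prepend=0.0))
    return w_sustained * sustained + w_transient * transient

stim = np.r_[np.zeros(5), np.ones(10), np.zeros(5)]  # boxcar stimulus
neural = two_channel_prediction(stim, w_sustained=0.5, w_transient=1.0)
```

Fitting the two weights per voxel is what lets the approach attribute a region's response to sustained versus transient processing: a transient-dominated region responds mainly at stimulus onsets and offsets, a sustained-dominated one throughout the stimulus.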

  13. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requires […] it as input for basic collexeme collostructional analysis (Stefanowitsch & Gries 2003) in Gries' (2007) program. ColloInputGenerator is, in its current state, based on programming commands introduced in Gries (2009). Projected updates: generation of complete work-ready frequency lists.
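
    As a rough sketch of the kind of file such a generator produces (the function name, column layout, and example lexemes here are hypothetical, not taken from the actual tool): a tab-separated frequency list of lexemes occurring in a construction, with an empty column for corpus frequencies to be filled in by hand.

```python
from collections import Counter

def make_collo_input(lexemes, out_path):
    """Write a tab-separated frequency list of lexemes observed in a
    construction, with a blank column for manually added corpus
    frequencies (a sketch of the kind of file ColloInputGenerator
    produces; the original tool's exact layout may differ)."""
    freqs = Counter(lexemes)
    lines = ["WORD\tCONSTRUCTION_FREQ\tCORPUS_FREQ"]
    for word, n in freqs.most_common():
        lines.append(f"{word}\t{n}\t")          # corpus freq left blank
    text = "\n".join(lines)
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(text)
    return text

# e.g. verbs observed in some construction of interest
table = make_collo_input(["talk", "force", "talk", "trick"], "collo_input.txt")
```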

  14. Can explicit visual feedback of postural sway efface the effects of sensory manipulations on mediolateral balance performance?

    OpenAIRE

    Cofre Lizama, L.E.; Pijnappels, M.A.G.M.; Reeves, N.P.; Verschueren, S.M.; van Dieen, J.H.

    2016-01-01

    Explicit visual feedback on postural sway is often used in balance assessment and training. However, up-weighting of visual information may mask impairments of other sensory systems. We therefore aimed to determine whether the effects of somatosensory, vestibular, and proprioceptive manipulations on mediolateral balance are reduced by explicit visual feedback on mediolateral sway of the body center of mass and by the presence of visual information. We manipulated sensory inputs of the somatos...

  15. Examining the Effect of Age on Visual-Vestibular Self-Motion Perception Using a Driving Paradigm.

    Science.gov (United States)

    Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L

    2017-05-01

    Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual-vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual-vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.

  16. Engineering visualization utilizing advanced animation

    Science.gov (United States)

    Sabionski, Gunter R.; Robinson, Thomas L., Jr.

    1989-01-01

    Engineering visualization is the use of computer graphics to depict engineering analysis and simulation in visual form from project planning through documentation. Graphics displays let engineers see data represented dynamically which permits the quick evaluation of results. The current state of graphics hardware and software generally allows the creation of two types of 3D graphics. The use of animated video as an engineering visualization tool is presented. The engineering, animation, and videography aspects of animated video production are each discussed. Specific issues include the integration of staffing expertise, hardware, software, and the various production processes. A detailed explanation of the animation process reveals the capabilities of this unique engineering visualization method. Automation of animation and video production processes are covered and future directions are proposed.

  17. The Persuasive Power of Data Visualization.

    Science.gov (United States)

    Pandey, Anshul Vikram; Manivannan, Anjali; Nov, Oded; Satterthwaite, Margaret; Bertini, Enrico

    2014-12-01

    Data visualization has been used extensively to inform users. However, little research has examined the effects of data visualization in influencing users or in making a message more persuasive. In this study, we present experimental research to fill this gap and offer an evidence-based analysis of persuasive visualization. We built on persuasion research from the psychology and user-interface literatures in order to explore the persuasive effects of visualization. In this experimental study we define the circumstances under which data visualization can make a message more persuasive, propose hypotheses, and perform quantitative and qualitative analyses on studies conducted to test these hypotheses. We compare visual treatments in which data are presented through bar charts and line charts with treatments in which data are presented through tables, and then evaluate their persuasiveness. The findings represent a first step in exploring the effectiveness of persuasive visualization.

  18. DLNE: A hybridization of deep learning and neuroevolution for visual control

    DEFF Research Database (Denmark)

    Poulsen, Andreas Precht; Thorhauge, Mark; Funch, Mikkel Hvilshj

    2017-01-01

    This paper investigates the potential of combining deep learning and neuroevolution to create a bot for a simple first person shooter (FPS) game capable of aiming and shooting based on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. Two types of feature representations are evaluated in terms of (1) how precisely they allow the deep network to recognize the position of the enemy, (2) their effect on evolution, and (3) how well they allow the deep network and evolved network to interface with each other. Overall, the results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks […]

  19. Visual Orientation and Directional Selectivity through Thalamic Synchrony

    Science.gov (United States)

    Stanley, Garrett B.; Jin, Jianzhong; Wang, Yushi; Desbordes, Gaëlle; Wang, Qi; Black, Michael J.; Alonso, Jose-Manuel

    2012-01-01

    Thalamic neurons respond to visual scenes by generating synchronous spike trains on the timescale of 10 – 20 ms that are very effective at driving cortical targets. Here we demonstrate that this synchronous activity contains unexpectedly rich information about fundamental properties of visual stimuli. We report that the occurrence of synchronous firing of cat thalamic cells with highly overlapping receptive fields is strongly sensitive to the orientation and the direction of motion of the visual stimulus. We show that this stimulus selectivity is robust, remaining relatively unchanged under different contrasts and temporal frequencies (stimulus velocities). A computational analysis based on an integrate-and-fire model of the direct thalamic input to a layer 4 cortical cell reveals a strong correlation between the degree of thalamic synchrony and the nonlinear relationship between cortical membrane potential and the resultant firing rate. Together, these findings suggest a novel population code in the synchronous firing of neurons in the early visual pathway that could serve as the substrate for establishing cortical representations of the visual scene. PMID:22745507
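
    A minimal sketch of why thalamic synchrony drives a cortical target harder, using a generic leaky integrate-and-fire neuron (the parameters and input statistics below are illustrative assumptions, not the paper's fitted model): synchronous input spikes summate within the membrane time constant and therefore produce more output spikes than the same number of asynchronous spikes.

```python
import numpy as np

def lif_output_spikes(input_spike_times, t_max=1.0, dt=0.001,
                      tau_m=0.01, w=0.3, v_thresh=1.0):
    """Leaky integrate-and-fire cell driven by a list of presynaptic
    spike-time arrays; returns the number of output spikes."""
    n_steps = int(t_max / dt)
    drive = np.zeros(n_steps)
    for train in input_spike_times:             # each train adds its weight
        idx = (np.asarray(train) / dt).astype(int)
        drive[idx[idx < n_steps]] += w
    v, n_out = 0.0, 0
    for i in range(n_steps):
        v += -v * (dt / tau_m) + drive[i]       # Euler leak + input
        if v >= v_thresh:
            n_out += 1
            v = 0.0                              # reset after spike
    return n_out

rng = np.random.default_rng(0)
base = np.sort(rng.uniform(0, 1, 40))
sync = [base, base, base, base]                              # 4 synchronous cells
async_ = [np.sort(rng.uniform(0, 1, 40)) for _ in range(4)]  # 4 independent cells
```

    With these settings, the four synchronized trains reliably cross threshold together while the independent trains mostly leak away, which is the qualitative effect the abstract describes.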

  20. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of this network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters, which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies are established and relations between them are formed. The newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented and it is concluded that the input data preparation method is an effective way of preprocessing neural network data. [Projects of the Ministry of Science of the Republic of Serbia, nos. TR 35005, III 43007 and III 44006]
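
    The preprocessing step described, reducing each category of correlated indicators with principal component analysis before concatenating the results into network input vectors, might be sketched as follows (the data shapes and component counts are hypothetical, not the authors' exact pipeline).

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project observations onto the top principal components
    (centred, SVD-based PCA; a generic sketch of the described
    preprocessing step)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T             # scores on leading PCs

rng = np.random.default_rng(1)
# six hypothetical categories, each with ~70 raw economic indicators
categories = [rng.normal(size=(120, 70)) for _ in range(6)]
# keep e.g. 3 components per category, then concatenate into NN inputs
inputs = np.hstack([pca_reduce(X, 3) for X in categories])
```

    This reduces 6 × 70 = 420 raw indicators to an 18-dimensional input vector per observation while preserving most of the within-category variance, which is the stated goal of the method.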

  1. Karakter Visual Keindonesiaan dalam Iklan Cetak di Indonesia

    Directory of Open Access Journals (Sweden)

    Didit Widiatmoko Suwardikun

    2008-07-01

    Many have tried to explore a unified identity for the character of the Indonesian, drawing on particular tribal cultures to visually represent "Indonesian" in a nation of many cultures. This can clearly be seen in advertisements, whose visuals represent the periods, societal forms, and political and economic situations of their allotted time and place. Visuals in advertisements may thus serve as clues to what has been expressed as "Indonesian", out of the memory of how things were and were done and therefore ought to be done. This study explores visuals from past advertisements to understand the spirit of Indonesia as a nation for the purpose of tomorrow. The study looked into visuals of advertisements from the Dutch colonial era, the Japanese occupation period, the birth of the nation in the 1950s, the New Order (1970s-1990s), and the reform order (2000s), in order to portray the "Indonesian" in terms of the figure, behavior, and attitude of a nation. The paper discusses visuals of the past to model the present and future of the "Indonesian".

  2. VISPA - Visual Physics Analysis on Linux, Mac OS X and Windows

    International Nuclear Information System (INIS)

    Brodski, M.; Erdmann, M.; Fischer, R.; Hinzmann, A.; Klimkovich, T.; Mueller, G.; Muenzer, T.; Steggemann, J.; Winchen, T.

    2009-01-01

    Modern physics analysis is an iterative task consisting of prototyping, executing and verifying the analysis procedure. For supporting scientists in each step of this process, we developed VISPA: a toolkit based on graphical and textual elements for visual physics analysis. Unlike many other analysis frameworks, VISPA runs on Linux, Windows and Mac OS X. VISPA can be used in any experiment with serial data flow. In particular, VISPA can be connected to any high energy physics experiment. Furthermore, data types for use in astroparticle physics have recently been successfully included. An analysis on the data is performed in several steps, each represented by an individual module. While modules, e.g. for file input and output, are already provided, additional modules can be written by the user in C++ or the Python language. From individual modules, the analysis is designed by graphical connections representing the data flow. This modular concept assists the user in fast prototyping of the analysis and improves the reusability of written source code. The execution of the analysis can be performed directly from the GUI, or on any supported computer in batch mode. Therefore the analysis can be transported from the laptop to other machines. The recently improved GUI of VISPA is based on a plug-in mechanism. Besides components for the development and execution of physics analysis, additional plug-ins are available for the visualization of, e.g., the structure of high energy physics events or the properties of cosmic rays in an astroparticle physics analysis. Furthermore, plug-ins have been developed to display and edit configuration files of individual experiments from within the VISPA GUI. (author)

  3. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    Science.gov (United States)

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation in spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and the difficulty of recruiting young children blind from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization were assessed in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed overall more poorly than sighted and low-vision children in all the spatial tasks performed; secondly, low-vision children performed equally to or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity were positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that more residual vision goes with better spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low-vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and of early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  4. Economy-wide material input/output and dematerialization analysis of Jilin Province (China).

    Science.gov (United States)

    Li, MingSheng; Zhang, HuiMin; Li, Zhi; Tong, LianJun

    2010-06-01

    In this paper, both direct material input (DMI) and domestic processed output (DPO) of Jilin Province in 1990-2006 were calculated, and based on these two indexes a dematerialization model was established. The main results are summarized as follows: (1) Both direct material input and domestic processed output increased at a steady rate during 1990-2006, with average annual growth rates of 4.19% and 2.77%, respectively. (2) The average contribution rate of material input to economic growth is 44%, indicating that the economic growth is visibly extensive. (3) During the studied period, the accumulative quantity of material input dematerialization is 11,543 × 10^4 t and the quantity of waste dematerialization is 5,987 × 10^4 t. Moreover, the dematerialization gaps are positive, suggesting that the potential of dematerialization has been well fulfilled. (4) In most years of the analyzed period, especially 2003-2006, the economic system of Jilin Province represents an unsustainable state. The accelerated economic growth relies mostly on excessive resource consumption after the Revitalization Strategy of Northeast China was launched.
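
    The reported average annual growth rates (4.19% for DMI, 2.77% for DPO over 1990-2006) correspond to compound annual growth, which can be computed as below; the start value used here is a made-up illustration, only the rate comes from the abstract.

```python
def avg_annual_growth(start_value, end_value, years):
    """Compound average annual growth rate between two observations."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# a hypothetical DMI series growing at the reported ~4.19%/yr over
# the 16 years from 1990 to 2006
dmi_1990 = 100.0
dmi_2006 = dmi_1990 * (1 + 0.0419) ** 16
rate = avg_annual_growth(dmi_1990, dmi_2006, 16)
```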

  5. Early Visual Cortex as a Multiscale Cognitive Blackboard

    NARCIS (Netherlands)

    Roelfsema, Pieter R.; de Lange, Floris P.

    2016-01-01

    Neurons in early visual cortical areas not only represent incoming visual information but are also engaged by higher level cognitive processes, including attention, working memory, imagery, and decision-making. Are these cognitive effects an epiphenomenon or are they functionally relevant for these cognitive processes? […]

  7. A link between visual disambiguation and visual memory.

    Science.gov (United States)

    Hegdé, Jay; Kersten, Daniel

    2010-11-10

    Sensory information in the retinal image is typically too ambiguous to support visual object recognition by itself. Theories of visual disambiguation posit that to disambiguate, and thus interpret, the incoming images, the visual system must integrate the sensory information with previous knowledge of the visual world. However, the underlying neural mechanisms remain unclear. Using functional magnetic resonance imaging (fMRI) of human subjects, we have found evidence for functional specialization for storing disambiguating information in memory versus interpreting incoming ambiguous images. Subjects viewed two-tone, "Mooney" images, which are typically ambiguous when seen for the first time but are quickly disambiguated after viewing the corresponding unambiguous color images. Activity in one set of regions, including a region in the medial parietal cortex previously reported to play a key role in Mooney image disambiguation, closely reflected memory for previously seen color images but not the subsequent disambiguation of Mooney images. A second set of regions, including the superior temporal sulcus, showed the opposite pattern, in that their responses closely reflected the subjects' percepts of the disambiguated Mooney images on a stimulus-to-stimulus basis but not the memory of the corresponding color images. Functional connectivity between the two sets of regions was stronger during those trials in which the disambiguated percept was stronger. This functional interaction between brain regions that specialize in storing disambiguating information in memory versus interpreting incoming ambiguous images may represent a general mechanism by which previous knowledge disambiguates visual sensory information.

  8. Visual perception and imagery: a new molecular hypothesis.

    Science.gov (United States)

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. The long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated […]

  9. Excessive sensitivity to uncertain visual input in L-dopa induced dyskinesias in Parkinson’s disease: further implications for cerebellar involvement

    Directory of Open Access Journals (Sweden)

    James eStevenson

    2014-02-01

    When faced with visual uncertainty during motor performance, humans rely more on predictive forward models and proprioception and attribute lesser importance to the ambiguous visual feedback. Though disrupted predictive control is typical of patients with cerebellar disease, the sensorimotor deficits associated with the involuntary and often unconscious nature of L-dopa-induced dyskinesias in Parkinson's disease (PD) suggest that dyskinetic subjects may also demonstrate impaired predictive motor control. Methods: We investigated the motor performance of 9 dyskinetic and 10 non-dyskinetic PD subjects on and off L-dopa, and of 10 age-matched control subjects, during a large-amplitude, overlearned, visually guided tracking task. Ambiguous visual feedback was introduced by adding 'jitter' to a moving target that followed a Lissajous pattern. Root mean square (RMS) tracking error was calculated, and ANOVA, robust multivariate linear regression and linear dynamical system analyses were used to determine the contribution of speed and ambiguity to tracking performance. Results: Increasing target ambiguity and speed contributed significantly more to the RMS error of dyskinetic subjects off medication. L-dopa improved the RMS tracking performance of both PD groups. At higher speeds, controls and PD subjects without dyskinesia were able to effectively de-weight ambiguous visual information. Conclusions: PD subjects' visually guided motor performance degrades with visual jitter and speed of movement to a greater degree than that of age-matched controls. However, there are fundamental differences between PD subjects with and without dyskinesia: subjects without dyskinesia are generally slow and less responsive to dynamic changes in motor task requirements, whereas in subjects with dyskinesia there was a trade-off between overall performance and inappropriate reliance on ambiguous visual feedback. This is likely associated with functional changes in posterior parietal-ponto-cerebellar pathways.
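
    The RMS tracking-error measure used here is straightforward to compute; a sketch with a jittered Lissajous target follows (the amplitudes, jitter level, and sampling are illustrative assumptions, not the study's stimulus parameters).

```python
import numpy as np

def rms_tracking_error(target, response):
    """Root-mean-square distance between target and response paths,
    each given as an (n, 2) array of x-y positions."""
    return np.sqrt(np.mean(np.sum((target - response) ** 2, axis=1)))

def lissajous(t, a=3, b=2, jitter_sd=0.0, rng=None):
    """Lissajous target path, optionally with positional 'jitter'
    (hypothetical frequency ratio and noise level)."""
    xy = np.column_stack([np.sin(a * t), np.sin(b * t)])
    if jitter_sd > 0:
        xy += (rng or np.random.default_rng()).normal(0, jitter_sd, xy.shape)
    return xy

t = np.linspace(0, 2 * np.pi, 500)
clean = lissajous(t)
rng = np.random.default_rng(2)
jittered = lissajous(t, jitter_sd=0.1, rng=rng)
# a subject who tracks the underlying clean path is scored against the
# jittered target actually shown on screen
err = rms_tracking_error(jittered, clean)
```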

  10. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    Science.gov (United States)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking, crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations that appeared to be viable archaeological sites.
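
    One simple way to filter inherently noisy crowdsourced markings of the kind described, though far simpler than the paper's visual-analytics pipeline, is to grid the geotags and keep only the cells where several contributors agree:

```python
from collections import Counter

def consensus_cells(markings, cell_size=0.01, min_votes=3):
    """Aggregate noisy crowdsourced (lat, lon) markings into grid cells
    and keep only cells marked by at least `min_votes` contributors
    (a basic voting filter; cell size and vote threshold are
    hypothetical tuning parameters)."""
    votes = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in markings
    )
    return {cell for cell, n in votes.items() if n >= min_votes}

# three users agree on one site; two stray clicks land elsewhere
marks = [(43.001, 103.002), (43.002, 103.001), (43.003, 103.003),
         (41.500, 100.000), (44.250, 105.750)]
sites = consensus_cells(marks, cell_size=0.01, min_votes=3)
```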

  11. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461

  12. Data-Driven Visualization and Group Analysis of Multichannel EEG Coherence with Functional Units

    NARCIS (Netherlands)

    Caat, Michael ten; Maurits, Natasha M.; Roerdink, Jos B.T.M.

    2008-01-01

    A typical data-driven visualization of electroencephalography (EEG) coherence is a graph layout, with vertices representing electrodes and edges representing significant coherences between electrode signals. A drawback of this layout is its visual clutter for multichannel EEG. To reduce clutter, […]

  13. Octopus vulgaris uses visual information to determine the location of its arm.

    Science.gov (United States)

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Input-output linearizing tracking control of induction machine with the included magnetic saturation

    DEFF Research Database (Denmark)

    Dolinar, Drago; Ljusev, Petar; Stumberger, Gorazd

    2003-01-01

    The tracking control design of an induction motor, based on input-output linearisation with magnetic saturation included, is addressed. The magnetic saturation is represented by a nonlinear magnetising curve for the iron core and is used in the control, the observer of the state variables, and the load torque estimator. An input-output linearising control is used to achieve better tracking performance. It is based on the mixed 'stator current - rotor flux linkage' induction motor model with magnetic saturation considered in the stationary reference frame. Experimental results show that the proposed input-output linearising tracking control with saturation included behaves considerably better than the one without saturation, and that it introduces smaller position and speed errors and better motor stiffness, at the cost of increased computational complexity.

  15. Visualizing Sound Directivity via Smartphone Sensors

    OpenAIRE

    Hawley, Scott H.; McClain Jr, Robert E.

    2017-01-01

    We present a fast, simple method for automated data acquisition and visualization of sound directivity, made convenient and accessible via a smartphone app, "Polar Pattern Plotter." The app synchronizes measurements of sound volume with the phone's angular orientation obtained from either compass, gyroscope or accelerometer sensors and produces a graph and exportable data file. It is generalizable to various sound sources and receivers via the use of an input-jack-adaptor to supplant the smar...
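
    The app's core computation, pairing sound-level samples with angular orientation and averaging them into a polar pattern, might look roughly like this (the bin width and the synthetic cardioid-like source are assumptions for illustration, not the Polar Pattern Plotter implementation):

```python
import math
from collections import defaultdict

def polar_pattern(samples, bin_deg=10):
    """Average (angle_deg, level) readings into angular bins, as an app
    pairing microphone level with compass heading might. Returns a
    mapping from bin start angle to mean level."""
    sums = defaultdict(lambda: [0.0, 0])
    for angle_deg, level in samples:
        b = int(angle_deg % 360) // bin_deg * bin_deg
        sums[b][0] += level
        sums[b][1] += 1
    return {b: s / n for b, (s, n) in sorted(sums.items())}

# synthetic cardioid-like source: level ~ (1 + cos(angle)) / 2,
# sampled every 2 degrees around the source
samples = [(a, (1 + math.cos(math.radians(a))) / 2) for a in range(0, 360, 2)]
pattern = polar_pattern(samples, bin_deg=10)
```

    The resulting bin-to-level mapping is exactly the kind of exportable data file the abstract mentions, ready to be drawn on a polar axis.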

  16. Visual sensation during pecking in pigeons.

    Science.gov (United States)

    Ostheim, J

    1997-10-01

    During the final down-thrust of a pigeon's head, the eyes are closed gradually, a response that was thought to block visual input. This phase of pecking was therefore assumed to be under feed-forward control exclusively. Analysis of high-resolution video recordings showed that visual information collected during the down-thrust of the head could be used for 'on-line' modulations of pecks in progress. We thus concluded that the final down-thrust of the head is not exclusively controlled by feed-forward mechanisms but also by visual feedback components. We could further establish that as a rule the eyes are never closed completely; instead the eyelids form a slit which leaves a part of the pupil uncovered. The width of the slit between the pigeon's eyelids is highly sensitive to both ambient luminance and the visual background against which seeds are offered. It was concluded that eyelid slits increase the focal depth of retinal images under extreme near-field viewing conditions. Applying pharmacological methods, we could confirm that pupil size and eyelid slit width are controlled through conjoint neuronal mechanisms. This shared neuronal network is particularly sensitive to drugs that affect dopamine receptors.

  17. Diversity and wiring variability of visual local neurons in the Drosophila medulla M6 stratum.

    Science.gov (United States)

    Chin, An-Lun; Lin, Chih-Yung; Fu, Tsai-Feng; Dickson, Barry J; Chiang, Ann-Shyn

    2014-12-01

    Local neurons in the vertebrate retina are instrumental in transforming visual inputs to extract contrast, motion, and color information and in shaping bipolar-to-ganglion cell transmission to the brain. In Drosophila, UV vision is represented by R7 inner photoreceptor neurons that project to the medulla M6 stratum, with relatively little known of this downstream substrate. Here, using R7 terminals as references, we generated a 3D volume model of the M6 stratum, which revealed a retinotopic map for UV representations. Using this volume model as a common 3D framework, we compiled and analyzed the spatial distributions of more than 200 single M6-specific local neurons (M6-LNs). Based on the segregation of putative dendrites and axons, these local neurons were classified into two families, directional and nondirectional. Neurotransmitter immunostaining suggested a signal routing model in which some visual information is relayed by directional M6-LNs from the anterior to the posterior M6 and all visual information is inhibited by a diverse population of nondirectional M6-LNs covering the entire M6 stratum. Our findings suggest that the Drosophila medulla M6 stratum contains diverse LNs that form repeating functional modules similar to those found in the vertebrate inner plexiform layer. © 2014 Wiley Periodicals, Inc.

  18. How Is the Serial Order of a Visual Sequence Represented? Insights from Transposition Latencies

    Science.gov (United States)

    Hurlstone, Mark J.; Hitch, Graham J.

    2018-01-01

    A central goal of research on short-term memory (STM) over the past 2 decades has been to identify the mechanisms that underpin the representation of serial order, and to establish whether these mechanisms are the same across different modalities and domains (e.g., verbal, visual, spatial). A fruitful approach to addressing this question has…

  19. Interactive visual steering--rapid visual prototyping of a common rail injection system.

    Science.gov (United States)

    Matković, Kresimir; Gracanin, Denis; Jelović, Mario; Hauser, Helwig

    2008-01-01

    Interactive steering with visualization has been a common goal of the visualization research community for twenty years, but it is rarely realized in practice. In this paper we describe a successful realization of a tightly coupled steering loop, integrating new simulation technology and interactive visual analysis in a prototyping environment for automotive-industry system design. Due to increasing pressure on car manufacturers to meet new emission regulations, to improve efficiency, and to reduce noise, both simulation and visualization are pushed to their limits. Automotive system components, such as the powertrain system or the injection system, have an increasing number of parameters, and new design approaches are required. It is no longer possible to optimize such a system solely based on experience or forward optimization. By coupling interactive visualization with the simulation back-end (computational steering), it is now possible to quickly prototype a new system, starting from a non-optimized initial prototype and the corresponding simulation model. The prototyping continues through the refinement of the simulation model and the simulation parameters, and through trial-and-error attempts, to an optimized solution. The ability to see early results from a multidimensional simulation space--thousands of simulations are run for a multidimensional variety of input parameters--and to quickly go back into the simulation and request more runs in particular parameter regions of interest significantly improves the prototyping process and provides a deeper understanding of the system behavior. The excellent results we achieved for the common rail injection system strongly suggest that our approach has great potential to be generalized to other, similar scenarios.

  20. Examining Practice in Secondary Visual Arts Education

    Science.gov (United States)

    Mitchell, Donna Mathewson

    2015-01-01

    Teaching in secondary visual arts classrooms is complex and challenging work. While it is implicated in much research, the complexity of the lived experience of secondary visual arts teaching has rarely been the subject of sustained and synthesized research. In this paper, the potential of practice as a concept to examine and represent secondary…

  1. Visualizing Contour Trees within Histograms

    DEFF Research Database (Denmark)

    Kraus, Martin

    2010-01-01

    Many of the topological features of the isosurfaces of a scalar volume field can be compactly represented by its contour tree. Unfortunately, the contour trees of most real-world volume data sets are too complex to be visualized by dot-and-line diagrams. Therefore, we propose a new visualization that is suitable for large contour trees and efficiently conveys the topological structure of the most important isosurface components. This visualization is integrated into a histogram of the volume data; thus, it offers strictly more information than a traditional histogram. We present algorithms to automatically compute the graph layout and to calculate appropriate approximations of the contour tree and the surface area of the relevant isosurface components. The benefits of this new visualization are demonstrated with the help of several publicly available volume data sets.

  2. Modern Algorithms for Real-Time Terrain Visualization on Commodity Hardware

    Directory of Open Access Journals (Sweden)

    Radek Bartoň

    2011-05-01

    Full Text Available The amount of input data acquired from remote sensing equipment is growing rapidly. Interactive visualization of those datasets is a necessity for their correct interpretation. With the ability of modern hardware to display hundreds of millions of triangles per second, it is possible to visualize massive terrains at one-pixel display error on HD displays at interactive frame rates when batched rendering is applied. Algorithms able to do this are an area of intensive research and the topic of this article. The paper first explains some of the theory behind terrain visualization, categorizes its algorithms according to several criteria and describes six of the most significant methods in more detail.

  3. Total dose induced increase in input offset voltage in JFET input operational amplifiers

    International Nuclear Information System (INIS)

    Pease, R.L.; Krieg, J.; Gehlhausen, M.; Black, J.

    1999-01-01

    Four different types of commercial JFET input operational amplifiers were irradiated with ionizing radiation under a variety of test conditions. All experienced significant increases in input offset voltage (Vos). Microprobe measurement of the electrical characteristics of the de-coupled input JFETs demonstrates that the increase in Vos is a result of the mismatch of the degraded JFETs. (authors)

  4. 2D co-ordinate transformation based on a spike timing-dependent plasticity learning mechanism.

    Science.gov (United States)

    Wu, QingXiang; McGinnity, Thomas Martin; Maguire, Liam; Belatreche, Ammar; Glackin, Brendan

    2008-11-01

    In order to plan accurate motor actions, the brain needs to build an integrated spatial representation associated with visual and haptic stimuli. Since visual stimuli are represented in retina-centered co-ordinates and haptic stimuli are represented in body-centered co-ordinates, co-ordinate transformations must occur between the two. A spiking neural network (SNN) model, trained with spike-timing-dependent plasticity (STDP), is proposed to perform a 2D co-ordinate transformation of the polar representation of an arm position to a Cartesian representation, creating a virtual image map of a haptic input. Through the visual pathway, a position signal corresponding to the haptic input is used to train the SNN with STDP synapses such that, after learning, the SNN can perform the co-ordinate transformation to generate a representation of the haptic input with the same co-ordinates as a visual image. The model can be applied to explain co-ordinate transformation in spiking-neuron-based systems, and the principle can be used in artificial intelligence systems to process complex co-ordinate transformations represented by biological stimuli.
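
    The target mapping the STDP-trained network learns, from polar arm coordinates to Cartesian image coordinates, has a simple closed form. A minimal NumPy sketch of that mapping (not of the spiking network itself):

```python
import numpy as np

def polar_to_cartesian(r, theta):
    """Map an arm position given in polar form (radius r, angle theta in
    radians) to Cartesian (x, y) coordinates -- the target mapping that
    the STDP-trained SNN approximates."""
    return r * np.cos(theta), r * np.sin(theta)

# Example: arm at radius 1.0, angle 60 degrees (pi/3 radians)
x, y = polar_to_cartesian(1.0, np.pi / 3)  # x = 0.5, y = sqrt(3)/2
```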

  5. COGEDIF - automatic TORT and DORT input generation from MORSE combinatorial geometry models

    International Nuclear Information System (INIS)

    Castelli, R.A.; Barnett, D.A.

    1992-01-01

    COGEDIF is an interactive utility that was developed to automate the preparation of two- and three-dimensional geometrical inputs for the ORNL-TORT and DORT discrete ordinates programs from complex three-dimensional models described using the MORSE combinatorial geometry input description. The program creates either continuous or disjoint mesh input based upon the intersections of user-defined meshing planes and the MORSE body definitions. The composition overlay of the combinatorial geometry is used to create the composition mapping of the discretized geometry, based upon the composition found at the centroid of each of the mesh cells. This program simplifies the process of using discrete orthogonal mesh cells to represent non-orthogonal geometries in large models which require mesh sizes of the order of a million cells or more. The program was specifically written to take advantage of the new TORT disjoint mesh option which was developed at ORNL.
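
    The centroid-based composition mapping described above can be sketched in a few lines; the geometry lookup below is a hypothetical stand-in for MORSE's combinatorial-body test, not COGEDIF's actual code:

```python
def cell_centroids(x_planes, y_planes):
    """Centroids of the 2D mesh cells defined by user meshing planes."""
    return [((x_planes[i] + x_planes[i + 1]) / 2,
             (y_planes[j] + y_planes[j + 1]) / 2)
            for i in range(len(x_planes) - 1)
            for j in range(len(y_planes) - 1)]

def composition_at(point):
    """Hypothetical stand-in for the combinatorial-geometry lookup:
    everything inside the unit square is material 1, else material 2."""
    x, y = point
    return 1 if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 2

# Two cells: one inside the unit square, one outside.
mesh = cell_centroids([0.0, 1.0, 2.0], [0.0, 1.0])
materials = [composition_at(c) for c in mesh]  # [1, 2]
```

    The same rule extends directly to three dimensions by adding a third set of meshing planes.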

  6. Rietica - a visual Rietveld program

    International Nuclear Information System (INIS)

    Hunter, B.A.

    2000-01-01

    Full text: Rietica is a Rietveld analysis program that allows interaction with the refinement process on a cycle-by-cycle basis using a graphic interface. Rietica was developed to aid in the creation and updating of Rietveld input files, as well as to control the LHPM program. It is a Windows 95 based program allowing point-and-click control of all the functionality of LHPM. Some of the features of the Rietica program are: (1) multiple x-ray and/or neutron histograms (datasets) allowing different scales, zeros, peak profile types and values, backgrounds, wavelengths and preferred orientations for each histogram - all of which are refinable; (2) the ability to calculate and refine neutron time-of-flight (TOF) data; (3) the ability to refine Δf' and Δf'', allowing extra flexibility with synchrotron diffraction data; (4) interpolation of x-ray form factors from a series of (sin(θ)/λ, f) values (eg Bessel functions); (5) a new absorption correction formula allowing μR > 1.0 for cylindrical geometry and a flat-plate absorption correction based on surface roughness; (6) new background functions; (7) quantitative phase analysis routines; (8) point-and-click data entry and program control; (9) data display and easy background and excluded-region input via mouse control; (10) control of the refinement process, including automatic updating of the calculated pattern display after each step; and (11) refinement using three strategies: a) a point-and-click mode of data entry and refinement control (beginners/intermediate/advanced users); b) manual editing of the input file, but still with all the benefits of the visual interface and online graphics (intermediate/advanced users); and c) a Basic scripting language that can be used to control the refinement process and graphics output - it could be used, say, for automatic refinement of large numbers of datasets (advanced users). The program currently reads in LHPM and GSAS/Fullprof/DWBS experiment files and exports in a variety of formats.

  7. The effect of transcranial direct current stimulation on contrast sensitivity and visual evoked potential amplitude in adults with amblyopia

    OpenAIRE

    Ding, Zhaofeng; Li, Jinrong; Spiegel, Daniel P.; Chen, Zidong; Chan, Lily; Luo, Guangwei; Yuan, Junpeng; Deng, Daming; Yu, Minbin; Thompson, Benjamin

    2016-01-01

    Amblyopia is a neurodevelopmental disorder of vision that occurs when the visual cortex receives decorrelated inputs from the two eyes during an early critical period of development. Amblyopic eyes are subject to suppression from the fellow eye, generate weaker visual evoked potentials (VEPs) than fellow eyes and have multiple visual deficits including impairments in visual acuity and contrast sensitivity. Primate models and human psychophysics indicate that stronger suppression is associated...

  8. Neural computation of visual imaging based on Kronecker product in the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Guozheng Yao

    2010-03-01

    Full Text Available Abstract Background: What kind of neural computation is actually performed by the primary visual cortex, and how is it represented mathematically at the system level? This is an important problem in visual information processing, but it has not been well answered. In this paper, according to our understanding of retinal organization and the parallel multi-channel topographical mapping between the retina and primary visual cortex V1, we divide an image into an orthogonal and orderly array of image primitives (or patches), in which each patch will evoke activities of simple cells in V1. From the viewpoint of information processing, this activation process essentially involves optimal detection and optimal matching of the receptive fields of simple cells with features contained in image patches. For the reconstruction of the visual image in visual cortex V1 based on the principle of minimum mean squared error, it is natural to use the inner-product expression in neural computation, which is then transformed into matrix form. Results: The inner product is carried out using the Kronecker product between patches and the functional architecture (or functional columns) in localized and oriented neural computing. Compared with the Fourier transform, the mathematical description of the Kronecker product is simple and intuitive, so the algorithm is more suitable for neural computation in visual cortex V1. Results of computer simulation based on two-dimensional Gabor pyramid wavelets show that the theoretical analysis and the proposed model are reasonable. Conclusions: Our results are: 1. The neural computation of the retinal image in cortex V1 can be expressed as a Kronecker product operation and its matrix form; this algorithm is implemented by the inner product between retinal image primitives and the primary visual cortex's columns. It has simple, efficient and robust features, and is therefore a neural algorithm that can be completed by biological vision. 2. It is more suitable
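
    The central operation, an inner product between image patches and functional columns expressed via the Kronecker product, can be illustrated with NumPy; the 2x2 patch and column below are toy values, not the paper's Gabor pyramid:

```python
import numpy as np

# Toy 2x2 image patch and a 2x2 "functional column" response profile.
patch = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
column = np.array([[0.5, 0.5],
                   [0.5, 0.5]])

# The Kronecker product expands the patch-column interaction into
# matrix form, as the paper does for patch-wise neural computation.
K = np.kron(patch, column)          # shape (4, 4)

# The template-matching response is then a plain inner product.
response = np.sum(patch * column)   # = 1.0 for these toy values
```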

  9. Neurons in the thalamic reticular nucleus are selective for diverse and complex visual features

    Directory of Open Access Journals (Sweden)

    Vishal eVaingankar

    2012-12-01

    Full Text Available All visual signals the cortex receives are influenced by the perigeniculate sector of the thalamic reticular nucleus, which receives input from relay cells in the lateral geniculate and provides feedback inhibition in return. Relay cells have been studied in quantitative depth; they behave in a roughly linear fashion and have receptive fields with a stereotyped centre-surround structure. We know far less about reticular neurons. Qualitative studies indicate they simply pool ascending input to generate nonselective gain control. Yet the perigeniculate is complicated; local cells are densely interconnected and fire lengthy bursts. Thus, we employed quantitative methods to explore the perigeniculate, using relay cells as controls. By adapting methods of spike-triggered averaging and covariance analysis for bursts, we identified both first and second order features that build reticular receptive fields. The shapes of these spatiotemporal subunits varied widely; no stereotyped pattern emerged. Companion experiments showed that the shape of the first but not second order features could be explained by the overlap of On and Off inputs to a given cell. Moreover, we assessed the predictive power of the receptive field and how much information each component subunit conveyed. Linear-nonlinear models including multiple subunits performed better than those made with just one; further, each subunit encoded different visual information. Model performance for reticular cells was always lesser than for relay cells, however, indicating that reticular cells process inputs nonlinearly. All told, our results suggest that the perigeniculate encodes diverse visual features to selectively modulate activity transmitted downstream.

  10. Neurons in the thalamic reticular nucleus are selective for diverse and complex visual features

    Science.gov (United States)

    Vaingankar, Vishal; Soto-Sanchez, Cristina; Wang, Xin; Sommer, Friedrich T.; Hirsch, Judith A.

    2012-01-01

    All visual signals the cortex receives are influenced by the perigeniculate sector (PGN) of the thalamic reticular nucleus, which receives input from relay cells in the lateral geniculate and provides feedback inhibition in return. Relay cells have been studied in quantitative depth; they behave in a roughly linear fashion and have receptive fields with a stereotyped center-surround structure. We know far less about reticular neurons. Qualitative studies indicate they simply pool ascending input to generate non-selective gain control. Yet the perigeniculate is complicated; local cells are densely interconnected and fire lengthy bursts. Thus, we employed quantitative methods to explore the perigeniculate using relay cells as controls. By adapting methods of spike-triggered averaging and covariance analysis for bursts, we identified both first and second order features that build reticular receptive fields. The shapes of these spatiotemporal subunits varied widely; no stereotyped pattern emerged. Companion experiments showed that the shape of the first but not second order features could be explained by the overlap of On and Off inputs to a given cell. Moreover, we assessed the predictive power of the receptive field and how much information each component subunit conveyed. Linear-non-linear (LN) models including multiple subunits performed better than those made with just one; further each subunit encoded different visual information. Model performance for reticular cells was always lesser than for relay cells, however, indicating that reticular cells process inputs non-linearly. All told, our results suggest that the perigeniculate encodes diverse visual features to selectively modulate activity transmitted downstream. PMID:23269915
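
    The first-order analysis mentioned here, spike-triggered averaging, reduces to averaging the stimuli that preceded each spike. A sketch on synthetic white-noise data (the filter, threshold, and dimensions are illustrative assumptions, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(0)

n_frames, dim = 5000, 16                   # white-noise stimulus frames
stimulus = rng.standard_normal((n_frames, dim))

# Ground-truth filter: the model cell "prefers" pixel 3.
true_filter = np.zeros(dim)
true_filter[3] = 1.0

# Spikes occur when the filtered stimulus exceeds a threshold.
drive = stimulus @ true_filter
spikes = (drive > 1.0).astype(int)

# Spike-triggered average: mean of the stimuli that elicited a spike.
sta = stimulus[spikes == 1].mean(axis=0)

print(int(np.argmax(sta)))  # recovers the preferred pixel: 3
```

    Second-order (covariance) analysis extends this by diagonalizing the spike-triggered stimulus covariance to find additional subunits.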

  11. Interactive data visualization foundations, techniques, and applications

    CERN Document Server

    Ward, Matthew; Keim, Daniel

    2010-01-01

    Visualization is the process of representing data, information, and knowledge in a visual form to support the tasks of exploration, confirmation, presentation, and understanding. This book is designed as a textbook for students, researchers, analysts, professionals, and designers of visualization techniques, tools, and systems. It covers the full spectrum of the field, including mathematical and analytical aspects, ranging from its foundations to human visual perception; from coded algorithms for different types of data, information and tasks to the design and evaluation of new visualization techniques. Sample programs are provided as starting points for building one's own visualization tools. Numerous data sets have been made available that highlight different application areas and allow readers to evaluate the strengths and weaknesses of different visualization methods. Exercises, programming projects, and related readings are given for each chapter. The book concludes with an examination of several existin...

  12. High-intensity erotic visual stimuli de-activate the primary visual cortex in women.

    Science.gov (United States)

    Huynh, Hieu K; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-06-01

    The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the pictures or movies. However, when the volunteers perform demanding non-visual tasks, the primary visual cortex becomes de-activated, although the amount of incoming visual sensory information is the same. Do low- and high-intensity erotic movies, compared to neutral movies, produce similar de-activation of the primary visual cortex? Brain activation/de-activation was studied by Positron Emission Tomography scanning of the brains of 12 healthy heterosexual premenopausal women, aged 18-47, who watched neutral, low- and high-intensity erotic film segments. We measured differences in regional cerebral blood flow (rCBF) in the primary visual cortex during watching of neutral, low-intensity erotic, and high-intensity erotic film segments. Watching high-intensity, but not low-intensity, erotic movies, compared to neutral movies, resulted in strong de-activation of the primary (BA 17) and adjoining parts of the secondary visual cortex. The strong de-activation during watching of high-intensity erotic film might represent compensation for the increased blood supply in the brain regions involved in sexual arousal, and might also reflect that high-intensity erotic movies do not require precise scanning of the visual field, because their impact is clear to the observer. © 2012 International Society for Sexual Medicine.

  13. Creation of an Accurate Algorithm to Detect Snellen Best Documented Visual Acuity from Ophthalmology Electronic Health Record Notes.

    Science.gov (United States)

    Mbagwu, Michael; French, Dustin D; Gill, Manjot; Mitchell, Christopher; Jackson, Kathryn; Kho, Abel; Bryar, Paul J

    2016-05-04

    Visual acuity is the primary measure used in ophthalmology to determine how well a patient can see. Visual acuity for a single eye may be recorded in multiple ways for a single patient visit (e.g., Snellen vs. Jäger units vs. font print size), and be recorded for either distance or near vision. Capturing the best documented visual acuity (BDVA) of each eye in an individual patient visit is an important step for making electronic ophthalmology clinical notes useful in research. Currently, there is limited methodology for capturing BDVA in an efficient and accurate manner from electronic health record (EHR) notes. We developed an algorithm to detect BDVA for right and left eyes from defined fields within 295,218 electronic ophthalmology clinical notes with visual acuity data present. A total of 5668 unique responses were identified, and the algorithm was developed to map all of the unique responses to a structured list of Snellen visual acuities. The algorithm identified all visual acuities in the defined visual acuity section for each eye and returned a single BDVA for each eye. A clinician chart review of 100 random patient notes showed 99% accuracy detecting BDVA from these records and 1% observed error. Our algorithm successfully captures the best documented Snellen distance visual acuity from ophthalmology clinical notes and transforms a variety of inputs into a structured Snellen equivalent list. Our work, to the best of our knowledge, represents the first attempt at capturing visual acuity accurately from large numbers of electronic ophthalmology notes. Use of this algorithm can benefit research groups interested in assessing visual acuity for patient-centered outcomes.
All codes used for this study are currently available, and will be made available online at https://phekb.org.
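
    The normalization step the authors describe, mapping free-text acuity entries to a structured Snellen list and keeping the best value per eye, can be sketched as a lookup plus a best-of selection; the mapping table and entries below are illustrative, not the study's actual 5668 responses:

```python
# Illustrative fragment of a free-text -> Snellen normalization table.
# Values are Snellen denominators (20/<value>); smaller is better.
SNELLEN_MAP = {
    "20/20": 20, "20/25": 25, "20/40": 40,
    "20/20-1": 20, "cf": 400,  # "cf" (counting fingers) mapped coarsely
}

def best_documented_va(entries):
    """Return the best (smallest-denominator) Snellen value among the
    recognized entries for one eye; None if no entry maps."""
    mapped = [SNELLEN_MAP[e.strip().lower()] for e in entries
              if e.strip().lower() in SNELLEN_MAP]
    return min(mapped, default=None)

print(best_documented_va(["20/40", "20/25", "unreadable"]))  # 25
```

    The real pipeline would also resolve near vs. distance notation and Jäger or print-size units before the comparison.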

  14. Spatial Coding of Individuals with Visual Impairments

    Science.gov (United States)

    Papadopoulos, Konstantinos; Koustriava, Eleni; Kartasidou, Lefkothea

    2012-01-01

    The aim of this study is to examine the ability of children and adolescents with visual impairments to code and represent near space. Moreover, it examines the impact of the strategies they use and individual differences in their performance. A total of 30 individuals with visual impairments up to the age of 18 were given eight different object…

  15. Retinal input to efferent target amacrine cells in the avian retina

    Science.gov (United States)

    Lindstrom, Sarah H.; Azizi, Nason; Weller, Cynthia; Wilson, Martin

    2012-01-01

    The bird visual system includes a substantial projection, of unknown function, from a midbrain nucleus to the contralateral retina. Every centrifugal, or efferent, neuron originating in the midbrain nucleus makes synaptic contact with the soma of a single, unique amacrine cell, the target cell (TC). By labeling efferent neurons in the midbrain we have been able to identify their terminals in retinal slices and make patch clamp recordings from TCs. TCs generate Na+ based action potentials triggered by spontaneous EPSPs originating from multiple classes of presynaptic neurons. Exogenously applied glutamate elicited inward currents having the mixed pharmacology of NMDA, kainate and inward rectifying AMPA receptors. Exogenously applied GABA elicited currents entirely suppressed by GABAzine, and therefore mediated by GABAA receptors. Immunohistochemistry showed the vesicular glutamate transporter, vGluT2, to be present in the characteristic synaptic boutons of efferent terminals, whereas the GABA synthetic enzyme, GAD, was present in much smaller processes of intrinsic retinal neurons. Extracellular recording showed that exogenously applied GABA was directly excitatory to TCs and, consistent with this, NKCC, the Cl− transporter often associated with excitatory GABAergic synapses, was identified in TCs by antibody staining. The presence of excitatory retinal input to TCs implies that TCs are not merely slaves to their midbrain input; instead, their output reflects local retinal activity and descending input from the midbrain. PMID:20650017

  16. Shape perception simultaneously up- and downregulates neural activity in the primary visual cortex.

    Science.gov (United States)

    Kok, Peter; de Lange, Floris P

    2014-07-07

    An essential part of visual perception is the grouping of local elements (such as edges and lines) into coherent shapes. Previous studies have shown that this grouping process modulates neural activity in the primary visual cortex (V1) that is signaling the local elements [1-4]. However, the nature of this modulation is controversial. Some studies find that shape perception reduces neural activity in V1 [2, 5, 6], while others report increased V1 activity during shape perception [1, 3, 4, 7-10]. Neurocomputational theories that cast perception as a generative process [11-13] propose that feedback connections carry predictions (i.e., the generative model), while feedforward connections signal the mismatch between top-down predictions and bottom-up inputs. Within this framework, the effect of feedback on early visual cortex may be either enhancing or suppressive, depending on whether the feedback signal is met by congruent bottom-up input. Here, we tested this hypothesis by quantifying the spatial profile of neural activity in V1 during the perception of illusory shapes using population receptive field mapping. We find that shape perception concurrently increases neural activity in regions of V1 that have a receptive field on the shape but do not receive bottom-up input and suppresses activity in regions of V1 that receive bottom-up input that is predicted by the shape. These effects were not modulated by task requirements. Together, these findings suggest that shape perception changes lower-order sensory representations in a highly specific and automatic manner, in line with theories that cast perception in terms of hierarchical generative models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Reconfigurable Auditory-Visual Display

    Science.gov (United States)

    Begault, Durand R. (Inventor); Anderson, Mark R. (Inventor); McClain, Bryan (Inventor); Miller, Joel D. (Inventor)

    2008-01-01

    System and method for visual and audible communication between a central operator and N mobile communicators (N greater than or equal to 2), including an operator transceiver and interface, configured to receive and display, for the operator, visually perceptible and audibly perceptible signals from each of the mobile communicators. The interface (1) presents an audible signal from each communicator as if the audible signal is received from a different location relative to the operator and (2) allows the operator to select, to assign priority to, and to display, the visual signals and the audible signals received from a specified communicator. Each communicator has an associated signal transmitter that is configured to transmit at least one of the visual signals and the audio signal associated with the communicator, where at least one of the signal transmitters includes at least one sensor that senses and transmits a sensor value representing a selected environmental or physiological parameter associated with the communicator.

  18. Using visual language to represent interdisciplinary content in urban development: Selected findings

    Directory of Open Access Journals (Sweden)

    Špela Verovšek

    2013-01-01

    Full Text Available This article addresses visual language in architecture and spatial disciplines, using it as a means of communicating and conveying information, knowledge and ideas about space that are permeated by their interdisciplinary character. We focus in particular on the transmission of messages between professionals and the general public, arguing that this process aids the long-term formation of a responsible and critical public, which is then able to take an active part in sustainable planning and design practices. The article highlights some findings of an empirical study of 245 people that tested the effectiveness of selected presentation techniques in communicating spatial messages to the general public and placing them in the framework of existing knowledge.

  19. Frida Kahlo: Visual Articulations of Suffering and Loss.

    Science.gov (United States)

    Nixon, Lois LaCivita

    1996-01-01

    Illustrates the value of interdisciplinary approaches to patient care by exploring visual articulations of suffering as rendered by one artist. Makes general observations about the nature of humanities courses offered to medical students and depicts a visual portrayal of an illness story representing personal perspectives about patient suffering…

  20. PCRELAP5: a visual graphic preprocessor for RELAP5

    International Nuclear Information System (INIS)

    Monaco, Daniel F.; Sabundjian, Gaianê

    2017-01-01

    The aim of this work is to develop PCRELAP5, a visual preprocessor for RELAP5, reducing the time, effort and maintenance costs spent on new RELAP5 projects. This preprocessor allows the user to draw a new nuclear power plant nodalization in a completely interactive way and to input parameters for each node in a more user-friendly manner. Once parameters are changed on screen, the input cards of the RELAP5 code are updated in real time. RELAP5 users will thus have a tool that reduces the time and effort needed for new studies and existing projects. This project therefore proposes to significantly leverage studies related to nuclear accident analysis by making the RELAP5 code more user-friendly. In order to demonstrate the preprocessor's capability, the CANON experiment is used as an example. The PCRELAP5 preprocessor is being developed using Microsoft® Visual Studio® as a Microsoft® Excel® add-in, due to the low cost of distribution and maintenance, and also to allow new RELAP5 projects to benefit from MS Excel®'s flexibility. (author)

  1. PCRELAP5: a visual graphic preprocessor for RELAP5

    Energy Technology Data Exchange (ETDEWEB)

    Monaco, Daniel F.; Sabundjian, Gaianê, E-mail: monacod@usp.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The aim of this work is to develop PCRELAP5, a visual preprocessor for RELAP5, reducing the time, effort and maintenance costs spent on new RELAP5 projects. This preprocessor allows the user to draw a new nuclear power plant nodalization in a completely interactive way and to input parameters for each node in a more user-friendly manner. Once parameters are changed on screen, the input cards of the RELAP5 code are updated in real time. RELAP5 users will thus have a tool that reduces the time and effort needed for new studies and existing projects. This project therefore proposes to significantly leverage studies related to nuclear accident analysis by making the RELAP5 code more user-friendly. In order to demonstrate the preprocessor's capability, the CANON experiment is used as an example. The PCRELAP5 preprocessor is being developed using Microsoft® Visual Studio® as a Microsoft® Excel® add-in, due to the low cost of distribution and maintenance, and also to allow new RELAP5 projects to benefit from MS Excel®'s flexibility. (author)

  2. A Computer-Based Visual Analog Scale,

    Science.gov (United States)

    1992-06-01

    34 keys on the computer keyboard or other input device. The initial position of the arrow is always in the center of the scale to prevent biasing the...3 REFERENCES 1. Gift, A.G., "Visual Analogue Scales: Measurement of Subjective Phenomena." Nursing Research, Vol. 38, pp. 286-288, 1989. 2. Lundberg...3. Menkes, D.B., Howard, R.C., Spears, G.F., and Cairns, E.R., "Salivary THC Following Cannabis Smoking Correlates With Subjective Intoxication and

  3. Postoperative increase in grey matter volume in visual cortex after unilateral cataract surgery

    DEFF Research Database (Denmark)

    Lou, Astrid R.; Madsen, Kristoffer Hougaard; Julian, Hanne O.

    2013-01-01

    Purpose: The developing visual cortex has a strong potential to undergo plastic changes. Little is known about the potential of the ageing visual cortex to express plasticity. A pertinent question is whether therapeutic interventions can trigger plastic changes in the ageing visual cortex by res… of visual input from both eyes. … surgery induces a regional increase in grey matter in areas V1 and V2 of the visual cortex. Results: In all patients, cataract surgery immediately improved visual acuity, contrast sensitivity and mean sensitivity in the visual field of the operated eye. The improvement in vision was stable throughout … We conclude that activity-dependent cortical plasticity is preserved in the ageing visual cortex and may be triggered by restoring impaired vision.

  4. A switch from inter-ocular to inter-hemispheric suppression following monocular deprivation in the rat visual cortex

    NARCIS (Netherlands)

    Pietrasanta, M.; Restani, L.; Cerri, C.; Olcese, U.; Medini, P.; Caleo, M.

    2014-01-01

    Binocularity is a key property of primary visual cortex (V1) neurons that is widely used to study synaptic integration in the brain and plastic mechanisms following an altered visual experience. However, it is not clear how the inputs from the two eyes converge onto binocular neurons, and how their

  5. Visual processing in anorexia nervosa and body dysmorphic disorder: similarities, differences, and future research directions

    Science.gov (United States)

    Madsen, Sarah K.; Bohon, Cara; Feusner, Jamie D.

    2013-01-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are psychiatric disorders that involve distortion of the experience of one’s physical appearance. In AN, individuals believe that they are overweight, perceive their body as “fat,” and are preoccupied with maintaining a low body weight. In BDD, individuals are preoccupied with misperceived defects in physical appearance, most often of the face. Distorted visual perception may contribute to these cardinal symptoms, and may be a common underlying phenotype. This review surveys the current literature on visual processing in AN and BDD, addressing lower- to higher-order stages of visual information processing and perception. We focus on peer-reviewed studies of AN and BDD that address ophthalmologic abnormalities, basic neural processing of visual input, integration of visual input with other systems, neuropsychological tests of visual processing, and representations of whole percepts (such as images of faces, bodies, and other objects). The literature suggests a pattern in both groups of over-attention to detail, reduced processing of global features, and a tendency to focus on symptom-specific details in their own images (body parts in AN, facial features in BDD), with cognitive strategy at least partially mediating the abnormalities. Visuospatial abnormalities were also evident when viewing images of others and for non-appearance related stimuli. Unfortunately no study has directly compared AN and BDD, and most studies were not designed to disentangle disease-related emotional responses from lower-order visual processing. We make recommendations for future studies to improve the understanding of visual processing abnormalities in AN and BDD. PMID:23810196

  6. Social Network: a Cytoscape app for visualizing co-authorship networks.

    Science.gov (United States)

    Kofia, Victor; Isserlin, Ruth; Buchan, Alison M J; Bader, Gary D

    2015-01-01

    Networks that represent connections between individuals can be valuable analytic tools. The Social Network Cytoscape app is capable of creating a visual summary of connected individuals automatically. It does this by representing relationships as networks where each node denotes an individual and an edge linking two individuals represents a connection. The app focuses on creating visual summaries of individuals connected by co-authorship links in academia, created from bibliographic databases like PubMed, Scopus and InCites. The resulting co-authorship networks can be visualized and analyzed to better understand collaborative research networks or to communicate the extent of collaboration and publication productivity among a group of researchers, like in a grant application or departmental review report. It can also be useful as a research tool to identify important research topics, researchers and papers in a subject area.
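
    The node-and-edge representation described above is easy to sketch. The following is an illustrative Python fragment only (not the app's actual code, which is a Cytoscape plugin): each author becomes a node, and an edge weight counts the papers a pair of authors share. Author names and papers are hypothetical.

```python
from collections import Counter
from itertools import combinations

def coauthor_network(papers):
    """Build a co-authorship network from per-paper author lists: each
    author is a node, and an edge weight counts the papers two authors
    share."""
    nodes = set()
    edges = Counter()
    for authors in papers:
        nodes.update(authors)
        # every unordered pair of co-authors on a paper gains one shared work
        for a, b in combinations(sorted(set(authors)), 2):
            edges[(a, b)] += 1
    return nodes, dict(edges)

# toy bibliography; papers listed here are illustrative only
papers = [
    ["Bader", "Isserlin", "Kofia"],
    ["Bader", "Isserlin"],
    ["Bader", "Buchan"],
]
nodes, edges = coauthor_network(papers)
print(len(nodes), edges[("Bader", "Isserlin")])  # → 4 2
```

    Such weighted edge lists are exactly what network tools like Cytoscape consume for layout and analysis.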

  7. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  8. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    Science.gov (United States)

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  9. Audiovisual associations alter the perception of low-level visual motion

    Directory of Open Access Journals (Sweden)

    Hulusi eKafaligonul

    2015-03-01

    Motion perception is a pervasive aspect of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Several studies have recently reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations, and their subsequent influences on visual motion perception, depend on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions that isolate low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed, and that this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level attention-based motion system and that early-level visual motion processing plays a potential role.

  10. ELIMINATING CONSERVATISM IN THE PIPING SYSTEM ANALYSIS PROCESS THROUGH APPLICATION OF A SUITE OF LOCALLY APPROPRIATE SEISMIC INPUT MOTIONS

    International Nuclear Information System (INIS)

    Crawford, Anthony L.; Spears, Robert E.; Russell, Mark J.

    2009-01-01

    Seismic analysis is of great importance in the evaluation of nuclear systems due to the heavy influence such loading has on their designs. Current Department of Energy seismic analysis techniques for a nuclear safety-related piping system typically apply a single conservative seismic input to the entire system (1). A significant portion of this conservatism comes from the need to address the overlapping uncertainties in the seismic input and in the building response that transmits that input motion to the piping system. The approach presented in this paper addresses these two sources of uncertainty through the application of a suite of 32 input motions whose collective performance addresses the total uncertainty while each individual motion represents a single variation of it. It represents an extension of the soil-structure interaction analysis methodology of SEI/ASCE 43-05 (2) from the structure to individual piping components. Because this approach is computationally intensive, automation and other measures have been developed to make such an analysis efficient. These measures are detailed in this paper.

  11. Visual system plasticity in mammals: the story of monocular enucleation-induced vision loss

    Science.gov (United States)

    Nys, Julie; Scheyltjens, Isabelle; Arckens, Lutgarde

    2015-01-01

    The groundbreaking work of Hubel and Wiesel in the 1960s on ocular dominance plasticity instigated many studies of the visual system of mammals, enriching our understanding of how the development of its structure and function depends on high-quality visual input through both eyes. These studies have mainly employed lid suturing, dark rearing and eye patching applied to different species to reduce or impair visual input, and have created extensive knowledge on binocular vision. However, not all aspects and types of plasticity in the visual cortex have been covered in full detail. In that regard, a more drastic deprivation method like enucleation, leading to complete vision loss, appears useful as it has more widespread effects on the afferent visual pathway and even on non-visual brain regions. One-eyed vision due to monocular enucleation (ME) profoundly affects the contralateral retinorecipient subcortical and cortical structures, thereby creating a powerful means to investigate cortical plasticity phenomena in which binocular competition has no vote. In this review, we will present current knowledge about the specific application of ME as an experimental tool to study visual and cross-modal brain plasticity, comparing early postnatal stages up into adulthood. The structural and physiological consequences of this type of extensive sensory loss, as documented and studied in several animal species and human patients, will be discussed. We will summarize how ME studies have been instrumental to our current understanding of the differentiation of sensory systems and how the structure and function of cortical circuits in mammals are shaped in response to such an extensive alteration in experience. In conclusion, we will highlight future perspectives and the clinical relevance of adding ME to the list of more longstanding deprivation models in visual system research. PMID:25972788

  12. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener’s perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  13. TacTool: a tactile rapid prototyping tool for visual interfaces

    NARCIS (Netherlands)

    Keyson, D.V.; Tang, H.K.; Anzai, Y.; Ogawa, K.; Mori, H.

    1995-01-01

    This paper describes the TacTool development tool and input device for designing and evaluating visual user interfaces with tactile feedback. TacTool is currently supported by the IPO trackball with force feedback in the x and y directions. The tool is designed to enable both the designer and the

  14. Modeling visual problem solving as analogical reasoning.

    Science.gov (United States)

    Lovett, Andrew; Forbus, Kenneth

    2017-01-01

    We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. A Visual Language for Protein Design

    KAUST Repository

    Cox, Robert Sidney

    2017-02-08

    As protein engineering becomes more sophisticated, practitioners increasingly need to share diagrams for communicating protein designs. To this end, we present a draft visual language, Protein Language, that describes the high-level architecture of an engineered protein with easy-to draw glyphs, intended to be compatible with other biological diagram languages such as SBOL Visual and SBGN. Protein Language consists of glyphs for representing important features (e.g., globular domains, recognition and localization sequences, sites of covalent modification, cleavage and catalysis), rules for composing these glyphs to represent complex architectures, and rules constraining the scaling and styling of diagrams. To support Protein Language we have implemented an extensible web-based software diagram tool, Protein Designer, that uses Protein Language in a

  16. A Visual Language for Protein Design

    KAUST Repository

    Cox, Robert Sidney; McLaughlin, James Alastair; Grunberg, Raik; Beal, Jacob; Wipat, Anil; Sauro, Herbert M.

    2017-01-01

    As protein engineering becomes more sophisticated, practitioners increasingly need to share diagrams for communicating protein designs. To this end, we present a draft visual language, Protein Language, that describes the high-level architecture of an engineered protein with easy-to draw glyphs, intended to be compatible with other biological diagram languages such as SBOL Visual and SBGN. Protein Language consists of glyphs for representing important features (e.g., globular domains, recognition and localization sequences, sites of covalent modification, cleavage and catalysis), rules for composing these glyphs to represent complex architectures, and rules constraining the scaling and styling of diagrams. To support Protein Language we have implemented an extensible web-based software diagram tool, Protein Designer, that uses Protein Language in a

  17. VISUAL ABILITY IN AMBLYOPIC CHILDREN COMPARED TO CHILDREN WITH NORMAL VISUAL ACUITY

    Directory of Open Access Journals (Sweden)

    Zorica Tončić

    2016-03-01

    Vision rehabilitation in adults and children with low vision is achieved with special devices called low vision aids (LVA). The aim of the study is to determine the degree of visual function in amblyopic children and the daily activities in which they compare best with normally sighted peers with normal visual acuity. The subjects were divided into two groups, matched 1:1 by age and gender: the first group consisted of 19 amblyopic children, and the second of 19 children with normal visual acuity. The questionnaire used to assess visual ability was the Cardiff Visual Ability Questionnaire for Children (CVAQC), a reliable instrument for measuring visual ability in children with low vision. The study was conducted in the only rehabilitation center for amblyopic children in this region, making this a pioneer study. The overall CVAQC score in amblyopic children was 1.287 log vs. -2.956 log, representing statistically significantly poorer visual ability in comparison to peers without vision deficit (p˂0.005). Amblyopic children function best in entertainment activities, especially listening to music (-2.31 log); as for sport, these children report swimming to be their favourite activity (-0.99 log). In the field of education they show the best results in language acquisition (-0.79 log) and the worst in mathematics (3.13 log). The greatest problem is reading small-print textbooks (2.61 log). Low vision children have poorer visual function in comparison to their peers with normal visual acuity. A precise assessment of the deficit in the most important spheres of life can be determined by using the questionnaire, so the rehabilitation can be rightly chosen.

  18. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal
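
    The generative model the abstract describes can be sketched in a few lines. This is only an illustration of the forward model with arbitrary rate parameters; the authors' actual contribution is the inverse problem of inferring the latent trajectory from recorded discharge times, which is not shown here.

```python
import math
import random

def simulate_pool(n_neurons=10, n_steps=500, dt=0.001, seed=1):
    """Simulate a motor pool as inhomogeneous Poisson processes driven by
    one shared latent input x(t) (a slow random walk) plus small private
    'synaptic noise' on each neuron's log firing rate."""
    rng = random.Random(seed)
    x = 0.0
    latent = []                          # common input trajectory
    spikes = [[] for _ in range(n_neurons)]
    for t in range(n_steps):
        x += 0.05 * rng.gauss(0.0, 1.0)  # latent common drive evolves slowly
        latent.append(x)
        for i in range(n_neurons):
            # log-link keeps the rate positive; the noise term is neuron-private
            rate = math.exp(2.0 + x + 0.1 * rng.gauss(0.0, 1.0))
            if rng.random() < rate * dt:  # Bernoulli thinning of the process
                spikes[i].append(t * dt)
    return latent, spikes

latent, spikes = simulate_pool()
```

    Because every neuron's rate follows the same x(t), their discharge times covary, which is the signature a latent state-space estimator exploits.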

  19. Input and execution

    International Nuclear Information System (INIS)

    Carr, S.; Lane, G.; Rowling, G.

    1986-11-01

    This document describes the input procedures, input data files and operating instructions for the SYVAC A/C 1.03 computer program. SYVAC A/C 1.03 simulates the groundwater mediated movement of radionuclides from underground facilities for the disposal of low and intermediate level wastes to the accessible environment, and provides an estimate of the subsequent radiological risk to man. (author)

  20. Interactive visual exploration and analysis of origin-destination data

    Science.gov (United States)

    Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.

    2018-05-01

    In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
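
    The interactive clustering step can be illustrated with a minimal single-linkage sketch. The paper's implementation clusters multivariate origin-destination features; the two-dimensional points and threshold value below are purely illustrative.

```python
def agglomerative_cluster(points, threshold):
    """Single-linkage agglomerative clustering with a user-set distance
    cutoff, mirroring the adjustable 'distance threshold' parameter."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # find the closest pair of clusters (single linkage)
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:      # stop merging once clusters are far apart
            break
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

# trip origins (x, y): two tight groups far apart
origins = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.2, 5.1)]
print(len(agglomerative_cluster(origins, threshold=1.0)))  # → 2
```

    Lowering the threshold splits the data into more, tighter clusters; raising it merges them, which is exactly the interaction the visual interface exposes.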

  1. Proprioception contributes to the sense of agency during visual observation of hand movements: evidence from temporal judgments of action

    DEFF Research Database (Denmark)

    Balslev, Daniela; Cole, Jonathan; Miall, R Chris

    2007-01-01

    The ability to recognize visually one's own movement is important for motor control and, through attribution of agency, for social interactions. Agency of actions may be decided by comparisons of visual feedback, efferent signals, and proprioceptive inputs. Because the ability to identify one's own...

  2. Retinal oscillations carry visual information to cortex

    Directory of Open Access Journals (Sweden)

    Kilian Koepsell

    2009-04-01

    Thalamic relay cells fire action potentials that transmit information from retina to cortex. The amount of information that spike trains encode is usually estimated from the precision of spike timing with respect to the stimulus. Sensory input, however, is only one factor that influences neural activity. For example, intrinsic dynamics, such as oscillations of networks of neurons, also modulate firing pattern. Here, we asked if retinal oscillations might help to convey information to neurons downstream. Specifically, we made whole-cell recordings from relay cells to reveal retinal inputs (EPSPs) and thalamic outputs (spikes), and then analyzed these events with information theory. Our results show that thalamic spike trains operate as two multiplexed channels. One channel, which occupies a low frequency band (<30 Hz), is encoded by average firing rate with respect to the stimulus and carries information about local changes in the visual field over time. The other operates in the gamma frequency band (40-80 Hz) and is encoded by spike timing relative to retinal oscillations. At times, the second channel conveyed even more information than the first. Because retinal oscillations involve extensive networks of ganglion cells, it is likely that the second channel transmits information about global features of the visual scene.

  3. The Effect of Visual Stimuli on Stability and Complexity of Postural Control

    Directory of Open Access Journals (Sweden)

    Haizhen Luo

    2018-02-01

    Visual input can benefit balance control or increase postural sway, and the effect of visual stimuli on postural stability and its underlying mechanism are far from fully understood. In this study, the effect of different visual inputs on the stability and complexity of postural control was examined by analyzing the mean velocity (MV), SD, and fuzzy approximate entropy (fApEn) of the center of pressure (COP) signal during quiet upright standing. We designed five visual exposure conditions: eyes-closed, eyes-open (EO), and three virtual reality (VR) scenes (VR1–VR3). The VR scenes were a limited field view of an optokinetic drum rotating around the yaw (VR1), pitch (VR2), and roll (VR3) axes, respectively. Sixteen healthy subjects were involved in the experiment, and their COP trajectories were assessed from the force plate data. MV, SD, and fApEn of the COP in the anterior–posterior (AP) and medial–lateral (ML) directions were calculated. Two-way analysis of variance with repeated measures was conducted to test statistical significance. We found that all three parameters obtained their lowest values in the EO condition and highest in the VR3 condition. We also found that the active neuromuscular intervention, indicated by fApEn, in response to the changing visual exposure conditions was more adaptive in the AP direction, and that stability, indicated by SD, in the ML direction reflected the changes of visual scenes. MV was found to capture both instability and active neuromuscular control dynamics. The three parameters thus provided complementary information about postural control in the immersive virtual environment.
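
    The complexity measure can be illustrated with the classic approximate entropy. The study above uses the fuzzy variant, fApEn, which replaces the hard tolerance test below with a smooth exponential membership function, but the regularity logic is the same; the signals and parameters here are purely illustrative.

```python
import math
import random

def apen(series, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D signal: low for regular signals,
    higher for irregular ones."""
    n = len(series)

    def phi(k):
        templates = [series[i:i + k] for i in range(n - k + 1)]
        total = 0.0
        for a in templates:
            # fraction of templates within tolerance r (Chebyshev distance);
            # the self-match keeps the count strictly positive
            c = sum(1 for b in templates
                    if max(abs(x - y) for x, y in zip(a, b)) <= r)
            total += math.log(c / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

rng = random.Random(0)
regular = [i % 2 for i in range(100)]        # perfectly periodic "sway"
noisy = [rng.random() for _ in range(100)]   # irregular "sway"
assert apen(regular) < apen(noisy)           # more complexity, higher entropy
```

    Applied to a COP trace, a higher value indicates a less regular, more actively corrected sway signal, which is how fApEn is interpreted above.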

  4. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Kwan-Liu [Univ. of California, Davis, CA (United States)

    2017-02-01

    Most of today’s visualization libraries and applications are based on what is known as the visualization pipeline. In the visualization pipeline model, algorithms are encapsulated as “filtering” components with inputs and outputs. These components can be combined by connecting the outputs of one filter to the inputs of another filter. The visualization pipeline model is popular because it provides a convenient abstraction that allows users to combine algorithms in powerful ways. Unfortunately, the visualization pipeline cannot run effectively on exascale computers. Experts agree that the exascale machine will comprise processors that contain many cores. Furthermore, physical limitations will prevent data movement in and out of the chip (that is, between main memory and the processing cores) from keeping pace with improvements in overall compute performance. To use these processors to their fullest capability, it is essential to carefully consider memory access. This is where the visualization pipeline fails. Each filtering component in the visualization library is expected to take a data set in its entirety, perform some computation across all of the elements, and output the complete results. The process of iterating over all elements must be repeated in each filter, which is one of the worst possible ways to traverse memory when trying to maximize the number of executions per memory access. This project investigates a new type of visualization framework that exhibits a pervasive parallelism necessary to run on exascale machines. Our framework achieves this by defining algorithms in terms of functors, which are localized, stateless operations. Functors can be composited in much the same way as filters in the visualization pipeline. But the functors’ design allows them to run concurrently on massive numbers of lightweight threads. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for

  5. Data visualizations and infographics

    CERN Document Server

    Mauldin, Sarah K C

    2015-01-01

    Graphics which visually represent data or complex ideas are oftentimes easier for people to understand and digest than standalone statistics. A map shaded with different colors to represent religious affiliations or income levels enables researchers to quickly identify trends and patterns. New free tools and applications offer librarians the opportunity to organize and manipulate data to quickly create these helpful graphics. Learn how to overlay data sets on maps, create infographics for library services and instruction, use mindmapping for group brainstorming sessions, produce de

  6. Attention enhances contrast appearance via increased input baseline of neural responses.

    Science.gov (United States)

    Cutrone, Elizabeth K; Heeger, David J; Carrasco, Marisa

    2014-12-30

    Covert spatial attention increases the perceived contrast of stimuli at attended locations, presumably via enhancement of visual neural responses. However, the relation between perceived contrast and the underlying neural responses has not been characterized. In this study, we systematically varied stimulus contrast, using a two-alternative, forced-choice comparison task to probe the effect of attention on appearance across the contrast range. We modeled performance in the task as a function of underlying neural contrast-response functions. Fitting this model to the observed data revealed that an increased input baseline in the neural responses accounted for the enhancement of apparent contrast with spatial attention. © 2014 ARVO.
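
    One common way to formalize an "increased input baseline" is a Naka-Rushton contrast-response function with an additive term on the contrast input. This sketch is our illustration of that idea, not the study's fitted model; the parameter values are arbitrary.

```python
def contrast_response(c, baseline=0.0, r_max=1.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast-response function with an additive input
    baseline b:  R(c) = r_max * (c + b)^n / ((c + b)^n + c50^n).
    Raising b shifts the response curve, so an attended stimulus of lower
    physical contrast can evoke the same response (and hence the same
    apparent contrast) as a higher-contrast unattended one."""
    drive = (c + baseline) ** n
    return r_max * drive / (drive + c50 ** n)

# same physical contrast, with and without an attentional input baseline
unattended = contrast_response(0.20)
attended = contrast_response(0.20, baseline=0.05)
assert attended > unattended  # attention boosts the response to equal contrast
```

    Fitting such a family of curves to the two-alternative forced-choice data is one way to test whether a baseline shift, rather than a gain change, best explains the appearance effect.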

  7. Low-cost USB interface for operant research using Arduino and Visual Basic.

    Science.gov (United States)

    Escobar, Rogelio; Pérez-Herrera, Carlos A

    2015-03-01

    This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes a program written in the Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with the Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 input/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control the 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
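
    The note's programs are written in Arduino code and Visual Basic; as a language-neutral illustration, the schedule logic such software implements can be sketched in Python (class names and parameters are hypothetical, not the authors' code):

```python
import random

class FixedRatio:
    """Deliver a reinforcer after every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def response(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableInterval:
    """Reinforce the first response after a randomized interval elapses."""
    def __init__(self, mean_s, seed=0):
        self.rng = random.Random(seed)
        self.mean_s = mean_s
        self.next_at = self.rng.expovariate(1.0 / mean_s)
        self.clock = 0.0

    def response(self, dt):
        self.clock += dt
        if self.clock >= self.next_at:
            self.clock = 0.0
            self.next_at = self.rng.expovariate(1.0 / self.mean_s)
            return True
        return False

fr5 = FixedRatio(5)
outcomes = [fr5.response() for _ in range(10)]  # reinforced on responses 5, 10
```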

  8. Adding sound to theory of mind: Comparing children's development of mental-state understanding in the auditory and visual realms.

    Science.gov (United States)

    Hasni, Anita A; Adamson, Lauren B; Williamson, Rebecca A; Robins, Diana L

    2017-12-01

    Theory of mind (ToM) gradually develops during the preschool years. Measures of ToM usually target visual experience, but auditory experiences also provide valuable social information. Given differences between the visual and auditory modalities (e.g., sights persist, sounds fade) and the important role environmental input plays in social-cognitive development, we asked whether modality might influence the progression of ToM development. The current study expands Wellman and Liu's ToM scale (2004) by testing 66 preschoolers using five standard visual ToM tasks and five newly crafted auditory ToM tasks. Age and gender effects were found, with 4- and 5-year-olds demonstrating greater ToM abilities than 3-year-olds and girls passing more tasks than boys; there was no significant effect of modality. Both visual and auditory tasks formed a scalable set. These results indicate that there is considerable consistency in when children are able to use visual and auditory inputs to reason about various aspects of others' mental states. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Information matching the content of visual working memory is prioritized for conscious access.

    Science.gov (United States)

    Gayet, Surya; Paffen, Chris L E; Van der Stigchel, Stefan

    2013-12-01

    Visual working memory (VWM) is used to retain relevant information for imminent goal-directed behavior. In the experiments reported here, we found that VWM helps to prioritize relevant information that is not yet available for conscious experience. In five experiments, we demonstrated that information matching VWM content reaches visual awareness faster than does information not matching VWM content. Our findings suggest a functional link between VWM and visual awareness: The content of VWM is recruited to funnel down the vast amount of sensory input to that which is relevant for subsequent behavior and therefore requires conscious access.

  10. Can Simulation Credibility Be Improved Using Sensitivity Analysis to Understand Input Data Effects on Model Outcome?

    Science.gov (United States)

    Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.

    2015-01-01

    Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes the communication of the potential level of risk of using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
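
    The two scoring rules can be contrasted in a few lines (a hedged sketch: the 0-4 scale, scores and sensitivities below are hypothetical, and the weighted variant is one reading of the proposed mechanism, not part of the standard):

```python
def pedigree_min(scores):
    """NASA-STD-7009 rule: the lowest-quality input sets the pedigree score."""
    return min(scores)

def pedigree_weighted(scores, sensitivities):
    """Illustrative alternative: weight each input's quality score by the
    model output's normalized sensitivity to that input."""
    total = sum(sensitivities)
    return sum(s * w for s, w in zip(scores, sensitivities)) / total

scores = [4, 4, 1]            # hypothetical 0-4 quality scores per input
sens   = [0.60, 0.35, 0.05]   # output barely depends on the weak input
```

    With these numbers the min rule reports 1 (driven entirely by the weak input), while sensitivity weighting reports 3.85, reflecting that the weak input barely affects the output.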

  11. High-resolution Self-Organizing Maps for advanced visualization and dimension reduction.

    Science.gov (United States)

    Saraswati, Ayu; Nguyen, Van Tuc; Hagenbuchner, Markus; Tsoi, Ah Chung

    2018-05-04

    Kohonen's Self-Organizing feature Map (SOM) provides an effective way to project high-dimensional input features onto a low-dimensional display space while preserving the topological relationships among the input features. Recent advances in algorithms that take advantage of modern computing hardware introduced the concept of high-resolution SOMs (HRSOMs). This paper investigates the capabilities and applicability of the HRSOM as a visualization tool for cluster analysis and its suitability to serve as a pre-processor in ensemble learning models. The evaluation is conducted on a number of established benchmarks and real-world learning problems, namely, the policeman benchmark, two web spam detection problems, a network intrusion detection problem, and a malware detection problem. It is found that the visualization resulting from an HRSOM provides new insights concerning these learning problems. It is furthermore shown empirically that broad benefits from the use of HRSOMs can be expected in both clustering and classification problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
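
    For readers unfamiliar with the underlying algorithm, a minimal conventional SOM is sketched below in pure Python; an HRSOM applies the same update rule on a much larger grid using modern hardware (the grid size and training constants here are arbitrary):

```python
import math
import random

def train_som(data, rows, cols, iters=1500, seed=0):
    """Minimal conventional SOM; returns the trained grid of weight vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[[rng.random() for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for t in range(iters):
        x = rng.choice(data)
        frac = 1.0 - t / iters
        lr = 0.5 * frac                              # decaying learning rate
        sigma = max(rows, cols) / 2.0 * frac + 0.5   # shrinking neighbourhood
        # best-matching unit (BMU): grid cell whose weights are nearest to x
        bi, bj = min(((i, j) for i in range(rows) for j in range(cols)),
                     key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                        for k in range(dim)))
        # pull every unit toward x, weighted by grid distance to the BMU
        for i in range(rows):
            for j in range(cols):
                h = math.exp(-((i - bi) ** 2 + (j - bj) ** 2)
                             / (2.0 * sigma * sigma))
                for k in range(dim):
                    w[i][j][k] += lr * h * (x[k] - w[i][j][k])
    return w

def bmu(w, x):
    """Grid coordinates of the best-matching unit for sample x."""
    rows, cols, dim = len(w), len(w[0]), len(x)
    return min(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                  for k in range(dim)))

# Two well-separated clusters end up in different regions of the map:
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
        (1.0, 1.0), (0.9, 1.0), (1.0, 0.9)]
grid = train_som(data, 4, 4, seed=1)
```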

  12. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files, and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.

  13. REcall Venice - Exploring disciplines of visual literacy through difficult heritage

    DEFF Research Database (Denmark)

    Tvedebrink, Tenna Doktor Olsen; Fisker, Anna Marie; Møller, Hans Ramsgaard

    2015-01-01

    According to James Elkin visual literacy is interpreted as material representations, which communicate knowledge and create insight through their visual appearance. Based on the EU Cultural Heritage project REcall, we argue that visual literacy can also relate to interdisciplinary knowledge rooted......, and archeologists question the role of architectural environments when dealing with war heritage. Today, there are still traces left from WWII in the European architectural environments, traces that by visual literacy represent unpleasant memories. However, these visual literacies have shaped our environment, yet...

  14. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    Science.gov (United States)

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  15. Set Theory Applied to Uniquely Define the Inputs to Territorial Systems in Emergy Analyses

    Science.gov (United States)

    The language of set theory can be utilized to represent the emergy involved in all processes. In this paper we use set theory in an emergy evaluation to ensure an accurate representation of the inputs to territorial systems. We consider a generic territorial system and we describ...

  16. Trends in interactive visualization state-of-the-art survey

    CERN Document Server

    Wu, Xindong

    2008-01-01

    The purpose of Interactive Visualization is to develop scientific methods to increase scientists' abilities to explore data and to understand better the results of experiments based on extensive calculations. This book provides readers with insight into Interactive Visualization from various perspectives, representing the developments in the field.

  17. Partial recovery of visual-spatial remapping of touch after restoring vision in a congenitally blind man.

    Science.gov (United States)

    Ley, Pia; Bottari, Davide; Shenoy, Bhamy H; Kekunnaya, Ramesh; Röder, Brigitte

    2013-05-01

    In an initial processing step, sensory events are encoded in modality specific representations in the brain but seem to be automatically remapped into a supra-modal, presumably visual-external frame of reference. To test whether there is a sensitive phase in the first years of life during which visual input is crucial for the acquisition of this remapping process, we tested a single case of a congenitally blind man whose sight was restored after the age of two years. HS performed a tactile temporal order judgment task (TOJ) which required judging the temporal order of two tactile stimuli, one presented to each index finger. In addition, a visual-tactile cross-modal congruency task was run, in which spatially congruent and spatially incongruent visual distractor stimuli were presented together with tactile stimuli. The tactile stimuli had to be localized. Both tasks were performed with an uncrossed and a crossed hand posture. Similar to congenitally blind individuals HS did not show a crossing effect in the tactile TOJ task suggesting an anatomical rather than visual-external coding of touch. In the visual-tactile task, however, external remapping of touch was observed though incomplete compared to sighted controls. These data support the hypothesis of a sensitive phase for the acquisition of an automatic use of visual-spatial representations for coding tactile input. Nonetheless, these representations seem to be acquired to some extent after the end of congenital blindness but seem to be recruited only in the context of visual stimuli and are used with a reduced efficiency. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Filling the Astronomical Void - A Visual Medium for a Visual Subject

    Science.gov (United States)

    Ryan, J.

    1996-12-01

    Astronomy is fundamentally a visual subject. The modern science of astronomy has at its foundation the ancient art of observing the sky visually. The visual elements of astronomy are arguably the most important. Every person in the entire world is affected by visually-observed astronomical phenomena such as the seasonal variations in daylight. However, misconceptions abound and the average person cannot recognize the simple signs in the sky that point to the direction, the hour and the season. Educators and astronomy popularizers widely lament that astronomy is not appreciated in our society. Yet, there is a remarkable dearth of popular literature for teaching the visual elements of astronomy. This is what I refer to as *the astronomical void.* Typical works use illustrations sparsely, relying most heavily on text-based descriptions of the visual astronomical phenomena. Such works leave significant inferential gaps to the inexperienced reader, who is unequipped for making astronomical observations. Thus, the astronomical void remains unfilled by much of the currently available literature. I therefore propose the introduction of a visually-oriented medium for teaching the visual elements of Astronomy. To this end, I have prepared a series of astronomy "comic strips" that are intended to fill the astronomical void. By giving the illustrations the central place, the comic strip medium permits the depiction of motion and other sequential activity, thus effectively representing astronomical phenomena. In addition to the practical advantages, the comic strip is a "user friendly" medium that is inviting and entertaining to a reader. At the present time, I am distributing a monthly comic strip entitled *Starman*, which appears in the newsletters of over 120 local astronomy organizations and on the web at http://www.cyberdrive.net/starman.
I hope to eventually publish a series of full-length books and believe that astronomical comic strips will help expand the perimeter of

  19. Prestimulus neural oscillations inhibit visual perception via modulation of response gain.

    Science.gov (United States)

    Chaumon, Maximilien; Busch, Niko A

    2014-11-01

    The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations (but not more anterior mu oscillations) reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.

  20. Cortico-Cortical Receptive Field Estimates in Human Visual Cortex

    Directory of Open Access Journals (Sweden)

    Koen V Haak

    2012-05-01

    Full Text Available Human visual cortex comprises many visual areas that contain a map of the visual field (Wandell et al 2007, Neuron 56, 366–383). These visual field maps can be identified readily in individual subjects with functional magnetic resonance imaging (fMRI) during experimental sessions that last less than an hour (Wandell and Winawer 2011, Vis Res 718–737). Hence, visual field mapping with fMRI has been, and still is, a heavily used technique to examine the organisation of both normal and abnormal human visual cortex (Haak et al 2011, ACNR, 11(3), 20–21). However, visual field mapping cannot reveal every aspect of human visual cortex organisation. For example, the information processed within a visual field map arrives from somewhere and is sent to somewhere, and visual field mapping does not derive these input/output relationships. Here, we describe a new, model-based analysis for estimating the dependence between signals in distinct cortical regions using functional magnetic resonance imaging (fMRI) data. Just as a stimulus-referred receptive field predicts the neural response as a function of the stimulus contrast, the neural-referred receptive field predicts the neural response as a function of responses elsewhere in the nervous system. When applied to two cortical regions, this function can be called the cortico-cortical receptive field (CCRF). We model the CCRF as a Gaussian-weighted region on the cortical surface and apply the model to data from both stimulus-driven and resting-state experimental conditions in visual cortex.
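
    The CCRF idea can be sketched numerically (a deliberately simplified stand-in: 1-D positions replace the folded 2-D cortical surface, and the data are hypothetical):

```python
import math

def ccrf_predict(source_ts, source_pos, center, sigma):
    """Predict a target timecourse as a Gaussian-weighted sum of source-region
    vertex timecourses. Positions are 1-D stand-ins for locations on the
    cortical surface; the paper's model lives on the folded 2-D surface."""
    weights = [math.exp(-((p - center) ** 2) / (2.0 * sigma ** 2))
               for p in source_pos]
    total = sum(weights)
    weights = [wgt / total for wgt in weights]
    n_time = len(source_ts[0])
    return [sum(wgt * ts[t] for wgt, ts in zip(weights, source_ts))
            for t in range(n_time)]

# Three hypothetical V1 vertex timecourses at cortical positions 0, 1, 2;
# a narrow CCRF centered on position 1 reproduces the middle vertex.
v1_ts = [[1.0, 2.0], [10.0, 20.0], [100.0, 200.0]]
pred = ccrf_predict(v1_ts, [0.0, 1.0, 2.0], center=1.0, sigma=0.1)
```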

  1. A Cross-Layer Approach for Maximizing Visual Entropy Using Closed-Loop Downlink MIMO

    Directory of Open Access Journals (Sweden)

    Hyungkeuk Lee

    2008-07-01

    Full Text Available We propose an adaptive video transmission scheme to achieve unequal error protection in a closed loop multiple input multiple output (MIMO system for wavelet-based video coding. In this scheme, visual entropy is employed as a video quality metric in agreement with the human visual system (HVS, and the associated visual weight is used to obtain a set of optimal powers in the MIMO system for maximizing the visual quality of the reconstructed video. For ease of cross-layer optimization, the video sequence is divided into several streams, and the visual importance of each stream is quantified using the visual weight. Moreover, an adaptive load balance control, named equal termination scheduling (ETS, is proposed to improve the throughput of visually important data with higher priority. An optimal solution for power allocation is derived as a closed form using a Lagrangian relaxation method. In the simulation results, a highly improved visual quality is demonstrated in the reconstructed video via the cross-layer approach by means of visual entropy.
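
    The paper's closed-form solution is not reproduced in the abstract; the sketch below numerically solves a visually-weighted rate-maximization problem of the same flavor, by bisection on the Lagrange multiplier (the weights, gains and objective are assumptions, not the paper's exact formulation):

```python
def weighted_waterfilling(weights, gains, total_power, iters=100):
    """Maximize sum_i w_i * log(1 + p_i * g_i) subject to sum_i p_i <= P.
    KKT conditions give p_i = max(0, w_i / lam - 1 / g_i); bisect on lam."""
    lo, hi = 1e-9, 1e9
    p = [0.0] * len(weights)
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        p = [max(0.0, w / lam - 1.0 / g) for w, g in zip(weights, gains)]
        if sum(p) > total_power:
            lo = lam   # allocation too generous: raise the "price" lam
        else:
            hi = lam
    return p

# Two wavelet streams over equal channels: the stream with the higher
# visual weight receives more transmit power.
alloc = weighted_waterfilling(weights=[2.0, 1.0], gains=[1.0, 1.0],
                              total_power=3.0)
```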

  2. Designing of all optical generalized circuit for two-input binary and multi-valued logical operations

    Science.gov (United States)

    Bhowmik, Panchatapa; Roy, Jitendra Nath; Chattopadhyay, Tanay

    2014-11-01

    This paper presents a generalized all optical circuit of two-input logical operation (both binary and multi-valued), using an optical nonlinear material (OPNLM) based switch. The inputs of the logic gates are represented by different polarization states of light. This model is simple, practical and very much useful for future all optical information processing. Proposed scheme can work for different wavelengths and for different materials. The simulation result with the nonlinear material gold nanoparticle embedded in optically transparent matrices alumina (Al2O3) is also presented in the paper.

  3. Understanding and representing natural language meaning

    Science.gov (United States)

    Waltz, D. L.; Maran, L. R.; Dorfman, M. H.; Dinitz, R.; Farwell, D.

    1982-12-01

    During this contract period the authors have: (1) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (2) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (3) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (4) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (5) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (6) constructed a general model for the representation of tense and aspect of verbs; (7) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.

  4. Frequency spectrum might act as communication code between retina and visual cortex I.

    Science.gov (United States)

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1). Fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and the pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. The amplitudes of the PERG and PVEP were about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of both signals, however, stayed unchanged, and neither signal showed second or higher harmonic generation. The results suggest that the retina encodes visual information in the frequency spectrum and then transfers it to the primary visual cortex, which accepts and deciphers the coded visual input from the retina. The frequency spectrum may thus act as a communication code between the retina and V1.
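
    The frequency-component analysis can be illustrated with a naive discrete Fourier transform (a sketch, not the study's FFT pipeline; the test signal is synthetic):

```python
import math

def dominant_frequency(signal, fs):
    """Return the nonzero frequency (Hz) with the largest DFT magnitude."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2 + 1):   # skip DC; positive frequencies only
        re = sum(signal[t] * math.cos(2.0 * math.pi * k * t / n)
                 for t in range(n))
        im = -sum(signal[t] * math.sin(2.0 * math.pi * k * t / n)
                  for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# A 5 Hz sinusoid sampled at 100 Hz for one second:
fs, n = 100.0, 100
sig = [math.sin(2.0 * math.pi * 5.0 * t / fs) for t in range(n)]
```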

  5. Studies of Visual Attention in Physics Problem Solving

    Science.gov (United States)

    Madsen, Adrian M.

    2013-01-01

    The work described here represents an effort to understand and influence visual attention while solving physics problems containing a diagram. Our visual system is guided by two types of processes--top-down and bottom-up. The top-down processes are internal and determined by one's prior knowledge and goals. The bottom-up processes are external and…

  6. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
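
    The polynomial chaos step can be illustrated on a case with a known expansion: for Y = exp(Z) with Z ~ N(0,1), the probabilists'-Hermite PCE coefficients are c_k = e^{1/2}/k! (only the PCE representation is shown; the paper's graphical-model dependence learning is not):

```python
import math

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k(x),
    via He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    if k == 0:
        return 1.0
    h_prev, h = 1.0, x
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

def pce_exp(z, order=20):
    """Truncated PCE of Y = exp(Z), Z ~ N(0,1): coefficients e^{1/2}/k!."""
    c0 = math.exp(0.5)
    return sum(c0 / math.factorial(k) * hermite(k, z)
               for k in range(order + 1))
```

    The truncated expansion converges rapidly in the standard random variable z, which is what makes the PCE representation attractive for the reduced-order input models described above.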

  7. SSYST-3. Input description

    International Nuclear Information System (INIS)

    Meyder, R.

    1983-12-01

    The code system SSYST-3 is designed to analyse the thermal and mechanical behaviour of a fuel rod during a LOCA. The report contains a complete input-list for all modules and several tested inputs for a LOCA analysis. (orig.)

  8. The mere exposure effect for visual image.

    Science.gov (United States)

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    Mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and a pair of dots corresponding to the neighboring vertices of an invisible polygon was sequentially flashed (in red), creating an invisible polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from allocations of numerical characters on its vertices, and then rated their preference for invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated the preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. Absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between exposure and rating phases plays an important role in the mere exposure effect.

  9. Linear Narratives, Arbitrary Relationships: Mimesis and Direct Communication for Effectively Representing Engineering Realities Multimodally

    Science.gov (United States)

    Jeyaraj, Joseph

    2017-01-01

    Engineers communicate multimodally using written and visual communication, but there is not much theorizing on why they do so and how. This essay, therefore, examines why engineers communicate multimodally, what, in the context of representing engineering realities, are the strengths and weaknesses of written and visual communication, and how,…

  10. Material input of nuclear fuel

    International Nuclear Information System (INIS)

    Rissanen, S.; Tarjanne, R.

    2001-01-01

    The Material Input (MI) of nuclear fuel, expressed in terms of the total amount of natural material needed for manufacturing a product, is examined. The suitability of the MI method for assessing the environmental impacts of fuels is also discussed. Material input is expressed as a Material Input Coefficient (MIC), equalling the total mass of natural material divided by the mass of the completed product. The material input coefficient is, however, only an intermediate result, which should not be used as such for the comparison of different fuels, because the energy content of nuclear fuel is about 100 000-fold that of fossil fuels. As a final result, the material input is expressed in proportion to the amount of generated electricity, which is called MIPS (Material Input Per Service unit). Material input is a simplified and commensurable indicator for the use of natural material, but because it does not take into account the harmfulness of materials or the way the residual material is processed, it does not by itself express the amount of environmental impacts. Examining the mere amount does not differentiate between, for example, coal, natural gas or waste rock, which usually contains just sand. Natural gas is, however, substantially more harmful to the ecosystem than sand. Therefore, other methods should also be used to assess the environmental load of a product. The material input coefficient of nuclear fuel is calculated using data from different types of mines. The calculations are made, among other things, using the data of an open pit mine (Key Lake, Canada), an underground mine (McArthur River, Canada) and a by-product mine (Olympic Dam, Australia). Furthermore, the coefficient is calculated for nuclear fuel corresponding to the nuclear fuel supply of the Teollisuuden Voima (TVO) company in 2001. Because there is some uncertainty in the initial data, the inaccuracy of the final results can be as much as 20-50 per cent.
The value
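
    The MIC and MIPS definitions reduce to simple ratios; the numbers below are purely illustrative, not the report's figures for any of the mines named above:

```python
def material_input_coefficient(natural_material_kg, product_mass_kg):
    """MIC: kg of natural material moved or processed per kg of product."""
    return natural_material_kg / product_mass_kg

def mips(natural_material_kg, electricity_kwh):
    """MIPS: material input per service unit, here kg per kWh generated."""
    return natural_material_kg / electricity_kwh

# Hypothetical numbers for one kg of finished nuclear fuel:
mic = material_input_coefficient(2000.0, 1.0)   # 2000 kg natural material
kg_per_kwh = mips(2000.0, 360000.0)             # if 1 kg fuel -> 360 MWh(e)
```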

  11. The relation between input-output transformation and gastrointestinal nematode infections on dairy farms.

    Science.gov (United States)

    van der Voort, M; Van Meensel, J; Lauwers, L; Van Huylenbroeck, G; Charlier, J

    2016-02-01

    Efficiency analysis is used for assessing links between the technical efficiency (TE) of livestock farms and animal diseases. However, previous studies often do not make the link with the allocation of inputs and mainly present average effects that ignore the often huge differences among farms. In this paper, we studied the relationship between exposure to gastrointestinal (GI) nematode infections, TE and input allocation on dairy farms. Although the traditional cost allocative efficiency (CAE) indicator adequately measures how a given input allocation differs from the cost-minimising input allocation, it does not represent the unique input allocation of farms: similar CAE scores may be obtained for farms with different input allocations. Therefore, we propose an adjusted allocative efficiency index (AAEI) to measure the unique input allocation of farms. Combining this AAEI with the TE score allows determining the unique input-output position of each farm. The method is illustrated by estimating efficiency scores using data envelopment analysis (DEA) on a sample of 152 dairy farms in Flanders for which both accountancy and parasitic monitoring data were available. Three groups of farms with a different input-output position can be distinguished based on cluster analysis: (1) technically inefficient farms with a relatively low use of concentrates per 100 l milk and a high exposure to infection, (2) farms with an intermediate TE, relatively high use of concentrates per 100 l milk and a low exposure to infection, and (3) farms with the highest TE, relatively low roughage use per 100 l milk and a relatively high exposure to infection. Correlation analysis indicates for each group how the level of exposure to GI nematodes is or is not associated with improved economic performance. The results suggest that improving both the economic performance and exposure to infection seems only of interest for highly TE farms. The findings indicate that current farm recommendations

  12. Visualization of Pulsar Search Data

    Science.gov (United States)

    Foster, R. S.; Wolszczan, A.

    1993-05-01

    The search for periodic signals from rotating neutron stars, or pulsars, has been a computationally taxing problem for astronomers for more than twenty-five years. Over this time interval, increases in computational capability have allowed ever more sensitive searches covering a larger parameter space. The volume of input data and the general presence of radio frequency interference typically produce numerous spurious signals. Visualization of the search output and enhanced real-time processing of significant candidate events allow the pulsar searcher to optimally process and search for new radio pulsars. The pulsar search algorithm and visualization system presented in this paper currently run on serial RISC-based workstations, a traditional vector-based supercomputer, and a massively parallel computer. The serial software algorithm and its modifications for massively parallel computing are described. Four successive searches for millisecond-period radio pulsars using the Arecibo telescope at 430 MHz have resulted in the successful detection of new long-period and millisecond-period radio pulsars.

  13. A visual Fortran user interface for CITATION code

    International Nuclear Information System (INIS)

    Albarhoum, M.; Zaidan, N.

    2006-11-01

    A user interface is designed to enable running the CITATION code under Windows. Four sections of the CITATION input file are arranged in the form of four interfaces, in which all the parameters of a section can be modified dynamically. Help for each parameter (item) can be read from the general help for its section, which, in turn, can be displayed by selecting the section from the program's main menu. (author)

  14. Chemical sensors are hybrid-input memristors

    Science.gov (United States)

    Sysoev, V. I.; Arkhipov, V. E.; Okotrub, A. V.; Pershin, Y. V.

    2018-04-01

    Memristors are two-terminal electronic devices whose resistance depends on the history of the input signal (voltage or current). Here we demonstrate that chemical gas sensors can be considered as memristors with a generalized (hybrid) input, namely, an input consisting of the voltage, analyte concentrations and applied temperature. The concept of hybrid-input memristors is demonstrated experimentally using a single-walled carbon nanotube chemical sensor. It is shown that, with respect to the hybrid input, the sensor exhibits features common to memristors, such as hysteretic input-output characteristics. This different perspective on chemical gas sensors may open new possibilities for smart sensor applications.

  15. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected, and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the required training time, and substantially improves the model's accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
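The RF-based selection step can be sketched as follows (a minimal illustration on synthetic data, not the paper's actual candidate variables or its kernel extreme learning machine): fit a random forest, rank candidates by impurity-based importance, and keep the top-k as the feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))           # six candidate lagged input variables
y = 2.0 * X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=500)  # only two are informative

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = np.argsort(rf.feature_importances_)[::-1]   # most important first
selected = sorted(int(i) for i in ranked[:2])        # keep the top-k candidates
```

In practice k would be chosen by re-evaluating forecast error for each candidate subset with the downstream model, as the paper does.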

  16. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    Science.gov (United States)

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

    Visual recognition involves bidirectional information flow, consisting of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two perceptual learning tasks. We develop a model of V1 that receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information across multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors.

  18. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina; Madsen, Nadia; Clausen, Jens

    2006-01-01

    An ontology is a classification model for a given domain.In information retrieval ontologies are used to perform broad searches.An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology....... One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version...

  19. Finding the best visualization of an ontology

    DEFF Research Database (Denmark)

    Fabritius, Christina Valentin; Madsen, Nadia Lyngaa; Clausen, Jens

    2004-01-01

    An ontology is a classification model for a given domain. In information retrieval ontologies are used to perform broad searches. An ontology can be visualized as nodes and edges. Each node represents an element and each edge a relation between a parent and a child element. Working with an ontology....... One method uses a discrete location model to create an initial solution and we propose heuristic methods to further improve the visual result. We evaluate the visual results according to our success criteria and the feedback from users. Running times of the heuristic indicate that an improved version...

  20. An amodal shared resource model of language-mediated visual attention

    Directory of Open Access Journals (Sweden)

    Alastair Charles Smith

    2013-08-01

    Full Text Available Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behaviour and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.

  1. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    Directory of Open Access Journals (Sweden)

    Chintan A Trivedi

    2013-05-01

    Full Text Available Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed towards the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  2. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    Science.gov (United States)

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322

  3. Impact of magnetic saturation on the input-output linearising tracking control of an induction motor

    DEFF Research Database (Denmark)

    Dolinar, Drago; Ljusev, Petar; Stumberger, Gorazd

    2004-01-01

    This paper deals with the tracking control design of an induction motor, based on input-output linearization with magnetic saturation included. Magnetic saturation is represented by the nonlinear magnetizing curve of the iron core and is used in the control design, the observer of state variables......, and in the load torque estimator. An input-output linearising control is used to achieve better tracking performances of the drive. It is based on the mixed "stator current - rotor flux linkage" induction motor model with magnetic saturation considered in the stationary reference frame. Experimental results show...... that the proposed input-output linearising tracking control with the included saturation behaves considerably better than the one without saturation, and that it introduces smaller position and speed errors, and better motor stiffness on account of the increased computational complexity....

  4. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    Science.gov (United States)

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.
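A common way to test whether redundancy gains exceed what a race model allows, and are thus consistent with coactivation, is Miller's race-model inequality. The sketch below is a generic illustration of that test on simulated reaction times, not the authors' exact analysis:

```python
import numpy as np

def race_inequality_violated(rt_red, rt_a, rt_b, t_grid):
    """Miller (1982): under any race model, F_red(t) <= F_a(t) + F_b(t).
    A violation at some time point is evidence for coactivated processing."""
    def ecdf(samples, t):
        return np.searchsorted(np.sort(samples), t, side="right") / samples.size
    return any(ecdf(rt_red, t) > ecdf(rt_a, t) + ecdf(rt_b, t) for t in t_grid)

# Simulated reaction times (ms): redundant (bimodal) targets markedly faster
# than either single modality, as coactivation predicts.
rng = np.random.default_rng(2)
rt_a = rng.normal(300, 30, 1000)
rt_b = rng.normal(300, 30, 1000)
rt_red = rng.normal(240, 30, 1000)
violated = race_inequality_violated(rt_red, rt_a, rt_b, np.linspace(200, 400, 50))
```

For these simulated data the redundant-target CDF overtakes the summed single-modality CDFs at early time points, so the race bound is violated.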

  5. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans

    DEFF Research Database (Denmark)

    Putzar, L.; Gondan, Matthias; Röder, B.

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant...... information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions...... in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth....

  6. RIP Input Tables From WAPDEG for LA Design Selection: Continuous Pre-Closure Ventilation

    International Nuclear Information System (INIS)

    K.G. Mon

    1999-01-01

    The purpose of this calculation is to document the creation of tables for input into the Integrated Probabilistic Simulator for Environmental Systems (RIP) version 5.19.01 (Golder Associates 1998) from Waste Package Degradation (WAPDEG) version 3.09 (CRWMS M and O 1998b, "Software Routine Report for WAPDEG" (Version 3.09)) simulations. This calculation details the creation of the RIP input tables (representing waste package corrosion degradation over time) for the License Application Design Selection (LADS) analysis of the effects of continuous pre-closure ventilation. Ventilation during the operational phase of the repository could remove considerable water from the system, as well as reduce temperatures. Pre-closure ventilation is LADS Design Feature 7

  7. SBOL Visual: A Graphical Language for Genetic Designs

    Science.gov (United States)

    Adler, Aaron; Beal, Jacob; Bhatia, Swapnil; Cai, Yizhi; Chen, Joanna; Clancy, Kevin; Galdzicki, Michal; Hillson, Nathan J.; Le Novère, Nicolas; Maheshwari, Akshay J.; McLaughlin, James Alastair; Myers, Chris J.; P, Umesh; Pocock, Matthew; Rodriguez, Cesar; Soldatova, Larisa; Stan, Guy-Bart V.; Swainston, Neil; Wipat, Anil; Sauro, Herbert M.

    2015-01-01

    Synthetic Biology Open Language (SBOL) Visual is a graphical standard for genetic engineering. It consists of symbols representing DNA subsequences, including regulatory elements and DNA assembly features. These symbols can be used to draw illustrations for communication and instruction, and as image assets for computer-aided design. SBOL Visual is a community standard, freely available for personal, academic, and commercial use (Creative Commons CC0 license). We provide prototypical symbol images that have been used in scientific publications and software tools. We encourage users to use and modify them freely, and to join the SBOL Visual community: http://www.sbolstandard.org/visual. PMID:26633141

  8. Pain hypersensitivity in congenital blindness is associated with faster central processing of C-fibre input

    DEFF Research Database (Denmark)

    Slimani, H.; Plaghki, L.; Ptito, M.

    2016-01-01

    Background We have recently shown that visual deprivation from birth exacerbates responses to painful thermal stimuli. However, the mechanisms underlying pain hypersensitivity in congenital blindness are unclear. Methods To study the contribution of Aδ- and C-fibres in pain perception, we measure...... The increased sensitivity to painful thermal stimulation in congenital blindness may be due to more efficient central processing of C-fibre–mediated input, which may help to avoid impending dangerous encounters with stimuli that threaten the bodily integrity....

  9. H∞ Loop Shaping Control of Input Saturated Systems with Norm-Bounded Parametric Uncertainty

    Directory of Open Access Journals (Sweden)

    Renan Lima Pereira

    2015-01-01

    Full Text Available This paper proposes a gain-scheduling control design strategy for a class of linear systems in the presence of both input saturation constraints and norm-bounded parametric uncertainty. LMI conditions are derived in order to obtain a gain-scheduled controller that ensures the robust stability and performance of the closed-loop system. The main steps to obtain such a controller are given. Differently from other gain-scheduled approaches in the literature, this one focuses on the problem of H∞ loop shaping control design with input saturation nonlinearity and norm-bounded uncertainty to reduce the effect of the disturbance input on the controlled outputs. Here, the design problem has been formulated in the four-block H∞ synthesis framework, in which it is possible to describe the parametric uncertainty and the input saturation nonlinearity as perturbations to normalized coprime factors of the shaped plant. As a result, the shaped plant is represented as a linear parameter-varying (LPV) system while the norm-bounded uncertainty and input saturation are incorporated. This procedure yields a linear parameter-varying structure for the controller that ensures the stability of the polytopic LPV shaped plant from the vertex property. Finally, the effectiveness of the method is illustrated through application to a physical system: a VTOL (vertical take-off and landing) helicopter.

  10. Visual physics analysis-from desktop to physics analysis at your fingertips

    International Nuclear Information System (INIS)

    Bretz, H-P; Erdmann, M; Fischer, R; Hinzmann, A; Klingebiel, D; Komm, M; Lingemann, J; Rieger, M; Müller, G; Steggemann, J; Winchen, T

    2012-01-01

    Visual Physics Analysis (VISPA) is an analysis environment with applications in high energy and astroparticle physics. Based on a data-flow-driven paradigm, it allows users to combine graphical steering with self-written C++ and Python modules. This contribution presents new concepts integrated in VISPA: layers, convenient analysis execution, and web-based physics analysis. While convenient execution offers full flexibility to vary settings for the execution phase of an analysis, layers allow users to create different views of the analysis already during its design phase. Thus, one application of layers is to define different stages of an analysis (e.g. event selection and statistical analysis). However, there are other use cases, such as independently optimizing settings for different types of input data in order to guide all data through the same analysis flow. The new execution feature makes job submission to local clusters as well as the LHC Computing Grid possible directly from VISPA. Web-based physics analysis is realized in the VISPA-Web project, which represents a whole new way to design and execute analyses via a standard web browser.

  11. Neural decoding of visual imagery during sleep.

    Science.gov (United States)

    Horikawa, T; Tamaki, M; Miyawaki, Y; Kamitani, Y

    2013-05-03

    Visual imagery during sleep has long been a topic of persistent speculation, but its private nature has hampered objective analysis. Here we present a neural decoding approach in which machine-learning models predict the contents of visual imagery during the sleep-onset period, given measured brain activity, by discovering links between human functional magnetic resonance imaging patterns and verbal reports with the assistance of lexical and image databases. Decoding models trained on stimulus-induced brain activity in visual cortical areas showed accurate classification, detection, and identification of contents. Our findings demonstrate that specific visual experience during sleep is represented by brain activity patterns shared by stimulus perception, providing a means to uncover subjective contents of dreaming using objective neural measurement.
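The core decoding idea, pattern classification of measured activity, can be sketched with synthetic data. This is a generic classifier sketch (made-up "voxel" patterns and a plain logistic-regression decoder), not the authors' actual fMRI pipeline or lexical-database machinery:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Toy "activity patterns": two stimulus categories, each with a distinct
# mean pattern over 50 voxels plus trial-by-trial noise.
centers = rng.normal(size=(2, 50))
X = np.vstack([c + 0.5 * rng.normal(size=(100, 50)) for c in centers])
y = np.repeat([0, 1], 100)

decoder = LogisticRegression(max_iter=1000).fit(X[::2], y[::2])  # train on half
accuracy = decoder.score(X[1::2], y[1::2])                       # test on the rest
```

The paper's key move is that a decoder trained this way on stimulus-induced activity generalizes to activity measured during sleep onset.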

  12. Interactive volume visualization of general polyhedral grids

    KAUST Repository

    Muigg, Philipp

    2011-12-01

    This paper presents a novel framework for visualizing volumetric data specified on complex polyhedral grids, without the need to perform any kind of a priori tetrahedralization. These grids are composed of polyhedra that often are non-convex and have an arbitrary number of faces, where the faces can be non-planar with an arbitrary number of vertices. The importance of such grids in state-of-the-art simulation packages is increasing rapidly. We propose a very compact, face-based data structure for representing such meshes for visualization, called two-sided face sequence lists (TSFSL), as well as an algorithm for direct GPU-based ray-casting using this representation. The TSFSL data structure is able to represent the entire mesh topology in a 1D TSFSL data array of face records, which facilitates the use of efficient 1D texture accesses for visualization. In order to scale to large data sizes, we employ a mesh decomposition into bricks that can be handled independently, where each brick is then composed of its own TSFSL array. This bricking enables memory savings and performance improvements for large meshes. We illustrate the feasibility of our approach with real-world application results, by visualizing highly complex polyhedral data from commercial state-of-the-art simulation packages. © 2011 IEEE.
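The general idea of packing variable-length face records into a single 1D array, so that face data can be fetched with flat indexed reads (as in 1D texture lookups), can be illustrated with a toy flattening. This is not the published TSFSL record layout, only the underlying principle:

```python
def flatten_faces(faces):
    """faces: list of (vertex_ids, front_cell, back_cell) tuples.
    Returns one flat int list plus per-face offsets for O(1) record access."""
    flat, offsets = [], []
    for verts, front, back in faces:
        offsets.append(len(flat))
        flat.extend([len(verts), *verts, front, back])  # one variable-length record
    return flat, offsets

def face_record(flat, offsets, i):
    """Decode face i back into (vertex_ids, front_cell, back_cell)."""
    o = offsets[i]
    n = flat[o]                                   # vertex count is stored first
    return flat[o + 1:o + 1 + n], flat[o + 1 + n], flat[o + 2 + n]

# Two faces of a toy polyhedral mesh: a quad and a pentagon (-1 = boundary).
mesh = [([0, 1, 2, 3], 0, 1), ([3, 2, 4, 5, 6], 1, -1)]
flat, offsets = flatten_faces(mesh)
verts, front, back = face_record(flat, offsets, 1)
```

Storing the two incident cells with each face is what lets a ray-caster walk from cell to cell through shared faces without any tetrahedralization.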

  13. Frequency spectrum might act as communication code between retina and visual cortex I

    Directory of Open Access Journals (Sweden)

    Xu Yang

    2015-12-01

    Full Text Available AIM: To explore changes in, and possible communication relationships between, local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS: Fourteen C57BL/6J mice were measured with pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. RESULTS: The amplitudes of the PERG and PVEP were about 36.7 µV and 112.5 µV, respectively; the dominant frequencies of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or higher harmonics. CONCLUSION: The results suggest that the retina encodes visual information in the frequency spectrum and then transfers it to the primary visual cortex. The primary visual cortex accepts and deciphers the input visual information coded by the retina. The frequency spectrum may act as a communication code between the retina and V1.
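The spectral measurements described here can be sketched in a few lines on a synthetic trace (function name and parameter values are illustrative, not the study's actual pipeline): the dominant frequency is the strongest non-DC bin, and the absence of harmonic generation corresponds to negligible power at integer multiples of that frequency.

```python
import numpy as np

def fundamental_and_harmonic_ratio(trace, fs):
    """Return the dominant frequency and the power ratio of its 2nd harmonic."""
    power = np.abs(np.fft.rfft(trace - trace.mean())) ** 2
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    k = int(np.argmax(power[1:])) + 1            # skip the DC bin
    return freqs[k], power[2 * k] / power[k]

# Synthetic evoked-potential-like trace: a pure 4 Hz component, ~40 uV peak.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = 40e-6 * np.sin(2 * np.pi * 4.0 * t)
f0, harmonic_ratio = fundamental_and_harmonic_ratio(trace, fs)
```

A harmonic ratio near zero is what "no second or higher harmonic generation" looks like in this representation.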

  14. Eye position effects on the remapped memory trace of visual motion in cortical area MST.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2016-02-23

    After a saccade, most MST neurons respond to moving visual stimuli that had existed in their post-saccadic receptive fields and turned off before the saccade ("trans-saccadic memory remapping"). Neuronal responses in higher visual processing areas are known to be modulated in relation to gaze angle to represent image location in spatiotopic coordinates. In the present study, we investigated the eye position effects after saccades and found that the gaze angle modulated the visual sensitivity of MST neurons after saccades both to the actually existing visual stimuli and to the visual memory traces remapped by the saccades. We suggest that two mechanisms, trans-saccadic memory remapping and gaze modulation, work cooperatively in individual MST neurons to represent a continuous visual world.

  15. The Visual Identity Project

    Science.gov (United States)

    Tennant-Gadd, Laurie; Sansone, Kristina Lamour

    2008-01-01

    Identity is the focus of the middle-school visual arts program at Cambridge Friends School (CFS) in Cambridge, Massachusetts. Sixth graders enter the middle school and design a personal logo as their first major project in the art studio. The logo becomes a way for students to introduce themselves to their teachers and to represent who they are…

  16. Visualization of fuel rod burnup analysis by Scilab

    International Nuclear Information System (INIS)

    Tsai, Chiung-Wen

    2013-01-01

    The goal of this technical note is to provide an alternative, the freeware Scilab, by which means we may construct custom GUIs and distribute them without extra constraints and cost. A post-processor has been constructed by Scilab to visualize the fuel rod burnup analysis data calculated by FRAPCON-3.4. This post-processor incorporates a graphical user interface (GUI), providing users a rapid overview of the characteristics of the numerical results with 2-D and 3-D graphs, as well as animations of the fuel temperature distribution. An assessment case input file provided by the FRAPCON user group was applied to demonstrate the construction of a post-processor with a GUI using an object-oriented GUI tool, as well as the capability of the visualization functions of Scilab.

  17. Visualization of fuel rod burnup analysis by Scilab

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Chiung-Wen, E-mail: d937121@oz.nthu.edu.tw

    2013-12-15

    The goal of this technical note is to provide an alternative, the freeware Scilab, by which means we may construct custom GUIs and distribute them without extra constraints and cost. A post-processor has been constructed by Scilab to visualize the fuel rod burnup analysis data calculated by FRAPCON-3.4. This post-processor incorporates a graphical user interface (GUI), providing users a rapid overview of the characteristics of the numerical results with 2-D and 3-D graphs, as well as animations of the fuel temperature distribution. An assessment case input file provided by the FRAPCON user group was applied to demonstrate the construction of a post-processor with a GUI using an object-oriented GUI tool, as well as the capability of the visualization functions of Scilab.

  18. SemVisM: semantic visualizer for medical image

    Science.gov (United States)

    Landaeta, Luis; La Cruz, Alexandra; Baranya, Alexander; Vidal, María.-Esther

    2015-01-01

    SemVisM is a toolbox that combines medical informatics and computer graphics tools for reducing the semantic gap between low-level features and high-level semantic concepts/terms in images. This paper presents a novel strategy for visualizing semantically annotated medical data, combining rendering techniques and segmentation algorithms. SemVisM comprises two main components: i) AMORE (A Modest vOlume REgister) to handle input data (RAW, DAT or DICOM) and to initially annotate the images using terms defined in medical ontologies (e.g., MeSH, FMA or RadLex), and ii) VOLPROB (VOlume PRObability Builder) for generating the annotated volumetric data containing the classified voxels that belong to a particular tissue. SemVisM is built on top of the semantic visualizer ANISE.

  19. Segregation of Visual Response Properties in the Mouse Superior Colliculus and Their Modulation during Locomotion

    Science.gov (United States)

    2017-01-01

    The superior colliculus (SC) receives direct input from the retina and integrates it with information about sound, touch, and state of the animal that is relayed from other parts of the brain to initiate specific behavioral outcomes. The superficial SC layers (sSC) contain cells that respond to visual stimuli, whereas the deep SC layers (dSC) contain cells that also respond to auditory and somatosensory stimuli. Here, we used a large-scale silicon probe recording system to examine the visual response properties of SC cells of head-fixed and alert male mice. We found cells with diverse response properties including: (1) orientation/direction-selective (OS/DS) cells with a firing rate that is suppressed by drifting sinusoidal gratings (negative OS/DS cells); (2) suppressed-by-contrast cells; (3) cells with complex-like spatial summation nonlinearity; and (4) cells with Y-like spatial summation nonlinearity. We also found specific response properties that are enriched in different depths of the SC. The sSC is enriched with cells with small receptive fields (RFs), high evoked firing rates (FRs), and sustained temporal responses, whereas the dSC is enriched with the negative OS/DS cells and with cells with large RFs, low evoked FRs, and transient temporal responses. Locomotion modulates the activity of the SC cells both additively and multiplicatively and changes the preferred spatial frequency of some SC cells. These results provide the first description of the negative OS/DS cells and demonstrate that the SC segregates cells with different response properties and that the behavioral state of a mouse affects SC activity. SIGNIFICANCE STATEMENT The superior colliculus (SC) receives visual input from the retina in its superficial layers (sSC) and induces eye/head-orientating movements and innate defensive responses in its deeper layers (dSC). Despite their importance, very little is known about the visual response properties of dSC neurons. 
Using high-density electrode recordings and novel

  20. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition.

    Directory of Open Access Journals (Sweden)

    Na Shu

    Humans can easily understand other people's actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose a surround suppressive operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model.
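The spatiotemporal Gabor stage described in this abstract can be sketched numerically. The kernel below is a generic 3-D Gabor (Gaussian envelope times a drifting cosine carrier); its form, size, and all parameter values are illustrative assumptions, not the paper's actual filter bank:

```python
import math

def gabor3d(size, sigma, f_spatial, theta, f_temporal):
    """Build a small 3-D (t, y, x) Gabor kernel tuned to orientation theta
    and a drift given by f_temporal (illustrative parameterization)."""
    half = size // 2
    kernel = [[[0.0] * size for _ in range(size)] for _ in range(size)]
    for t in range(size):
        for y in range(size):
            for x in range(size):
                dx, dy, dt = x - half, y - half, t - half
                # project spatial offset onto the preferred orientation
                u = dx * math.cos(theta) + dy * math.sin(theta)
                envelope = math.exp(-(dx * dx + dy * dy + dt * dt) / (2 * sigma ** 2))
                carrier = math.cos(2 * math.pi * (f_spatial * u + f_temporal * dt))
                kernel[t][y][x] = envelope * carrier
    return kernel

k = gabor3d(size=7, sigma=2.0, f_spatial=0.25, theta=0.0, f_temporal=0.1)
print(k[3][3][3])  # center: envelope = 1 and carrier = cos(0), so 1.0
```

Convolving a video volume with a bank of such kernels at several `theta` and `f_temporal` values would give the speed- and orientation-tuned responses the model builds on.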

  1. Impact of visual repetition rate on intrinsic properties of low frequency fluctuations in the visual network.

    Directory of Open Access Journals (Sweden)

    Yi-Chia Li

    BACKGROUND: The visual processing network is one of the functional networks that have been reliably identified as consistently existing in resting human brains. In our work, we focused on this network and investigated the intrinsic properties of low frequency (0.01-0.08 Hz) fluctuations (LFFs) during changes of visual stimuli. Two main questions are discussed in this study: intrinsic properties of LFFs regarding (1) interactions between visual stimuli and resting-state; (2) the impact of the repetition rate of visual stimuli. METHODOLOGY/PRINCIPAL FINDINGS: We analyzed scanning sessions that contained rest and visual stimuli at various repetition rates with a novel method. The method included three numerical approaches, involving ICA (Independent Component Analysis), fALFF (fractional Amplitude of Low Frequency Fluctuation), and Coherence, to respectively investigate the modulations of the visual network pattern, low frequency fluctuation power, and interregional functional connectivity during changes of visual stimuli. We discovered that when resting-state was replaced by visual stimuli, more areas were involved in visual processing, and both stronger low frequency fluctuations and higher interregional functional connectivity occurred in the visual network. With changes of visual repetition rate, the number of areas involved in visual processing, low frequency fluctuation power, and interregional functional connectivity in this network were also modulated. CONCLUSIONS/SIGNIFICANCE: Combining the results of prior literature and our discoveries, intrinsic properties of LFFs in the visual network are altered not only by modulations of endogenous factors (eye-open or eye-closed condition; alcohol administration) and disordered behaviors (early blindness), but also by exogenous sensory stimuli (visual stimuli at various repetition rates). This demonstrates that the intrinsic properties of LFFs are valuable for representing the physiological states of human brains.
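Of the three measures, fALFF has a particularly simple definition: the amplitude within the low-frequency band divided by the amplitude over the whole measured spectrum. A minimal stdlib-only sketch with a naive DFT and a toy sinusoid (not an fMRI preprocessing pipeline):

```python
import math

def falff(signal, fs, band=(0.01, 0.08)):
    """fractional ALFF: summed spectral amplitude inside `band` divided by
    the summed amplitude over all frequencies (naive DFT, DC excluded)."""
    n = len(signal)
    amps, freqs = [], []
    for k in range(1, n // 2 + 1):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(-signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        amps.append(math.hypot(re, im))
        freqs.append(k * fs / n)
    band_amp = sum(a for a, f in zip(amps, freqs) if band[0] <= f <= band[1])
    return band_amp / sum(amps)

# a 0.05 Hz sinusoid sampled every 2 s (a typical fMRI TR) lies in-band
fs = 0.5
sig = [math.sin(2 * math.pi * 0.05 * (t / fs)) for t in range(200)]
print(falff(sig, fs))  # close to 1.0: nearly all amplitude is in-band
```

The band limits 0.01-0.08 Hz match those quoted in the abstract; everything else (signal, sampling rate) is invented for the demonstration.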

  2. Linear and quadratic models of point process systems: contributions of patterned input to output.

    Science.gov (United States)

    Lindsay, K A; Rosenberg, J R

    2012-08-01

    In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
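The first-order (linear) term of such a point-process series is essentially a spike-triggered average of the output rate at each lag after an input spike. A toy sketch, assuming a synthetic input/output spike-train pair rather than muscle spindle data:

```python
import random

random.seed(1)

def linear_kernel(inp, out, max_lag):
    """First-order (linear) kernel estimate: the mean output rate at each
    lag after an input spike, minus the overall mean output rate."""
    mean_rate = sum(out) / len(out)
    spikes = [t for t in range(len(inp) - max_lag) if inp[t]]
    return [sum(out[t + u] for t in spikes) / len(spikes) - mean_rate
            for u in range(max_lag)]

# toy system: each input spike evokes an output spike 2 steps later (p = 0.8)
inp = [1 if random.random() < 0.1 else 0 for _ in range(5000)]
out = [0] * len(inp)
for t, x in enumerate(inp):
    if x and t + 2 < len(out) and random.random() < 0.8:
        out[t + 2] = 1

k = linear_kernel(inp, out, max_lag=5)
print(max(range(5), key=lambda u: k[u]))  # the imposed 2-step latency is the peak lag
```

Extending this to patterned (pairwise) input, as the paper does, would require the second-order term, a function of two lags estimated from third-order cross-cumulants.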

  3. The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades

    OpenAIRE

    Boon, Paul J.; Belopolsky, Artem V.; Theeuwes, Jan

    2016-01-01

    Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which ...

  4. Automation of RELAP5 input calibration and code validation using genetic algorithm

    International Nuclear Information System (INIS)

    Phung, Viet-Anh; Kööp, Kaspar; Grishchenko, Dmitry; Vorobyev, Yury; Kudinov, Pavel

    2016-01-01

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
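The calibration loop can be illustrated with a toy genetic algorithm: a stand-in function replaces RELAP5, and a weighted squared-error fitness compares simulated and "experimental" SRQs. All functions, parameter bounds, and numbers below are invented for illustration:

```python
import random

random.seed(0)

def model(p):
    """Stand-in for the thermal-hydraulic code: maps two uncertain input
    parameters to two system response quantities (SRQs)."""
    return [p[0] + 2 * p[1], p[0] * p[1]]

target = model([1.5, 0.5])   # pretend these are the measured SRQs
weights = [1.0, 1.0]         # per-SRQ weighting in the fitness function

def fitness(p):
    return -sum(w * (a - b) ** 2 for w, a, b in zip(weights, model(p), target))

def ga(bounds, pop_size=40, gens=60, mut=0.1):
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = [random.choice(g) for g in zip(a, b)]       # uniform crossover
            child = [g + random.gauss(0, mut) for g in child]   # Gaussian mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga([(0.0, 3.0), (0.0, 2.0)])
print(best, model(best))
```

As the abstract stresses, the choice of SRQs and of the `weights` entries is what shapes which parameter combinations the GA converges to; with only one SRQ in the fitness, many input combinations would match it equally well.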

  5. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict simultaneously experimental maximum flow rate and oscillation period. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs) taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used genetic algorithm (GA), a heuristic global optimization method, in order to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters in the calibration process. We demonstrate the importance of the proper selection of SRQs and respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the

  6. The effect of learning on the function of monkey extrastriate visual cortex.

    Directory of Open Access Journals (Sweden)

    Gregor Rainer

    2004-02-01

    One of the most remarkable capabilities of the adult brain is its ability to learn and continuously adapt to an ever-changing environment. While many studies have documented how learning improves the perception and identification of visual stimuli, relatively little is known about how it modifies the underlying neural mechanisms. We trained monkeys to identify natural images that were degraded by interpolation with visual noise. We found that learning led to an improvement in monkeys' ability to identify these indeterminate visual stimuli. We link this behavioral improvement to a learning-dependent increase in the amount of information communicated by V4 neurons. This increase was mediated by a specific enhancement in neural activity. Our results reveal a mechanism by which learning increases the amount of information that V4 neurons are able to extract from the visual environment. This suggests that V4 plays a key role in resolving indeterminate visual inputs by coordinated interaction between bottom-up and top-down processing streams.

  7. The effect of long-term changes in plant inputs on soil carbon stocks

    Science.gov (United States)

    Georgiou, K.; Li, Z.; Torn, M. S.

    2017-12-01

    Soil organic carbon (SOC) is the largest actively-cycling terrestrial reservoir of C and an integral component of thriving natural and managed ecosystems. C input interventions (e.g., litter removal or organic amendments) are common in managed landscapes and present an important decision for maintaining healthy soils in sustainable agriculture and forestry. Furthermore, climate and land-cover change can also affect the amount of plant C inputs that enter the soil through changes in plant productivity, allocation, and rooting depth. Yet, the processes that dictate the response of SOC to such changes in C inputs are poorly understood and inadequately represented in predictive models. Long-term litter manipulations are an invaluable resource for exploring key controls of SOC storage and validating model representations. Here we explore the response of SOC to long-term changes in plant C inputs across a range of biomes and soil types. We synthesize and analyze data from long-term litter manipulation field experiments, and focus our meta-analysis on changes to total SOC stocks, microbial biomass carbon, and mineral-associated ('protected') carbon pools and explore the relative contribution of above- versus below-ground C inputs. Our cross-site data comparison reveals that divergent SOC responses are observed between forest sites, particularly for treatments that increase C inputs to the soil. We explore trends among key variables (e.g., microbial biomass to SOC ratios) that inform soil C model representations. The assembled dataset is an important benchmark for evaluating process-based hypotheses and validating divergent model formulations.

  8. Filling-in and suppression of visual perception from context: a Bayesian account of perceptual biases by contextual influences.

    Directory of Open Access Journals (Sweden)

    Li Zhaoping

    2008-02-01

    Visual object recognition and sensitivity to image features are largely influenced by contextual inputs. We study influences by contextual bars on the bias to perceive or infer the presence of a target bar, rather than on the sensitivity to image features. Human observers judged from a briefly presented stimulus whether a target bar of a known orientation and shape is present at the center of a display, given a weak or missing input contrast at the target location with or without a context of other bars. Observers are more likely to perceive a target when the context has a weaker rather than stronger contrast. When the context can perceptually group well with the would-be target, weak contrast contextual bars bias the observers to perceive a target relative to the condition without contexts, as if to fill in the target. Meanwhile, high-contrast contextual bars, regardless of whether they group well with the target, bias the observers to perceive no target. A Bayesian model of visual inference is shown to account for the data well, illustrating that the context influences the perception in two ways: (1) biasing observers' prior belief that a target should be present according to visual grouping principles, and (2) biasing observers' internal model of the likely input contrasts caused by a target bar. According to this model, our data suggest that the context does not influence the perceived target contrast despite its influence on the bias to perceive the target's presence, thereby suggesting that cortical areas beyond the primary visual cortex are responsible for the visual inferences.
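The two contextual influences map naturally onto the two factors of Bayes' rule: the context sets the prior that a target is present, and the observer's internal contrast model sets the likelihood. A minimal numerical sketch with invented Gaussian likelihoods and parameter values (not the paper's fitted model):

```python
import math

def p_target(contrast, prior_present, mu_present=0.5, mu_absent=0.0, sigma=0.2):
    """Posterior probability that a target bar is present given the observed
    contrast; the context acts through prior_present (all numbers illustrative)."""
    def gauss(x, mu):
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    num = prior_present * gauss(contrast, mu_present)
    den = num + (1 - prior_present) * gauss(contrast, mu_absent)
    return num / den

weak = 0.15  # the same weak, ambiguous contrast at the target location
with_grouping = p_target(weak, prior_present=0.7)    # context groups with target
without_context = p_target(weak, prior_present=0.3)
print(with_grouping, without_context)
```

Identical sensory evidence yields a higher "target present" posterior when grouping raises the prior, which is the filling-in bias the abstract describes.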

  9. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Item to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  10. Visual search of illusory contours: Shape and orientation effects

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije

    2008-01-01

    Illusory contours are a specific class of visual stimuli: configurations perceived as integral wholes despite being given as fragmented, incomplete parts. Due to these specific features, illusory contours have gained much attention in the last decade as prototypical stimuli in investigations of the binding problem. A related question concerns the level at which illusory contours are visually processed. Neurophysiological studies show that processing of illusory contours proceeds relatively early, at the level of V2; on the other hand, most experimental studies claim that illusory contours are perceived through the engagement of visual attention, which binds their elements into a whole percept. This research comprises two experiments in which visual search for illusory contours is based on shape and orientation. The main experimental procedure adapted the task proposed by Bravo and Nakayama: instead of detection, subjects performed identification of one of two possible targets. In the first experiment, subjects detected the presence of an illusory square or an illusory triangle, while in the second experiment subjects detected two different orientations of an illusory triangle. The results are interpreted in terms of visual search and feature integration theory. Besides the type of visual search task, the search type proved to depend on specific features of the illusory shapes, which further complicates theoretical interpretation of the level of their perception.

  11. Learned image representations for visual recognition

    DEFF Research Database (Denmark)

    Larsen, Anders Boesen Lindbo

    This thesis addresses the problem of extracting image structures for representing images effectively in order to solve visual recognition tasks. Problems from diverse research areas (medical imaging, material science and food processing) have motivated large parts of the methodological development...

  12. Brain regions activated by the passive processing of visually- and auditorily-presented words measured by averaged PET images of blood flow change

    International Nuclear Information System (INIS)

    Peterson, S.E.; Fox, P.T.; Posner, M.I.; Raichle, M.E.

    1987-01-01

    A limited number of regions specific to input modality are activated by the auditory and visual presentation of single words. These regions include primary auditory and visual cortex, and modality-specific higher-order regions that may be performing computations at a word level of analysis.

  13. Dynamic Output Feedback Robust MPC with Input Saturation Based on Zonotopic Set-Membership Estimation

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2016-01-01

    For quasi-linear parameter varying (quasi-LPV) systems with bounded disturbance, a synthesis approach for dynamic output feedback robust model predictive control (OFRMPC) with consideration of input saturation is investigated. The saturated dynamic output feedback controller is represented by a convex hull involving the actual dynamic output controller and an introduced auxiliary controller. By taking both the actual output feedback controller and the auxiliary controller in a parameter-dependent form, the main optimization problem can be formulated as convex optimization. The consideration of input saturation in the main optimization problem reduces the conservatism of the dynamic output feedback controller design. The estimation error set and bounded disturbance are represented by zonotopes and refreshed by zonotopic set-membership estimation. Compared with previous results, the proposed algorithm can not only guarantee the recursive feasibility of the optimization problem, but also improve the control performance at the cost of higher computational burden. A nonlinear continuous stirred tank reactor (CSTR) example is given to illustrate the effectiveness of the approach.
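Zonotopes make set-membership updates cheap because the Minkowski sum of two zonotopes is obtained simply by concatenating their generator lists. A bare-bones sketch of that property (an illustrative class, not the paper's estimator):

```python
class Zonotope:
    """Z = {c + G x : |x_i| <= 1}, stored as a center c and generator vectors G."""

    def __init__(self, center, generators):
        self.c = center        # list of floats, one per dimension
        self.G = generators    # list of generator vectors

    def minkowski_sum(self, other):
        # the sum of two zonotopes: add centers, concatenate generators
        c = [a + b for a, b in zip(self.c, other.c)]
        return Zonotope(c, self.G + other.G)

    def interval_hull(self):
        # tight axis-aligned bounds: c_i +/- sum_j |G_j[i]|
        r = [sum(abs(g[i]) for g in self.G) for i in range(len(self.c))]
        return [(ci - ri, ci + ri) for ci, ri in zip(self.c, r)]

# a 2-D state-estimate set inflated by a bounded-disturbance set
state = Zonotope([1.0, 0.0], [[0.5, 0.0], [0.0, 0.2]])
noise = Zonotope([0.0, 0.0], [[0.1, 0.1]])
hull = state.minkowski_sum(noise).interval_hull()
print(hull)
```

In a set-membership estimator, such sums (plus a generator-reduction step to cap the generator count) propagate the estimation error set forward at each sampling instant.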

  14. Occam's razor and petascale visual data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E Wes [Lawrence Berkeley National Laboratory (LBNL); Johnson, Chris [University of Utah; Ahern, Sean [ORNL; Bell, J. [Lawrence Berkeley National Laboratory (LBNL); Bremer, Peer-Timo [Lawrence Livermore National Laboratory (LLNL); Childs, Hank [Lawrence Livermore National Laboratory (LLNL); Cormier-Michel, E [unknown; Day, M. [Lawrence Berkeley National Laboratory (LBNL); Deines, E. [University of California, Davis; Fogal, T. [University of Utah; Garth, Christoph [unknown; Geddes, C.G.R. [unknown; Hagen, H [unknown; Hamann, Bernd [unknown; Hansen, Charles [University of Utah; Jacobsen, Janet [Lawrence Berkeley National Laboratory (LBNL); Joy, Kenneth [University of California, Davis; Kruger, J. [University of Utah; Meredith, Jeremy S [ORNL; Messmer, P [unknown; Ostrouchov, George [ORNL; Pascucci, Valerio [Lawrence Livermore National Laboratory (LLNL); Potter, K. [University of Utah; Prabhat, [Lawrence Berkeley National Laboratory (LBNL); Pugmire, Dave [ORNL; Ruebel, O [unknown; Sanderson, Allen [University of Utah; Silva, C. [University of Utah; Ushizima, D. [Lawrence Berkeley National Laboratory (LBNL); Weber, G. [Lawrence Berkeley National Laboratory (LBNL); Whitlock, B. [Lawrence Livermore National Laboratory (LLNL); Wu, K. [Lawrence Berkeley National Laboratory (LBNL)

    2009-01-01

    One of the central challenges facing visualization research is how to effectively enable knowledge discovery. An effective approach will likely combine application architectures that are capable of running on today's largest platforms to address the challenges posed by large data with visual data analysis techniques that help find, represent, and effectively convey scientifically interesting features and phenomena.

  15. Dose uncertainties for large solar particle events: Input spectra variability and human geometry approximations

    International Nuclear Information System (INIS)

    Townsend, Lawrence W.; Zapp, E. Neal

    1999-01-01

    The true uncertainties in estimates of body organ absorbed dose and dose equivalent, from exposures of interplanetary astronauts to large solar particle events (SPEs), are essentially unknown. Variations in models used to parameterize SPE proton spectra for input into space radiation transport and shielding computer codes can result in uncertainty about the reliability of dose predictions for these events. Also, different radiation transport codes and their input databases can yield significant differences in dose predictions, even for the same input spectra. Different results may also be obtained for the same input spectra and transport codes if different spacecraft and body self-shielding distributions are assumed. Heretofore there have been no systematic investigations of the variations in dose and dose equivalent resulting from these assumptions and models. In this work we present a study of the variability in predictions of organ dose and dose equivalent arising from the use of different parameters to represent the same incident SPE proton data and from the use of equivalent sphere approximations to represent human body geometry. The study uses the BRYNTRN space radiation transport code to calculate dose and dose equivalent for the skin, ocular lens and bone marrow using the October 1989 SPE as a model event. Comparisons of organ dose and dose equivalent, obtained with a realistic human geometry model and with the oft-used equivalent sphere approximation, are also made. It is demonstrated that variations of 30-40% in organ dose and dose equivalent are obtained for slight variations in spectral fitting parameters obtained when various data points are included or excluded from the fitting procedure. It is further demonstrated that extrapolating spectra from low energy (≤30 MeV) proton fluence measurements, rather than using fluence data extending out to 100 MeV results in dose and dose equivalent predictions that are underestimated by factors as large as 2
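The sensitivity to spectral fitting described above can be illustrated with a simple exponential fluence model F(>E) = F0 * exp(-E/E0), fit once to low-energy points only and once to the full range. The data values below are invented and are not the October 1989 event spectrum:

```python
import math

def fit_exponential(points):
    """Least-squares fit of F(>E) = F0 * exp(-E / E0) in log space.
    `points` is a list of (energy_MeV, integral_fluence) pairs."""
    xs = [e for e, _ in points]
    ys = [math.log(f) for _, f in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -1.0 / slope   # F0, E0

def fluence(f0, e0, energy):
    return f0 * math.exp(-energy / e0)

# illustrative integral fluence data, harder at high energy than a pure exponential
data = [(10, 1e9), (30, 3e8), (60, 8e7), (100, 3e7)]
f0_low, e0_low = fit_exponential([p for p in data if p[0] <= 30])   # <=30 MeV only
f0_all, e0_all = fit_exponential(data)                              # out to 100 MeV

# extrapolating the low-energy-only fit underestimates the >100 MeV fluence
print(fluence(f0_low, e0_low, 100), fluence(f0_all, e0_all, 100))
```

This mirrors the abstract's point: including or excluding data points changes the fitted spectral parameters, and extrapolating from <=30 MeV measurements alone can underestimate the high-energy fluence (and hence dose) severalfold.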

  16. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead the modeling-based management decisions, if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.
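The backbone of such a formal likelihood is scoring lag-1-decorrelated, variance-scaled residuals. The sketch below uses the Gaussian special case of an SEP error model (skew and kurtosis parameters set to neutral values), with invented parameter values rather than anything calibrated to SWAT output:

```python
import math

def log_likelihood(obs, sim, phi=0.3, sigma0=0.05, sigma1=0.1):
    """Log-likelihood of a simulation under a lag-1 autocorrelated,
    heteroscedastic Gaussian error model (the Gaussian special case of
    an SEP error model; all parameter values are illustrative)."""
    resid = [o - s for o, s in zip(obs, sim)]
    # heteroscedasticity: error standard deviation grows with simulated magnitude
    sig = [sigma0 + sigma1 * abs(s) for s in sim]
    # remove lag-1 autocorrelation, then score the standardized innovations
    innov = [resid[0]] + [resid[t] - phi * resid[t - 1] for t in range(1, len(resid))]
    return sum(-math.log(s * math.sqrt(2 * math.pi)) - 0.5 * (e / s) ** 2
               for e, s in zip(innov, sig))

obs = [1.0, 1.2, 0.9, 1.1, 1.0]
print(log_likelihood(obs, sim=[1.0, 1.1, 1.0, 1.0, 1.0]))  # close simulation
print(log_likelihood(obs, sim=[2.0, 2.0, 2.0, 2.0, 2.0]))  # poor simulation
```

In the full approach this likelihood would be evaluated inside the MCMC sampler for each candidate parameter set and input assumption, with the SEP shape parameters relaxing the Gaussian assumption.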

  17. ViA: a perceptual visualization assistant

    Science.gov (United States)

    Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.

    2000-05-01

    This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically-generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
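The mapping search can be caricatured as combinatorial optimization over assignment permutations. The evaluation weights and exhaustive search below are invented stand-ins for ViA's perceptual evaluation engines and mixed-initiative guided search:

```python
from itertools import permutations

# hypothetical evaluation weights: how well each visual feature encodes each
# data attribute, on a 0-1 perceptual-salience scale (numbers are invented)
weights = {
    ("temperature", "hue"): 0.9, ("temperature", "size"): 0.4, ("temperature", "orientation"): 0.2,
    ("pressure", "hue"): 0.5, ("pressure", "size"): 0.8, ("pressure", "orientation"): 0.3,
    ("wind_dir", "hue"): 0.1, ("wind_dir", "size"): 0.2, ("wind_dir", "orientation"): 0.9,
}

def best_mapping(attrs, features):
    """Exhaustive search for the attribute -> feature assignment with the
    highest total evaluation weight (ViA prunes this space intelligently)."""
    best, best_score = None, -1.0
    for perm in permutations(features):
        score = sum(weights[(a, f)] for a, f in zip(attrs, perm))
        if score > best_score:
            best, best_score = list(zip(attrs, perm)), score
    return best, best_score

mapping, score = best_mapping(["temperature", "pressure", "wind_dir"],
                              ["hue", "size", "orientation"])
print(mapping, score)
```

Exhaustive enumeration is factorial in the number of attributes, which is why a real assistant needs guided search plus per-mapping improvement hints rather than brute force.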

  18. Automated visual direction: LDRD 38623 final report.

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J.

    2005-01-01

    Mobile manipulator systems used by emergency response operators consist of an articulated robot arm, a remotely driven base, a collection of cameras, and a remote communications link. Typically the system is completely teleoperated, with the operator using live video feedback to monitor and assess the environment, plan task activities, and to conduct the operations via remote control input devices. The capabilities of these systems are limited, and operators rarely attempt sophisticated operations such as retrieving and utilizing tools, deploying sensors, or building up world models. This project has focused on methods to utilize this video information to enable monitored autonomous behaviors for the mobile manipulator system, with the goal of improving the overall effectiveness of the human/robot system. Work includes visual servoing, visual targeting, utilization of embedded video in 3-D models, and improved methods of camera utilization and calibration.

  19. Multivariate spatiotemporal visualizations for mobile devices in Flyover Country

    Science.gov (United States)

    Loeffler, S.; Thorn, R.; Myrbo, A.; Roth, R.; Goring, S. J.; Williams, J.

    2017-12-01

    Visualizing and interacting with complex multivariate and spatiotemporal datasets on mobile devices is challenging due to their smaller screens, reduced processing power, and limited data connectivity. Pollen data require visualizing pollen assemblages spatially, temporally, and across multiple taxa to understand plant community dynamics through time. Drawing from cartography, information visualization, and paleoecology, we have created new mobile-first visualization techniques that represent multiple taxa across many sites and enable user interaction. Using pollen datasets from the Neotoma Paleoecology Database as a case study, the visualization techniques allow ecological patterns and trends to be quickly understood on a mobile device compared to traditional pollen diagrams and maps. This flexible visualization system can be used for datasets beyond pollen, with the only requirements being point-based localities and multiple variables changing through time or depth.

  20. Synchronous activity in cat visual cortex encodes collinear and cocircular contours.

    Science.gov (United States)

    Samonds, Jason M; Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2006-04-01

    We explored how contour information in primary visual cortex might be embedded in the simultaneous activity of multiple cells recorded with a 100-electrode array. Synchronous activity in cat visual cortex was more selective and predictable in discriminating between drifting grating and concentric ring stimuli than changes in firing rate. Synchrony was found even between cells with wholly different orientation preferences when their receptive fields were circularly aligned, and membership in synchronous groups was orientation and curvature dependent. The existence of synchrony between cocircular cells reinforces its role as a general mechanism for contour integration and shape detection as predicted by association field concepts. Our data suggest that cortical synchrony results from common and synchronous input from earlier visual areas and that it could serve to shape extrastriate response selectivity.
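A simple way to quantify such synchrony is to count near-coincident spikes between two trains and normalize by the chance level implied by the firing rates alone. The index below is a generic sketch on synthetic trains, not the analysis applied to the 100-electrode recordings:

```python
import random

random.seed(2)

def synchrony_index(train_a, train_b, window=2):
    """For each spike in train_a, check for a train_b spike within +/-window
    bins; normalize the coincidence rate by the chance level expected from
    train_b's firing rate alone (an index near 1 means no excess synchrony)."""
    n = len(train_a)
    spikes_a = [t for t, a in enumerate(train_a) if a]
    coincident = sum(1 for t in spikes_a
                     if any(train_b[max(0, t - window):t + window + 1]))
    rate_b = sum(train_b) / n
    # chance that a window of 2*window+1 bins contains at least one b spike
    chance = 1 - (1 - rate_b) ** (2 * window + 1)
    return (coincident / len(spikes_a)) / chance

a = [1 if random.random() < 0.1 else 0 for _ in range(2000)]
locked = [x if random.random() < 0.8 else 0 for x in a]              # locked to a
independent = [1 if random.random() < 0.1 else 0 for _ in range(2000)]
print(synchrony_index(a, locked), synchrony_index(a, independent))
```

Rate normalization matters here for the same reason it does in the study: excess coincidences, not firing-rate changes, are what carry the contour signal.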