WorldWideScience

Sample records for visual hierarchical stimuli

  1. A comparative analysis of global and local processing of hierarchical visual stimuli in young children (Homo sapiens) and monkeys (Cebus apella).

    Science.gov (United States)

    De Lillo, Carlo; Spinozzi, Giovanna; Truppa, Valentina; Naylor, Donna M

    2005-05-01

    Results obtained with preschool children (Homo sapiens) were compared with results previously obtained from capuchin monkeys (Cebus apella) in matching-to-sample tasks featuring hierarchical visual stimuli. In Experiment 1, monkeys, in contrast with children, showed an advantage in matching the stimuli on the basis of their local features. These results were replicated in a 2nd experiment in which control trials enabled the authors to rule out that children used spurious cues to solve the matching task. In a 3rd experiment featuring conditions in which the density of the stimuli was manipulated, monkeys' accuracy in the processing of the global shape of the stimuli was negatively affected by the separation of the local elements, whereas children's performance was robust across testing conditions. Children's response latencies revealed a global precedence in the 2nd and 3rd experiments. These results show differences in the processing of hierarchical stimuli by humans and monkeys that emerge early during childhood. © 2005 APA, all rights reserved.

  2. The influence of visual and phonological features on the hemispheric processing of hierarchical Navon letters.

    Science.gov (United States)

    Aiello, Marilena; Merola, Sheila; Lasaponara, Stefano; Pinto, Mario; Tomaiuolo, Francesco; Doricchi, Fabrizio

    2018-01-31

    The possibility of allocating attentional resources to the "global" shape or to the "local" details of pictorial stimuli helps visual processing. Investigations with hierarchical Navon letters, which are large "global" letters made up of small "local" ones, consistently demonstrate a right hemisphere advantage for global processing and a left hemisphere advantage for local processing. Here we investigated how the visual and phonological features of the global and local components of Navon letters influence these hemispheric advantages. In a first study in healthy participants, we contrasted the hemispheric processing of hierarchical letters with global and local items competing for response selection, to the processing of hierarchical letters in which a letter, a false-letter conveying no phonological information, or a geometrical shape presented at the unattended level did not compete for response selection. In a second study, we investigated the hemispheric processing of hierarchical stimuli in which global and local letters were both visually and phonologically congruent (e.g. large uppercase G made of smaller uppercase G), visually incongruent and phonologically congruent (e.g. large uppercase G made of small lowercase g) or visually incongruent and phonologically incongruent (e.g. large uppercase G made of small lowercase or uppercase M). In a third study, we administered the same tasks to a right brain damaged patient with a lesion involving pre-striate areas engaged by global processing. The results of the first two experiments showed that the global abilities of the left hemisphere are limited because of its strong susceptibility to interference from local letters even when these are irrelevant to the task. Phonological features played a crucial role in this interference, because the interference was entirely maintained even when letters at the global and local level were presented in different uppercase vs. lowercase formats. In contrast, when local features …

  3. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    Science.gov (United States)

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing whether flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain-specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Visual hierarchical processing and lateralization of cognitive functions through domestic chicks' eyes.

    Directory of Open Access Journals (Sweden)

    Cinzia Chiandetti

    Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of global forms precedes the analysis of local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization comparable to those of humans. Here we trained domestic chicks in a concurrent discrimination task involving hierarchical stimuli. We then evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests using a monocular occlusion technique. A local precedence emerged in both the left and right hemispheres, adding further evidence in favour of analytic processing in non-human animals.

  5. Holistic face categorization in higher-level cortical visual areas of the normal and prosopagnosic brain: towards a non-hierarchical view of face perception

    Directory of Open Access Journals (Sweden)

    Bruno Rossion

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's face-like paintings). Compared to the same inverted visual stimuli, which are not categorized as faces, these stimuli activated the right middle fusiform gyrus (fusiform face area, FFA) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no occipital face area, OFA). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient (PS) whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex.

  6. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    Science.gov (United States)

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to the incidental improvement of visual search performance when invariant associations between target locations and distractor spatial configurations are learned. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical than instance-based.

  7. Effect of Size Change and Brightness Change of Visual Stimuli on Loudness Perception and Pitch Perception of Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Syouya Tanabe

    2011-10-01

    People obtain much information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, numerous studies of phonological perception and sound localization have been conducted. This study examined the effect of visual stimuli on the perceived loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to make four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged changes in the loudness or pitch of the auditory stimuli while also judging the direction of the size change or the kind of figure presented in the visual stimuli, so they could not ignore the visual stimuli while judging the auditory stimuli. As a result, loudness and pitch perception were promoted significantly around their difference limen when the image was getting bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.

  8. Bio-inspired fabrication of stimuli-responsive photonic crystals with hierarchical structures and their applications

    International Nuclear Information System (INIS)

    Lu, Tao; Peng, Wenhong; Zhu, Shenmin; Zhang, Di

    2016-01-01

    When the constitutive materials of photonic crystals (PCs) are stimuli-responsive, the resultant PCs exhibit optical properties that can be tuned by the stimuli. This can be exploited for promising applications in colour displays, biological and chemical sensors, inks and paints, and many optically active components. However, the preparation of the required photonic structures is the first issue to be solved. In the past two decades, approaches such as microfabrication and self-assembly have been developed to incorporate stimuli-responsive materials into existing periodic structures for the fabrication of PCs, either as the initial building blocks or as the surrounding matrix. Generally, the materials that respond to thermal, pH, chemical, optical, electrical, or magnetic stimuli are either soft or prone to aggregation, which is why the manufacture of three-dimensional hierarchical photonic structures with responsive properties is a great challenge. Recently, inspired by biological PCs in nature which exhibit both flexible and responsive properties, researchers have developed various methods to synthesize metals and metal oxides with hierarchical structures by using a biological PC as the template. This review will focus on the recent developments in this field. In particular, PCs with biological hierarchical structures that can be tuned by external stimuli have recently been successfully fabricated. These findings offer innovative insights into the design of responsive PCs and should be of great importance for future applications of these materials. (topical review)

  9. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called "filled-duration illusion." In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2–5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  10. Instructed fear stimuli bias visual attention

    NARCIS (Netherlands)

    Deltomme, Berre; Mertens, G.; Tibboel, Helen; Braem, Senne

    We investigated whether stimuli merely instructed to be fear-relevant can bias visual attention, even when the fear relation was never experienced before. Participants performed a dot-probe task with pictures of naturally fear-relevant (snake or spider) or -irrelevant (bird or butterfly) stimuli.

  11. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies have noted the differences in ERP components and system efficiency caused by shifts of visual and auditory onset. Here, we aim to study the effect of the temporal congruity of auditory and visual stimulus onsets on a bimodal brain-computer interface (BCI) speller. We designed five combined visual and auditory paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned according to visual and auditory stimulus onsets, respectively. ERPs at the Fz, Cz, and PO7 channels were compared statistically across conditions for both visual-aligned and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to assess the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on visual stimulus onset. Auditory stimulus onset influenced mainly early component accuracies, whereas visual stimulus onset determined later component accuracies; the latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine the way to optimize bimodal BCI applications.
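The alignment step described in this record, cutting ERP epochs relative to either the visual or the auditory onset, can be sketched as follows. The data layout, sampling rate, and 33 ms delay below are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def extract_epochs(eeg, onsets, fs, tmin=-0.1, tmax=0.8):
    """Cut fixed-length epochs around stimulus onsets.

    Hypothetical data layout: eeg is a 1-D single-channel trace,
    onsets are sample indices into that trace.
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = [eeg[o - pre:o + post] for o in onsets
              if o - pre >= 0 and o + post <= len(eeg)]
    return np.stack(epochs)

# Simulated single-channel trace, one visual stimulus per second
fs = 250  # Hz (illustrative sampling rate)
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1e-6, fs * 60)
visual_onsets = np.arange(fs, fs * 55, fs)
audio_onsets = visual_onsets + int(0.033 * fs)  # e.g. +33 ms audio delay

# Averaging epochs aligned to each onset type yields the two ERP views
erp_visual = extract_epochs(eeg, visual_onsets, fs).mean(axis=0)
erp_audio = extract_epochs(eeg, audio_onsets, fs).mean(axis=0)
```

Aligning the same trace to the two onset streams is what lets visual-aligned and audio-aligned components be compared across delay conditions.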

  12. Hierarchically organized layout for visualization of biochemical pathways.

    Science.gov (United States)

    Tsay, Jyh-Jong; Wu, Bo-Liang; Jeng, Yu-Sen

    2010-01-01

    Many complex pathways are described as hierarchical structures in which a pathway is recursively partitioned into several sub-pathways and organized hierarchically as a tree. The hierarchical structure provides a natural way to visualize the global structure of a complex pathway. However, none of the previous research on pathway visualization explores the hierarchical structures provided by many complex pathways. In this paper, we aim to develop algorithms that can take advantage of hierarchical structures and give layouts that reveal the global as well as the local structures of pathways. We present a new hierarchically organized layout algorithm to produce layouts for hierarchically organized pathways. Our algorithm first decomposes a complex pathway into sub-pathway groups along the hierarchical organization, and then partitions each sub-pathway group into basic components. It then applies conventional layout algorithms, such as hierarchical layout and force-directed layout, to compute the layout of each basic component. Finally, the component layouts are joined to form the final layout of the pathway. Our main contribution is the development of algorithms for decomposing pathways and joining layouts. Experiments show that our algorithm is able to give comprehensible visualizations of pathways with hierarchies and cycles as well as complex structures. It clearly renders the global component structures as well as the local structure within each component. In addition, it runs very fast and gives better visualizations for many examples from previous related research. © 2009 Elsevier B.V. All rights reserved.
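As a rough illustration of the decompose, lay out, and join strategy just described, the sketch below stands in a trivial line placement for the conventional per-component algorithms (hierarchical or force-directed layout) and joins component layouts with vertical offsets. All function names and the toy hierarchy are hypothetical, not taken from the paper:

```python
def layout_component(nodes, y):
    """Place a component's nodes on one horizontal line (a stand-in for a
    real hierarchical or force-directed layout of the component)."""
    return {n: (x, y) for x, n in enumerate(nodes)}

def join_layouts(component_layouts, gap=2.0):
    """Join per-component layouts into one global layout by stacking them
    with a vertical offset between components."""
    joined, y_off = {}, 0.0
    for comp in component_layouts:
        for node, (x, y) in comp.items():
            joined[node] = (x, y + y_off)
        y_off += gap
    return joined

# A toy hierarchy: a pathway decomposed into two sub-pathway components
subpathways = [["A", "B", "C"], ["D", "E"]]
layouts = [layout_component(nodes, y=0.0) for nodes in subpathways]
global_layout = join_layouts(layouts)
```

The real contribution of the paper lies in how the decomposition and joining respect the pathway's tree organization; the point here is only the overall control flow.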

  13. Perceived duration of visual and tactile stimuli depends on perceived speed

    Directory of Open Access Journals (Sweden)

    Alice eTomassini

    2011-09-01

    It is known that the perceived duration of visual stimuli is strongly influenced by speed: faster moving stimuli appear to last longer. To test whether this is a general property of sensory systems, we asked participants to reproduce the duration of visual and tactile gratings, and of visuo-tactile gratings, moving at a variable speed (3.5–15 cm/s) for three different durations (400, 600 and 800 ms). For both modalities, the apparent duration of the stimulus increased strongly with stimulus speed, more so for tactile than for visual stimuli. In addition, visual stimuli were perceived to last approximately 200 ms longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, as the Bayesian account predicts, but the bimodal precision of the reproduction did not show the theoretical improvement. A cross-modal speed-matching task revealed that visual stimuli were perceived to move faster than tactile stimuli. To test whether the large difference in the perceived duration of visual and tactile stimuli resulted from the difference in their perceived speed, we repeated the time reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate, the difference in apparent duration. These results show that for both vision and touch, perceived duration depends on speed, pointing to common strategies of time perception.
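The "Bayesian account" referred to in this abstract predicts that the bimodal estimate is a precision-weighted average of the unimodal estimates, with a variance lower than either unimodal variance (the theoretical precision improvement that the reproduction data did not show). A minimal numeric sketch, with purely illustrative numbers:

```python
def fuse(est_v, var_v, est_t, var_t):
    """Maximum-likelihood fusion of two unimodal duration estimates:
    weights are inverse variances (precisions)."""
    w_v, w_t = 1.0 / var_v, 1.0 / var_t
    est = (w_v * est_v + w_t * est_t) / (w_v + w_t)
    # Predicted bimodal variance; always below both unimodal variances
    var = 1.0 / (w_v + w_t)
    return est, var

# Illustrative numbers (ms): vision reproduces ~800 ms, touch ~600 ms,
# both with 100 ms standard deviation
est, var = fuse(est_v=800.0, var_v=100.0**2, est_t=600.0, var_t=100.0**2)
# With equal reliabilities the bimodal estimate falls midway between
# the unimodal ones, matching the qualitative result reported above.
```

The study's finding that the bimodal estimate lay between the unimodal ones, while bimodal precision did not improve as `var` predicts, is exactly the partial match to this model described in the abstract.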

  14. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. The study also examined whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey it without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or a shared native language of speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. Both genders recognized the emotional stimuli better from visual than from auditory stimuli. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.

  15. Endogenous sequential cortical activity evoked by visual stimuli.

    Science.gov (United States)

    Carrillo-Reid, Luis; Miller, Jae-Eun Kang; Hamm, Jordan P; Jackson, Jesse; Yuste, Rafael

    2015-06-10

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. Copyright © 2015 Carrillo-Reid et al.

  16. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  17. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual Stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined haptic and audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a …

  18. Feature-Based Visual Short-Term Memory Is Widely Distributed and Hierarchically Organized.

    Science.gov (United States)

    Dotson, Nicholas M; Hoffman, Steven J; Goodell, Baldwin; Gray, Charles M

    2018-06-15

    Feature-based visual short-term memory is known to engage both sensory and association cortices. However, the extent of the participating circuit and the neural mechanisms underlying memory maintenance are still a matter of vigorous debate. To address these questions, we recorded neuronal activity from 42 cortical areas in monkeys performing a feature-based visual short-term memory task and an interleaved fixation task. We find that task-dependent differences in firing rates are widely distributed throughout the cortex, while stimulus-specific changes in firing rates are more restricted and hierarchically organized. We also show that microsaccades during the memory delay encode the stimuli held in memory and that units modulated by microsaccades are more likely to exhibit stimulus specificity, suggesting that eye movements contribute to visual short-term memory processes. These results support a framework in which most cortical areas, within a modality, contribute to mnemonic representations at timescales that increase along the cortical hierarchy. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Visual and auditory stimuli associated with swallowing. An fMRI study

    International Nuclear Information System (INIS)

    Kawai, Takeshi; Watanabe, Yutaka; Tonogi, Morio; Yamane, Gen-yuki; Abe, Shinichi; Yamada, Yoshiaki; Callan, Akiko

    2009-01-01

    We focused on brain areas activated by audiovisual stimuli related to swallowing motions. In this study, three kinds of stimuli related to human swallowing movement (auditory stimuli alone, visual stimuli alone, or audiovisual stimuli) were presented to the subjects, and activated brain areas were measured using functional MRI (fMRI) and analyzed. When auditory stimuli alone were presented, the supplementary motor area was activated. When visual stimuli alone were presented, the premotor and primary motor areas of the left and right hemispheres and prefrontal area of the left hemisphere were activated. When audiovisual stimuli were presented, the prefrontal and premotor areas of the left and right hemispheres were activated. Activation of Broca's area, which would have been characteristic of mirror neuron system activation on presentation of motion images, was not observed; however, activation of brain areas related to swallowing motion programming and performance was verified for auditory, visual and audiovisual stimuli related to swallowing motion. These results suggest that audiovisual stimuli related to swallowing motion could be applied to the treatment of patients with dysphagia. (author)

  20. Positive mood broadens visual attention to positive stimuli.

    Science.gov (United States)

    Wadlinger, Heather A; Isaacowitz, Derek M

    2006-03-01

    In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.

  1. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    Science.gov (United States)

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  2. Multisensory stimuli improve relative localisation judgments compared to unisensory auditory or visual stimuli

    OpenAIRE

    Bizley, Jennifer; Wood, Katherine; Freeman, Laura

    2018-01-01

    Observers performed a relative localisation task in which they reported whether the second of two sequentially presented signals occurred to the left or right of the first. Stimuli were detectability-matched auditory, visual, or auditory-visual signals and the goal was to compare changes in performance with eccentricity across modalities. Visual performance was superior to auditory at the midline, but inferior in the periphery, while auditory-visual performance exceeded both at all locations....

  3. The role of supramolecular chemistry in stimuli responsive and hierarchically structured functional organic materials

    NARCIS (Netherlands)

    Schenning, A.P.H.J.; Bastiaansen, C.W.M.; Broer, D.J.; Debije, M.G.

    2014-01-01

    ABSTRACT: In this review, we show the important role of supramolecular chemistry in the fabrication of stimuli responsive and hierarchically structured liquid crystalline polymer networks. Supramolecular interactions can be used to create three dimensional order or as molecular triggers in materials

  4. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Romei, Vincenzo

    2011-09-01

    There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.

  5. Auditory-visual aversive stimuli modulate the conscious experience of fear.

    Science.gov (United States)

    Taffou, Marine; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2013-01-01

    In a natural environment, affective information is perceived via multiple senses, mostly audition and vision. However, the impact of multisensory information on affect remains relatively unexplored. In this study, we investigated whether the auditory-visual presentation of aversive stimuli influences the experience of fear. We used the advantages of virtual reality to manipulate multisensory presentation and to display potentially fearful dog stimuli embedded in a natural context. We manipulated the affective reactions evoked by the dog stimuli by recruiting two groups of participants: dog-fearful and non-fearful participants. Sensitivity to dog fear was assessed psychometrically by a questionnaire and also at behavioral and subjective levels using a Behavioral Avoidance Test (BAT). Participants navigated in virtual environments, in which they encountered virtual dog stimuli presented through the auditory channel, the visual channel or both. They were asked to report their fear using Subjective Units of Distress. We compared fear for unimodal (visual or auditory) and bimodal (auditory-visual) dog stimuli. Dog-fearful as well as non-fearful participants reported more fear in response to bimodal audiovisual than to unimodal presentation of dog stimuli. These results suggest that fear is more intense when affective information is processed via multiple sensory pathways, which might be due to cross-modal potentiation. Our findings have implications for virtual reality-based therapy of phobias: therapies could be refined and improved by manipulating the multisensory presentation of the feared situations.

  6. Hierarchical sets: analyzing pangenome structure through scalable set visualizations

    Science.gov (United States)

    2017-01-01

    Abstract Motivation: The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential of scalable set visualization as a tool for pangenome analysis. Results: We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in the pangenome context, show the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. Availability and Implementation: The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https://cran.r-project.org/web/packages/hierarchicalSets). Contact: thomasp85@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28130242
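
The set arithmetic at the heart of this approach can be sketched without the R package. The genomes and gene families below are toy data, not from the paper, and the greedy pairing is a simplification of the published clustering; it merges the pair of clusters sharing the largest core, tracking core (intersection) and pangenome (union) size at each node of the hierarchy:

```python
from itertools import combinations

# Toy pangenome: each genome is a set of gene families (names invented).
genomes = {
    "g1": {"a", "b", "c", "d"},
    "g2": {"a", "b", "c", "e"},
    "g3": {"a", "b", "f"},
}

def core(members):
    """Genes present in every member genome (the core genome)."""
    return set.intersection(*(genomes[m] for m in members))

def pan(members):
    """Genes present in at least one member genome (the pangenome)."""
    return set.union(*(genomes[m] for m in members))

# Greedy agglomeration: repeatedly merge the cluster pair with the
# largest shared core, recording core and pangenome size per node.
clusters = [frozenset([g]) for g in genomes]
tree = []
while len(clusters) > 1:
    a, b = max(combinations(clusters, 2),
               key=lambda p: len(core(p[0] | p[1])))
    clusters = [c for c in clusters if c not in (a, b)] + [a | b]
    tree.append((sorted(a | b), len(core(a | b)), len(pan(a | b))))

for members, core_n, pan_n in tree:
    print(members, "core:", core_n, "pan:", pan_n)
```

On this toy data, g1 and g2 merge first (shared core of 3 genes), and adding g3 shrinks the core to 2 while the pangenome grows to 6, mirroring the core/pan evolution the dendrogram-plus-icicle plot displays.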

  7. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
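
The d' sensitivity measure from signal detection theory used in this study is the difference between the z-transformed hit rate and false-alarm rate; a minimal sketch, where the rates are illustrative and not taken from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate),
    with z the inverse CDF of the standard normal distribution."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates, e.g. a spatiotemporally reliable audiovisual
# condition vs. a common false-alarm baseline.
print(round(d_prime(0.85, 0.20), 3))
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity); higher hit rates at a fixed false-alarm rate raise d', which is why d' separates genuine discrimination from response bias.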

  8. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of one's home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. Implementation of WebVis is realized with Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java utilizes the collected information to produce a visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.

  9. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    Science.gov (United States)

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  10. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    Science.gov (United States)

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Brain response to visual sexual stimuli in homosexual pedophiles.

    Science.gov (United States)

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men.

  12. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  13. Age-Related Change in Shifting Attention between Global and Local Levels of Hierarchical Stimuli

    Science.gov (United States)

    Huizinga, Mariette; Burack, Jacob A.; Van der Molen, Maurits W.

    2010-01-01

    The focus of this study was the developmental pattern of the ability to shift attention between global and local levels of hierarchical stimuli. Children aged 7 years and 11 years and 21-year-old adults were administered a task (two experiments) that allowed for the examination of 1) the direction of attention to global or local stimulus levels;…

  14. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...

  15. United we sense, divided we fail: context-driven perception of ambiguous visual stimuli.

    NARCIS (Netherlands)

    Klink, P.C.; van Wezel, R.J.A.; van Ee, R.

    2012-01-01

    Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception

  17. Heightened attentional capture by visual food stimuli in Anorexia Nervosa

    NARCIS (Netherlands)

    Neimeijer, Renate A.M.; Roefs, Anne; de Jong, Peter J.

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent

  18. Gestalt perceptual organization of visual stimuli captures attention automatically: Electrophysiological evidence

    Directory of Open Access Journals (Sweden)

    Francesco Marini

    2016-08-01

    The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of such function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages.

  19. How stimuli presentation format affects visual attention and choice outcomes in choice experiments

    DEFF Research Database (Denmark)

    Orquin, Jacob Lund; Mueller Loose, Simone

    This study analyses visual attention and part-worth utilities in choice experiments across three different choice stimuli presentation formats. Visual attention and choice behaviour in discrete choice experiments are found to be strongly affected by stimuli presentation format. These results...

  20. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Gender differences in pre-attentive change detection for visual but not auditory stimuli.

    Science.gov (United States)

    Yang, Xiuxian; Yu, Yunmiao; Chen, Lu; Sun, Hailian; Qiao, Zhengxue; Qiu, Xiaohui; Zhang, Congpei; Wang, Lin; Zhu, Xiongzhao; He, Jincai; Zhao, Lun; Yang, Yanjie

    2016-01-01

    Despite ongoing debate about gender differences in pre-attention processes, little is known about gender effects on change detection for auditory and visual stimuli. We explored gender differences in change detection while processing duration information in auditory and visual modalities. We investigated pre-attentive processing of duration information using a deviant-standard reverse oddball paradigm (50 ms/150 ms) for auditory and visual mismatch negativity (aMMN and vMMN) in males and females (n=21/group). In the auditory modality, decrement and increment aMMN were observed at 150-250 ms after the stimulus onset, and there was no significant gender effect on MMN amplitudes in temporal or fronto-central areas. In contrast, in the visual modality, only increment vMMN was observed at 180-260 ms after the onset of stimulus, and it was higher in males than in females. No gender effect was found in change detection for auditory stimuli, but change detection was facilitated for visual stimuli in males. Gender effects should be considered in clinical studies of pre-attention for visual stimuli. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
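
Mismatch negativity (MMN) in oddball designs like this one is measured on a difference wave: the deviant-trial average minus the standard-trial average, with the negative peak in the post-stimulus window taken as the MMN amplitude. A minimal single-channel sketch with made-up epochs (values in µV, invented for illustration):

```python
# Made-up single-channel epochs: rows = trials, columns = time samples.
standard_trials = [[0.0, 1.0, 0.5, 0.0],
                   [0.2, 0.8, 0.7, 0.1]]
deviant_trials  = [[0.0, 0.6, -1.5, -0.2],
                   [0.1, 0.4, -1.9, 0.0]]

def average(trials):
    """Average across trials at each time sample (the ERP)."""
    n = len(trials)
    return [sum(col) / n for col in zip(*trials)]

# Difference wave: deviant ERP minus standard ERP; its most negative
# point in the window of interest is the MMN amplitude.
diff = [d - s for d, s in zip(average(deviant_trials),
                              average(standard_trials))]
print([round(x, 2) for x in diff])
print("MMN amplitude:", min(diff))
```

A gender comparison like the one reported would then contrast these amplitudes (e.g. at the sample of the negative peak) between male and female groups.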

  2. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    BACKGROUND: It is well-known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition, when arbitrary associations between visual and auditory stimuli must be learned. Studies tend to consider it an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn more effectively the arbitrary associations between visual and auditory novel stimuli. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learn more efficiently the arbitrary association between visual and auditory novel stimuli when the visual stimuli are explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  3. Fusion and rivalry are dependent on the perceptual meaning of visual stimuli.

    Science.gov (United States)

    Andrews, Timothy J; Lotto, R Beau

    2004-03-09

    We view the world with two eyes and yet are typically only aware of a single, coherent image. Arguably the simplest explanation for this is that the visual system unites the two monocular stimuli into a common stream that eventually leads to a single coherent sensation. However, this notion is inconsistent with the well-known phenomenon of rivalry: when physically different stimuli project to the same retinal location, the ensuing perception alternates between the two monocular views in space and time. Although fundamental for understanding the principles of binocular vision and visual awareness, the mechanisms underlying binocular rivalry remain controversial. Specifically, there is uncertainty about what determines whether monocular images undergo fusion or rivalry. By taking advantage of the perceptual phenomenon of color contrast, we show that physically identical monocular stimuli tend to rival, not fuse, when they signify different objects at the same location in visual space. Conversely, when physically different monocular stimuli are likely to represent the same object at the same location in space, fusion is more likely to result. The data suggest that what competes for visual awareness in the two eyes is not the physical similarity between images but the similarity in their perceptual/empirical meaning.

  4. Effects of inter- and intramodal selective attention to non-spatial visual stimuli: An event-related potential analysis.

    NARCIS (Netherlands)

    de Ruiter, M.B.; Kok, A.; van der Schoot, M.

    1998-01-01

    Event-related potentials (ERPs) were recorded to trains of rapidly presented auditory and visual stimuli. ERPs in conditions in which Ss attended to different features of visual stimuli were compared with ERPs to the same type of stimuli when Ss attended to different features of auditory stimuli,

  5. Visual question answering using hierarchical dynamic memory networks

    Science.gov (United States)

    Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei

    2018-04-01

    Visual Question Answering (VQA) is one of the most popular research fields in machine learning, aiming to let the computer learn to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, one of the best-performing algorithms to date. Additionally, we use bi-directional LSTMs, which retain more information from the question and image, to replace the original units, so that we can capture information from both past and future sentences. We then rebuild the hierarchical architecture for not only question attention but also visual attention. Moreover, we accelerate the algorithm via Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves the state of the art on the large COCO-QA dataset, compared with other methods.
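
The attention steps in architectures like this reduce to scoring one representation against a set of feature vectors and softmax-normalizing the scores into weights. A framework-free sketch with invented toy vectors (the actual model additionally uses learned projections, bi-directional LSTMs and batch normalization, none of which are shown here):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(query, features):
    """Weight feature vectors by dot-product similarity to a query,
    then return their weighted sum (one attention 'glimpse')."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in features]
    weights = softmax(scores)
    dim = len(features[0])
    return [sum(w * feat[i] for w, feat in zip(weights, features))
            for i in range(dim)]

# Invented toy data: a question vector attending over 3 image regions.
q = [1.0, 0.0]
regions = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print([round(x, 3) for x in attend(q, regions)])
```

The glimpse is dominated by the region most similar to the query; stacking such glimpses over question words and image regions is the co-attention idea the hierarchical architecture builds on.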

  6. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden-onset events in the auditory modality. Relatively loud auditory white-noise bursts were presented to subjects at random, long inter-trial intervals. The noise bursts were either presented alone or paired with a visual stimulus at a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onsets can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  7. Brain activation by visual erotic stimuli in healthy middle aged males.

    Science.gov (United States)

    Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H

    2006-01-01

    The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle-aged males. Ten heterosexual, right-handed males with normal sexual function were entered into the present study (mean age 52 years, range 46-55). All potential subjects were screened in a 1-h interview and were encouraged to fill out questionnaires including the Brief Male Sexual Function Inventory. All subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional brain magnetic resonance imaging (fMRI) in male volunteers while a film alternating erotic and nonerotic segments was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, hypothalamus and thalamus were not activated. We suggest that the nonactivation of hypothalamus and thalamus in middle-aged males may be responsible for the lesser physiological arousal in response to the erotic visual stimuli.

  8. Hierarchical organization of brain functional networks during visual tasks.

    Science.gov (United States)

    Zhuo, Zhao; Cai, Shi-Min; Fu, Zhong-Qian; Zhang, Jie

    2011-09-01

    The functional network of the brain is known to demonstrate modular structure over different hierarchical scales. In this paper, we systematically investigated the hierarchical modular organizations of the brain functional networks that are derived from the extent of phase synchronization among high-resolution EEG time series during a visual task. In particular, we compared the modular structure of the functional network from EEG channels with that of the anatomical parcellation of the brain cortex. Our results show that the modular architectures of brain functional networks correspond well to those from the anatomical structures over different levels of hierarchy. Most importantly, we find that the consistency between the modular structures of the functional network and the anatomical network becomes more pronounced in the visual, sensory, visual-temporal, and motor cortices during the visual task, which implies that the strong modularity in these areas forms the functional basis for the visual task. The structure-function relationship further reveals that the phase synchronization of EEG time series in the same anatomical group is much stronger than that of EEG time series from different anatomical groups during the task, and that the hierarchical organization of the functional brain network may be a consequence of functional segmentation of the brain cortex.
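    The phase-synchronization step described above can be sketched with the phase-locking value (PLV), a standard measure of phase synchronization between channel pairs; the abstract does not name its exact index, so this is an illustrative reconstruction, not the paper's method:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        """Phase-locking value between two equal-length signals (1 = perfect sync)."""
        phi_x = np.angle(hilbert(x))  # instantaneous phase via the analytic signal
        phi_y = np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

    def synchronization_matrix(signals):
        """Pairwise PLV matrix for an (n_channels, n_samples) array of traces."""
        n = signals.shape[0]
        plv = np.eye(n)
        for i in range(n):
            for j in range(i + 1, n):
                plv[i, j] = plv[j, i] = phase_locking_value(signals[i], signals[j])
        return plv

    # Toy check: a constant phase lag keeps PLV high; a drifting phase lowers it.
    t = np.linspace(0, 1, 500)
    sigs = np.vstack([np.sin(2 * np.pi * 10 * t),
                      np.sin(2 * np.pi * 10 * t + 0.5),         # constant lag
                      np.sin(2 * np.pi * 10 * t + 40 * t**2)])  # drifting phase
    m = synchronization_matrix(sigs)
    ```

    A matrix like this can then be thresholded into a weighted network whose detected modules are compared against the anatomical parcellation, as the study does.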

  9. Dynamic Stimuli And Active Processing In Human Visual Perception

    Science.gov (United States)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have treated processing as a passive, literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation, and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  10. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  11. A Hierarchical Visualization Analysis Model of Power Big Data

    Science.gov (United States)

    Li, Yongjie; Wang, Zheng; Hao, Yang

    2018-01-01

    Based on the idea of integrating VR scenes with power big data analysis, a hierarchical visualization analysis model of power big data is proposed, in which levels are designed to target different abstraction modules such as transaction, engine, computation, control, and storage. The conventionally separate modules of power data storage, data mining and analysis, and data visualization are integrated into one platform by this model. It provides a visual analysis solution for power big data.

  12. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark eLaing

    2015-10-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we use amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  13. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    Science.gov (United States)

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing the user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% higher than the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
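    A psychophysiological interaction (PPI) analysis, as used in this study, tests whether the coupling between a seed region and a target region changes with task condition: the interaction regressor is the product of the (demeaned) seed timecourse and a task contrast, entered into a GLM alongside the main effects. A minimal sketch with synthetic data standing in for real BOLD timecourses (all signals and coefficients here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_scans = 200

    seed = rng.standard_normal(n_scans)          # illustrative seed-region timecourse
    task = np.repeat([1.0, -1.0], n_scans // 2)  # AV-incongruent (+1) vs AV-congruent (-1)

    # PPI term: demeaned seed signal multiplied by the task contrast
    ppi = (seed - seed.mean()) * task

    # Synthetic target region whose coupling with the seed depends on the task
    target = 0.5 * task + 0.8 * seed + 1.2 * ppi + 0.1 * rng.standard_normal(n_scans)

    # GLM: target ~ intercept + task + seed + PPI
    X = np.column_stack([np.ones(n_scans), task, seed, ppi])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    # beta[3] estimates how much the task condition modulates seed-target connectivity
    ```

    A reliably nonzero interaction coefficient is the kind of evidence behind the increased fronto-temporal connectivity reported for AV incongruent versus congruent trials.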

  15. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  16. Precuneus-prefrontal activity during awareness of visual verbal stimuli

    DEFF Research Database (Denmark)

    Kjaer, T W; Nowak, M; Kjær, Klaus Wilbrandt

    2001-01-01

    Awareness is a personal experience, which is only accessible to the rest of world through interpretation. We set out to identify a neural correlate of visual awareness, using brief subliminal and supraliminal verbal stimuli while measuring cerebral blood flow distribution with H(2)(15)O PET. Awar...

  17. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    Science.gov (United States)

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  18. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  19. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  20. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifiers (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
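    The "visual tree" step — grouping visually similar classes by the similarity of their deep features — can be illustrated with a simple agglomerative grouping over per-class feature vectors. The clustering criterion and toy data below are assumptions for illustration, not the paper's exact procedure:

    ```python
    import numpy as np

    def build_visual_tree(class_features, n_groups):
        """Greedy agglomerative grouping of classes by feature similarity.

        class_features: (n_classes, dim) array, e.g. the mean deep-CNN feature
        of each class. Returns a list of groups (lists of class indices),
        i.e. one level of the visual tree.
        """
        n = len(class_features)
        clusters = [[i] for i in range(n)]
        centroids = [class_features[i].astype(float) for i in range(n)]
        while len(clusters) > n_groups:
            # Merge the pair of clusters with the closest centroids
            best, best_d = None, np.inf
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    d = np.linalg.norm(centroids[a] - centroids[b])
                    if d < best_d:
                        best, best_d = (a, b), d
            a, b = best
            clusters[a].extend(clusters.pop(b))
            centroids.pop(b)
            centroids[a] = class_features[clusters[a]].mean(axis=0)
        return clusters

    # Six toy "classes" forming two tight feature clusters
    feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                      [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
    groups = build_visual_tree(feats, n_groups=2)
    ```

    Repeating the grouping within each group yields the deeper levels of the tree on which the group-specific multi-task classifiers are trained.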

  1. The Lurking Snake in the Grass: Interference of Snake Stimuli in Visually Taxing Conditions

    Directory of Open Access Journals (Sweden)

    Sandra Cristina Soares

    2012-04-01

    Based on evolutionary considerations, it was hypothesized that humans have been shaped to easily spot snakes in visually cluttered scenes that might otherwise hide camouflaged snakes. This hypothesis was tested in a visual search experiment in which I assessed automatic attention capture to evolutionarily-relevant distractor stimuli (snakes), in comparison with another animal which is also feared but where this fear has a disputed evolutionary origin (spiders), and neutral stimuli (mushrooms). Sixty participants were engaged in a task that involved the detection of a target (a bird) among pictures of fruits. Unexpectedly, on some trials, a snake, a spider, or a mushroom replaced one of the fruits. The question of interest was whether the distracting stimuli slowed the reaction times for finding the target (the bird) to different degrees. Perceptual load of the task was manipulated by increments in the set size (6 or 12 items) on different trials. The findings showed that snake stimuli were processed preferentially, particularly under conditions where attentional resources were depleted, which reinforced the role of this evolutionarily-relevant stimulus in accessing the visual system (Isbell, 2009).

  2. The influence of response competition on cerebral asymmetries for processing hierarchical stimuli revealed by ERP recordings

    OpenAIRE

    Malinowski, Peter; Hübner, Ronald; Keil, Andreas; Gruber, Thomas

    2002-01-01

    It is widely accepted that the left and right hemispheres differ with respect to the processing of global and local aspects of visual stimuli. Recently, behavioural experiments have shown that this processing asymmetry strongly depends on the response competition between the global and local levels of a stimulus. Here we report electrophysiological data that underline this observation. Hemispheric differences for global/local processing were mainly observed for response-incompatible stimuli an...

  3. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

    Science.gov (United States)

    McFarland, D J; Cacace, A T

    1995-01-01

    Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

  4. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don’t induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which colour can be used to discriminate old/new status.

  5. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    Science.gov (United States)

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  6. Hierarchical reorganization of dimensions in OLAP visualizations.

    Science.gov (United States)

    Lafon, Sébastien; Bouali, Fatma; Guinot, Christiane; Venturini, Gilles

    2013-11-01

    In this paper, we propose a new method for the visual reorganization of online analytical processing (OLAP) cubes that aims at improving their visualization. Our method addresses dimensions with hierarchically organized members. It uses a genetic algorithm that reorganizes k-ary trees. Genetic operators perform permutations of subtrees to optimize a visual homogeneity function. We propose several ways to reorganize an OLAP cube depending on which set of members is selected for the reorganization: all of the members, only the displayed members, or the members at a given level (level-by-level approach). The results, evaluated using optimization criteria, show that our algorithm performs reliably even when it is limited to 1-minute runs. Our algorithm was integrated in an interactive 3D interface for OLAP. A user study was conducted to evaluate our approach with users. The results highlight the usefulness of reorganization in two OLAP tasks.
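    The core genetic operator — permuting sibling subtrees of the dimension hierarchy to improve a visual homogeneity score — can be sketched as a minimal (1+1) evolutionary loop. The nested-list tree encoding and the scoring function below are illustrative assumptions, not the paper's exact definitions:

    ```python
    import random

    def leaf_order(tree):
        """Left-to-right leaf values of a nested-list tree (the displayed members)."""
        if not isinstance(tree, list):
            return [tree]
        return [v for child in tree for v in leaf_order(child)]

    def homogeneity(tree):
        """Illustrative score: negative sum of jumps between adjacent leaves."""
        leaves = leaf_order(tree)
        return -sum(abs(a - b) for a, b in zip(leaves, leaves[1:]))

    def mutate(tree, rng):
        """Swap two sibling subtrees at random internal nodes (a genetic operator)."""
        if not isinstance(tree, list):
            return tree
        tree = [mutate(child, rng) for child in tree]
        if len(tree) >= 2 and rng.random() < 0.5:
            i, j = rng.sample(range(len(tree)), 2)
            tree[i], tree[j] = tree[j], tree[i]
        return tree

    def reorganize(tree, generations=200, seed=1):
        """Keep a mutated tree whenever it scores at least as well."""
        rng = random.Random(seed)
        best, best_score = tree, homogeneity(tree)
        for _ in range(generations):
            cand = mutate(best, rng)
            score = homogeneity(cand)
            if score >= best_score:
                best, best_score = cand, score
        return best

    # A 2-level hierarchy whose members start in a visually jumpy order
    result = reorganize([[7, 2], [9, 1], [8, 3]])
    ```

    Because the operators only permute siblings, the hierarchy itself is preserved; only the drawing order of the members changes, which is exactly what the visual reorganization requires.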

  7. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    Science.gov (United States)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  8. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  9. Implicit integration in a case of integrative visual agnosia.

    Science.gov (United States)

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  10. Physiological and behavioral reactions elicited by simulated and real-life visual and acoustic helicopter stimuli in dairy goats

    Science.gov (United States)

    2011-01-01

    Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters, and to combinations of these stimuli, under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were recorded before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli presented in their room: they raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable increase in the goats' average movement velocity within their enclosure, nor in the duration of movement, during stimulus presentation. There was also no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological or behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity in goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239

  11. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    Directory of Open Access Journals (Sweden)

    Ilona eCroy

    2013-09-01

    Full Text Available Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause a different disgust experience than sounds, odors or tactile stimuli. Therefore, disgust experiences evoked through four different sensory channels were compared. A total of 119 participants received 3 different disgusting stimuli and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of evoked disgust as well as responses of the autonomic nervous system (heart rate, skin conductance level, systolic blood pressure) were recorded, and the effects of stimulus labeling and of repeated presentation were analyzed. Ratings suggested that disgust could be evoked through all senses; they were highest for visual stimuli. However, the autonomic reaction towards disgusting stimuli differed according to the channel of presentation. In contrast to the other modalities, olfactory disgust stimuli provoked a strong decrease in systolic blood pressure. Additionally, labeling enhanced disgust ratings and autonomic reactions for olfactory and tactile, but not for visual and auditory stimuli. Repeated presentation indicated that participants' disgust ratings diminished for all but olfactory disgust stimuli. Taken together, we argue that the sensory channel through which a disgust reaction is evoked matters.

  12. Near-Infrared Triggered Stimulus-Responsive Photonic Crystals with Hierarchical Structures.

    Science.gov (United States)

    Lu, Tao; Pan, Hui; Ma, Jun; Li, Yao; Zhu, Shenmin; Zhang, Di

    2017-10-04

    Stimuli-responsive photonic crystals (PCs) triggered by light would provide a novel, intuitive and quantitative method for noninvasive detection. Inspired by the flame-detecting aptitude of fire beetles and the hierarchical photonic structures of butterfly wings, we herein developed near-infrared stimuli-responsive PCs by coupling photothermal Fe3O4 nanoparticles with thermoresponsive poly(N-isopropylacrylamide) (PNIPAM), using hierarchically photonic-structured butterfly wing scales as the template. Within 10 s, the nanoparticles converted near-infrared radiation into heat that triggered the phase transition of PNIPAM; this almost immediately altered the PNIPAM refractive index and shifted the composite's spectrum by ∼26 nm, leading to a direct visual readout. It is noteworthy that the whole process is durable and stable, mainly owing to the chemical bonding formed between PNIPAM and the biotemplate. We envision that this biologically inspired approach could be utilized in a broad range of applications and would have a great impact on various monitoring processes and medical sensing.

  13. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    Science.gov (United States)

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

    Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D); main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. This study has the potential to clarify brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.

  14. Visual laterality in dolphins: importance of the familiarity of stimuli

    Science.gov (United States)

    2012-01-01

    Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. a return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality in marine mammals investigated mainly discrimination processes. As dolphins are a migrant species, they are confronted with a changing environment. Being able to categorize new versus familiar objects would allow dolphins to adapt rapidly to novel environments. Visual laterality could be a prerequisite for this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, eyes were used indifferently to observe familiar objects of intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results point out some cognitive capacities of dolphins which might be crucial for their wild life given their fission-fusion social system.


  16. Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture

    Science.gov (United States)

    West, Greg L.; Anderson, Adam A. K.; Pratt, Jay

    2009-01-01

    Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…

  17. Retinal image quality and visual stimuli processing by simulation of partial eye cataract

    Science.gov (United States)

    Ozolinsh, Maris; Danilenko, Olga; Zavjalova, Varvara

    2016-10-01

    Visual stimuli were presented on a 4.3'' mobile phone screen inside a "Virtual Reality" adapter that allowed separation of the left and right eye visual fields. The contrast of the retinal image can thus be controlled by the image on the phone screen and, in parallel, at appropriate geometry, by the AC voltage applied to a scattering PDLC cell inside the adapter. Such optical pathway separation makes it possible to present spatially variant images to the two eyes that, after binocular fusion, acquire their characteristic indications. As visual stimuli we used grey and colored (the red-green opponent pair of vision in L*a*b* color space) spatially periodic stimuli for the left and right eyes, with spatial content that by addition or subtraction resulted in clockwise or counterclockwise slanted Gabor gratings. We performed computer modeling with numerical addition or subtraction of signals, similar to processing in the brain via decomposition of the stimulus input into luminance and color-opponency components. It revealed that the psychophysical equilibrium point between clockwise and counterclockwise perception of the summation depends on the contrast and color saturation of one eye's image and on the strength of the retinal aftereffects. A psychophysical equilibrium point in the perception of the summation exists only after prior adaptation to a slanted periodic grating, at the appropriate slant orientation of the adaptation grating and/or at the appropriate spatial phase of the grating pattern relative to the grating nodes. Observer perception experiments in which one eye's images were deteriorated by simulated cataract confirmed that this psychophysical equilibrium point shifts with the degree of artificial cataract. We also analyzed the emission spectra of the mobile-device stimuli, paying attention to spectral regions near the absorption maxima of macular pigments and to blue regions where intense irradiation can cause abnormalities in periodic melatonin
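    The luminance/color-opponency decomposition and the numerical addition or subtraction of the two eyes' signals described above can be illustrated with a minimal sketch. The channel weights and function names below are illustrative assumptions, not the decomposition actually used in the study (which worked in L*a*b* space):

    ```python
    import numpy as np

    def to_opponent(rgb):
        """Decompose an RGB image (floats in [0, 1]) into a luminance channel
        and two chromatic-opponent channels (red-green, blue-yellow).
        The weights here are a textbook-style simplification."""
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        lum = (r + g + b) / 3.0          # achromatic (luminance) component
        rg = r - g                       # red-green opponency
        by = b - (r + g) / 2.0           # blue-yellow opponency
        return np.stack([lum, rg, by], axis=-1)

    def binocular_combine(left_rgb, right_rgb, w_left=0.5):
        """Model binocular fusion as a per-channel weighted sum of the two
        eyes' opponent decompositions; lowering w_left mimics a weaker
        (e.g. cataract-degraded) contribution from the left eye."""
        return w_left * to_opponent(left_rgb) + (1 - w_left) * to_opponent(right_rgb)
    ```

    With such a model, the shift of the perceptual equilibrium point can be probed by reducing one eye's contrast before combination and observing which slant dominates the summed opponent signal.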

  18. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    Science.gov (United States)

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Prey capture behaviour evoked by simple visual stimuli in larval zebrafish

    Directory of Open Access Journals (Sweden)

    Isaac Henry Bianco

    2011-12-01

    Full Text Available Understanding how the nervous system recognises salient stimuli in the environment and selects and executes the appropriate behavioural responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually-guided behaviour in larval zebrafish, we developed virtual reality assays in which precisely controlled visual cues can be presented to larvae whilst their behaviour is automatically monitored using machine-vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low-amplitude orienting turns (∼20°) towards small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analysing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence, and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behaviour in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey.

  20. Hierarchical tone mapping for high dynamic range image visualization

    Science.gov (United States)

    Qiu, Guoping; Duan, Jiang

    2005-07-01

    In this paper, we present a computationally efficient, practically easy-to-use tone mapping technique for the visualization of high dynamic range (HDR) images on low dynamic range (LDR) reproduction devices. The new method, termed the hierarchical nonlinear linear (HNL) tone-mapping operator, maps the pixels in two hierarchical steps. The first step allocates appropriate numbers of LDR display levels to different HDR intensity intervals according to the pixel densities of the intervals. The second step linearly maps the HDR intensity intervals to their allocated LDR display levels. In the developed HNL scheme, the assignment of LDR display levels to HDR intensity intervals is controlled by a very simple and flexible formula with a single adjustable parameter. We also show that the new operator can be used for the effective enhancement of ordinary images.
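    The two-step HNL scheme described above can be sketched as follows. This is an illustrative reconstruction, not the authors' exact formula: the density-based allocation rule, the `alpha` parameter (standing in for the paper's single adjustable parameter), and all function names are assumptions.

    ```python
    import numpy as np

    def hnl_tone_map(hdr, n_intervals=64, n_display=256, alpha=0.5):
        """Hierarchical nonlinear/linear (HNL)-style tone mapping sketch.

        Step 1 (nonlinear): allocate LDR display levels to HDR intensity
        intervals according to their pixel densities, softened by `alpha`
        (alpha=1 approximates histogram equalization; alpha=0 gives a
        uniform allocation, i.e. a plain linear map).
        Step 2 (linear): map each interval linearly onto its allocated levels.
        """
        flat = hdr.ravel()
        edges = np.linspace(flat.min(), flat.max(), n_intervals + 1)
        hist, _ = np.histogram(flat, bins=edges)
        weights = hist.astype(float) ** alpha
        weights /= weights.sum()
        # cumulative display level assigned to the end of each interval
        cum_levels = np.concatenate(([0.0], np.cumsum(weights))) * (n_display - 1)
        # piecewise-linear map: interval edges -> allocated display levels
        ldr = np.interp(flat, edges, cum_levels)
        return ldr.reshape(hdr.shape).astype(np.uint8)
    ```

    Because the allocation is cumulative and the per-interval maps are linear, the overall mapping is monotone: brighter HDR pixels never map to darker display levels.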

  1. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    Science.gov (United States)

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed at determining the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization level (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level dependent (BOLD) contrast. Metabolic responses were assessed using functional ¹H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between the metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  2. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller to enhance BCI performance, we designed a visual-auditory speller, examining the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities were proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than the unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; these differences are important and should inspire future studies to investigate their causes. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent of each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel, and it brings new insight into true multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.

  3. [WMN: a negative ERPs component related to working memory during non-target visual stimuli processing].

    Science.gov (United States)

    Zhao, Lun; Wei, Jin-he

    2003-10-01

    To study non-target stimulus processing in the brain, features of the event-related potentials (ERPs) elicited by non-target stimuli during a selective response (SR) task were compared with those during a visual selective discrimination (DR) task in 26 normal subjects. The stimuli consisted of two color LED flashes (red and green) appearing randomly in the left (LVF) or right (RVF) visual field with equal probability. ERPs were derived at 9 electrode sites on the scalp under 2 task conditions: a) SR, making a switch response in one direction to the target (T) stimuli from the LVF or RVF and making no response to the non-target (NT) ones; b) DR, making switch responses to T stimuli differentially, i.e., to the left for T from the LVF and to the right for T from the RVF. 1) The non-target stimuli in the DR condition, compared with the SR condition, elicited smaller P2 and P3 components and a larger N2 component at the frontal brain areas; 2) a significant negative component, named WMN (working memory negativity), appeared in the non-target ERPs during DR in the period of 100 to 700 ms post-stimulation, predominantly at the frontal brain areas. Given the major difference between brain activities for non-target stimuli during SR and DR, the predominance of the WMN at the frontal brain areas demonstrates that non-target stimulus processing is an active process related to working memory, i.e., the temporary elimination and retrieval of the response mode stored in working memory.

  4. Metabolic response of optic centers to visual stimuli in the albino rat: anatomical and physiological considerations

    International Nuclear Information System (INIS)

    Toga, A.W.; Collins, R.C.

    1981-01-01

    The functional organization of the visual system was studied in the albino rat. Metabolic differences were measured using the ¹⁴C-2-deoxyglucose (DG) autoradiographic technique during visual stimulation of one entire retina in unrestrained animals. All optic centers responded to changes in light intensity but to different degrees. The greatest change occurred in the superior colliculus, less in the lateral geniculate, and considerably less in second-order sites such as layer IV of visual cortex. These optic centers responded in particular to on/off stimuli, but showed no incremental change during pattern reversal or movement of orientation stimuli. Both the superior colliculus and lateral geniculate increased their metabolic rate as the frequency of stimulation increased, but the magnitude was twice as great in the colliculus. The histological pattern of metabolic change in the visual system was not homogeneous. In the superior colliculus, glucose utilization increased only in stratum griseum superficiale and was greatest in visuotopic regions representing the peripheral portions of the visual field. Similarly, in the lateral geniculate, only the dorsal nucleus showed an increased response to greater stimulus frequencies. Second-order regions of the visual system showed changes in metabolism in response to visual stimulation, but no incremental response specific to the type or frequency of stimuli. ¹⁴C-amino acids were used to label proteins of axoplasmic transport and thereby study the terminal fields of retinal projections; this was done to examine how the differences in the magnitude of the metabolic response among optic centers were related to the relative quantity of retinofugal projections to these centers.

  5. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validating learning and classification results and understanding the human-autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  6. P1-32: Response of Human Visual System to Paranormal Stimuli Appearing in Three-Dimensional Display

    Directory of Open Access Journals (Sweden)

    Jisoo Hong

    2012-10-01

    Full Text Available Three-dimensional (3D) display became one of the indispensable features of commercial TVs in recent years. However, the 3D content shown by a 3D display may contain an abrupt change of depth when the scene changes, which might be considered a paranormal stimulus. Because the human visual system is not accustomed to such paranormal stimuli under natural conditions, they can cause unexpected responses, which usually induce discomfort. Following the change of depth expressed by the 3D display, the eyeballs rotate to match convergence to the new 3D image position. The amount of rotation varies according to the initial longitudinal location and the depth displacement of the 3D image. Because the change of depth is abrupt, there is a delay in the human visual system following the change, and such delay can be a source of discomfort. To guarantee safety in watching 3D TV, the acceptable level of displacement in the longitudinal direction should be determined quantitatively. Additionally, artificially generated scenes can also provide paranormal stimuli such as periodic depth variations. In this presentation, we investigate the response of the human visual system to such paranormal stimuli given by a 3D display system. Using the results of this investigation, we can give guidelines for creating 3D content that minimizes the discomfort coming from paranormal stimuli.

  7. A Basic Study on P300 Event-Related Potentials Evoked by Simultaneous Presentation of Visual and Auditory Stimuli for the Communication Interface

    Directory of Open Access Journals (Sweden)

    Masami Hashimoto

    2011-10-01

    Full Text Available We have been engaged in the development of a brain-computer interface (BCI) based on the cognitive P300 event-related potentials (ERPs) evoked by simultaneous presentation of visual and auditory stimuli, in order to assist communication for persons with severe physical limitations. The purpose of the simultaneous presentation of these stimuli is to give the user more choices as commands. First, we extracted P300 ERPs using either a visual or an auditory oddball paradigm, and measured the amplitude and latency of the P300 ERPs. Second, visual and auditory stimuli were presented simultaneously, and we measured the P300 ERPs while varying the combinations of these stimuli. In this report, we used 3 colors as visual stimuli and 3 types of MIDI sounds as auditory stimuli. Two types of simultaneous presentation were examined. One used random combinations. The other, called group stimulation, combined one color, such as red, with one MIDI sound, such as piano, to form a group; three groups were made, and each group was presented to users randomly. We evaluated the feasibility of a BCI using these stimuli from the amplitudes and latencies of the P300 ERPs.

  8. Visualization of hierarchically structured information for human-computer interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Suh Hyun; Lee, J. K.; Choi, I. K.; Kye, S. C.; Lee, N. K. [Dongguk University, Seoul (Korea)

    2001-11-01

Visualization techniques can be used to support operators' information navigation tasks, especially on systems comprising an enormous volume of information, such as the operating information display system and the computerized operating procedure system in the advanced control room of nuclear power plants. By offering an easily understood view of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result, operators can pay more attention to their primary tasks and ultimately improve cognitive task performance. In this report, an interface was designed and implemented using a hyperbolic visualization technique, which is expected to serve as a means of optimizing operators' information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)

  9. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice is to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction times and errors when responding with the right and left hands were recorded to determine whether there were lateralization effects on these tasks. Our findings showed that visual self-recognition of facial photographs appears to be superior to auditory self-recognition of voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand reaction-time advantage on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Hierarchical representation of shapes in visual cortex - from localized features to figural shape segregation

    Directory of Open Access Journals (Sweden)

    Stephan eTschechne

    2014-08-01

Visual structures in the environment are effortlessly segmented into image regions, which are then combined into a representation of surfaces and prototypical objects. Such perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. At this stage, highly articulated changes in shape boundary as well as very subtle curvature changes contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes a hierarchical distributed representation of shape features to encode boundary features over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing combined with feedback from representations generated at higher stages. In so doing, global configurational as well as local information is available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border-ownership directions and thus achieve segregation of figure and ground. This combines separate findings about the generation of cortical shape representations using hierarchical representations with figure-ground segregation mechanisms. Our model is probed with a selection of artificial and real-world images to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  11. Correlation between MEG and BOLD fMRI signals induced by visual flicker stimuli

    Institute of Scientific and Technical Information of China (English)

    Chu Renxin; Holroyd Tom; Duyn Jeff

    2007-01-01

The goal of this work was to investigate how the MEG signal amplitude correlates with that of BOLD fMRI. To investigate the correlation between fMRI and macroscopic electrical activity, BOLD fMRI and MEG were performed on the same subjects (n = 5). A visual flicker stimulus of varying temporal frequency was used to elicit neural responses in early visual areas. A strong similarity was observed in the frequency tuning curves of the two modalities. Although, averaged over subjects, the BOLD tuning curve was somewhat broader than that of MEG, both BOLD and MEG had maxima at a flicker frequency of 10 Hz. We also measured the first and second harmonic components at the stimulus frequency with MEG. At low stimulus frequencies (below 6 Hz), the second harmonic has an amplitude comparable to the first harmonic, which implies that the neural frequency response is nonlinear, with more nonlinear components at low frequencies than at high frequencies.

  12. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.

  13. Stress Induction and Visual Working Memory Performance: The Effects of Emotional and Non-Emotional Stimuli

    Directory of Open Access Journals (Sweden)

    Zahra Khayyer

    2017-05-01

Background Some studies have shown working memory impairment following stressful situations. Researchers have also found that working memory performance depends on many factors, such as the emotional load of the stimuli and gender. Objectives The present study aimed to determine the effects of stress induction on visual working memory (VWM) performance among female and male university students. Methods This quasi-experimental research employed a posttest-only control group design (within-group study). A total of 62 university students (32 males and 30 females; mean age 23.73) were randomly selected and allocated to experimental and control groups. Stress was induced using the cold pressor test (CPT), and an n-back task was then administered to evaluate visual working memory function (the number of correct items, reaction times, and the number of wrong items) using emotional and non-emotional pictures. One hundred pictures of different valences were selected from the International Affective Picture System (IAPS). Results Results showed that stress impaired different visual working memory functions (P < 0.002 for correct scores, P < 0.001 for reaction time, and P < 0.002 for wrong items). Conclusions In general, stress significantly decreases VWM performance. Females were more strongly affected by stress than males, and VWM performance was better for emotional stimuli than for non-emotional stimuli.

  14. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    Science.gov (United States)

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

Musical melodies have "peaks" and "valleys". Although the vertical connotations of pitch and music are well known, the mechanisms underlying their mental representation remain elusive. We show evidence for the importance of previous experience with melodies in the emergence of crossmodal interactions. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones of different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Numerosity estimation in visual stimuli in the absence of luminance-based cues.

    Directory of Open Access Journals (Sweden)

    Peter Kramer

    2011-02-01

Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as (in visual stimuli) spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.
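A second-order (contrast-defined) stimulus of the kind described can be sketched as follows; the carrier, envelope width, and item positions are illustrative assumptions, not the authors' actual stimuli. Items are defined purely by local contrast modulation of zero-mean noise, so mean luminance carries no cue to their presence:

```python
import numpy as np

def second_order_frame(size=128, centers=((32, 32), (96, 64), (48, 100)),
                       sigma=6, rng=None):
    """Contrast-modulated noise frame: items appear only as regions of
    elevated local contrast; expected luminance is uniform everywhere."""
    rng = rng or np.random.default_rng(0)
    carrier = rng.choice([-1.0, 1.0], size=(size, size))  # zero-mean binary noise
    y, x = np.mgrid[0:size, 0:size]
    envelope = np.zeros((size, size))
    for cy, cx in centers:                                # one Gaussian blob per item
        envelope += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
    contrast = 0.2 + 0.8 * np.clip(envelope, 0, 1)        # baseline + item contrast
    return 0.5 + 0.5 * contrast * carrier                 # luminance in [0, 1]

frame = second_order_frame()
# Mean luminance is ~0.5 everywhere; only local contrast marks the items.
print(round(frame.mean(), 3),
      frame[26:38, 26:38].std() > frame[:12, :12].std())
```

Because luminance is 0.5 plus a zero-mean term at every pixel, first-order (luminance-averaging) mechanisms see a blank field; only contrast-sensitive, second-order mechanisms can locate the items.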

  16. Cortical responses from adults and infants to complex visual stimuli.

    Science.gov (United States)

    Schulman-Galambos, C; Galambos, R

    1978-10-01

Event-related potentials (ERPs) time-locked to the onset of visual stimuli were extracted from the EEG of normal adult (N = 16) and infant (N = 23) subjects. Subjects were not required to make any response. Stimuli delivered to the adults were 150-msec exposures of 2 sets of colored slides projected in 4 blocks, 2 in focus and 2 out of focus. Infants received 2-sec exposures of slides showing people, colored drawings or scenes from Disneyland, as well as 2-sec illuminations of the experimenter as she played a game or of a TV screen the baby was watching. The adult ERPs showed 6 waves (N1 through P4) in the 140-600 msec range; this included a positive wave at around 350 msec that was large when the stimuli were focused and smaller when they were not. The waves in the 150-200 msec range, by contrast, steadily dropped in amplitude as the experiment progressed. The infant ERPs differed greatly from the adult ones in morphology, usually showing a positive (latency about 200 msec), negative (500-600 msec), positive (1000 msec) sequence. This ERP appeared in all the stimulus conditions; its presence or absence, furthermore, was correlated with whether or not the baby seemed interested in the stimuli. Four infants failed to produce these ERPs; an independent measure of attention to the stimuli, heart-rate deceleration, was demonstrated in two of them. An electrode placed beneath the eye to monitor eye movements yielded ERPs closely resembling those derived from the scalp in most subjects; reasons are given for assigning this response to activity in the brain, probably at the frontal pole. This study appears to be one of the first to search for cognitive 'late waves' in a no-task situation. The results suggest that further work with such task-free paradigms may yield additional useful techniques for studying the ERP.

  17. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    Science.gov (United States)

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  18. Stimuli-responsive liquid crystalline materials

    NARCIS (Netherlands)

    Debije, M.G.; Schenning, A.P.H.J.; Hashmi, Saleem

    2016-01-01

Stimuli-responsive materials, which respond to triggers from the environment by changing their properties, are one of the focal points of materials science. For precise functional properties, well-defined hierarchically ordered supramolecular materials are crucial. The self-assembly of liquid crystals

  19. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

Experiment 1 examined modality preferences in children and adults with normal hearing for combined auditory-visual stimuli. Experiment 2 compared modality preferences of children using cochlear implants who were participating in an auditory-emphasized therapy approach with those of the normal-hearing children from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity affected modality preferences only in adults, who showed a strong visual preference for unfamiliar stimuli only. The finding that children with hearing loss showed a degree of auditory preference similar to that of children with normal hearing is original and lends support to an auditory emphasis in habilitation. Readers will be able to (1) describe the pattern of modality preferences reported in young children without hearing loss; (2) recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) understand the role of familiarity in modality preferences in children with and without hearing loss.

  20. Preserved suppression of salient irrelevant stimuli during visual search in Age-Associated Memory Impairment

    Directory of Open Access Journals (Sweden)

    Laura eLorenzo-López

    2016-01-01

Previous studies have suggested that older adults with age-associated memory impairment (AAMI) may show a significant decline in attentional resource capacity and inhibitory processes in addition to memory impairment. In the present paper, the potential attentional capture by task-irrelevant stimuli was examined in older adults with AAMI compared to healthy older adults using scalp-recorded event-related brain potentials (ERPs). ERPs were recorded during the execution of a visual search task, in which the participants had to detect the presence of a target stimulus that differed from distractors by orientation. To explore the automatic attentional capture phenomenon, an irrelevant distractor stimulus defined by a different feature (color) was also presented without the participants' prior knowledge. A consistent N2pc, an electrophysiological indicator of attentional deployment, was present for target stimuli but not for task-irrelevant color stimuli, suggesting that these irrelevant distractors did not attract attention in AAMI older adults. Furthermore, the N2pc for targets was significantly delayed in AAMI patients compared to healthy older controls. Together, these findings suggest a specific impairment of the attentional selection process for relevant target stimuli in these individuals, and indicate that the mechanism of top-down suppression of entirely task-irrelevant stimuli is preserved, at least when the target and the irrelevant stimuli are perceptually very different.

  1. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    OpenAIRE

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for...

  2. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2018-04-12

    Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

  3. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    Directory of Open Access Journals (Sweden)

    Ahmadreza Keihani

    2018-05-01

Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED display presents high-frequency waveforms with a better refresh rate. In this study, we present high-frequency sine-wave simple and rhythmic patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns were chosen: 3 simple (repetition of one of the above frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different sequences, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects aged 23-30 years (25 ± 2.1) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze the SSVEP responses. The data, including SSVEP features and fatigue rates for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic group [3.85 ± 2.13] and that of the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25
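The CCA step commonly used for SSVEP frequency detection compares the multichannel EEG against sine/cosine reference sets at each candidate stimulation frequency (plus harmonics) and picks the frequency with the largest canonical correlation. A minimal sketch on synthetic data (not the authors' code; the channel count, noise level, and frequencies are invented for illustration):

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between two signal sets (samples x channels),
    computed via QR decompositions and an SVD."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, freqs, n_harm=2):
    """Return the candidate frequency whose sine/cosine reference set
    (including harmonics) correlates best with the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harm + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]

# Synthetic 2-channel EEG dominated by a 30 Hz component plus noise.
rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 30 * t)
eeg = np.column_stack([sig + rng.normal(0, 1, t.size),
                       0.8 * sig + rng.normal(0, 1, t.size)])
print(ssvep_detect(eeg, fs, [25, 30, 35]))
```

Longer time windows (TWs) give the reference set more samples to lock onto, which is consistent with the accuracy figures above improving for TWs > 1 s.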

  4. System to induce and measure embodiment of an artificial hand with programmable convergent visual and tactile stimuli.

    Science.gov (United States)

    Benz, Heather L; Sieff, Talia R; Alborz, Mahsa; Kontson, Kimberly; Kilpatrick, Elizabeth; Civillico, Eugene F

    2016-08-01

    The sense of prosthesis embodiment, or the feeling that the device has been incorporated into a user's body image, may be enhanced by emerging technology such as invasive electrical stimulation for sensory feedback. In turn, prosthesis embodiment may be linked to increased prosthesis use and improved functional outcomes. We describe the development of a tool to assay artificial hand embodiment in a quantitative way in people with intact limbs, and characterize its operation. The system delivers temporally coordinated visual and tactile stimuli at a programmable latency while recording limb temperature. When programmed to deliver visual and tactile stimuli synchronously, recorded latency between the two was 33 ± 24 ms in the final pilot subject. This system enables standardized assays of the conditions necessary for prosthesis embodiment.

  5. An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.

    Science.gov (United States)

    Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan

    2015-08-15

This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task involving randomly presented frequent stimuli and two types of infrequent stimuli: targets and distractors. We developed a modified categorization of rare stimuli that incorporates the type of the preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored the hemodynamic response modulation associated with an increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. For distractors that followed targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. This technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare-stimulus intervals. The methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.
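The sequence categorization described, labeling each rare stimulus by the type of the rare stimulus that preceded it, can be sketched as follows (the `F`/`T`/`D` coding and the count of intervening frequent stimuli are illustrative assumptions, not the authors' exact scheme):

```python
def categorize_rares(seq):
    """Label each rare stimulus (T = target, D = distractor) by the type of
    the preceding rare stimulus, together with the number of frequent (F)
    stimuli presented in between (the rare-to-rare interval)."""
    labels, prev_rare, gap = [], None, 0
    for s in seq:
        if s == 'F':
            gap += 1                               # frequent stimulus: widen the gap
        else:
            if prev_rare is not None:              # first rare item has no predecessor
                labels.append((prev_rare + s, gap))  # e.g. ('TD', 2): D preceded by T
            prev_rare, gap = s, 0
    return labels

seq = list("FFTFFDFFFTFTFFD")
print(categorize_rares(seq))  # → [('TD', 2), ('DT', 3), ('TT', 1), ('TD', 2)]
```

Each label pair ('TT', 'TD', 'DT', 'DD') then defines a regressor category for the event-related analysis, with the gap count available as a parametric modulator for the interval effect.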

  6. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    Science.gov (United States)

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.


  8. Stress Sensitive Healthy Females Show Less Left Amygdala Activation in Response to Withdrawal-Related Visual Stimuli under Passive Viewing Conditions

    Science.gov (United States)

    Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert

    2012-01-01

The amygdalae are key players in the processing of a variety of emotional stimuli. Aversive visual stimuli in particular have been reported to attract attention and to activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate rather than activate amygdala neuronal responses, its role under…

  9. Visual attention to spatial and non-spatial visual stimuli is affected differentially by age: effects on event-related brain potentials and performance data.

    NARCIS (Netherlands)

    Talsma, D.; Kok, A.; Ridderinkhof, K.R.

    2006-01-01

    To assess selective attention processes in young and old adults, behavioral and event-related potential (ERP) measures were recorded. Streams of visual stimuli were presented from left or right locations (Experiment 1) or from a central location and comprising two different spatial frequencies

  10. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, little is known about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration changed from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of change with expanding SOA was similar to that of younger adults; however, older adults showed significantly delayed onset of the time window of integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely as SOA expanded, especially in the peak latency for V-preceded-A conditions. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses were slowed in older adults and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  11. Hierarchical neural network model of the visual system determining figure/ground relation

    Science.gov (United States)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, corresponding to objects in 3D space, are distinguished from the background region extending behind the objects. The author previously proposed a neural network model of figure/ground separation built on the idea that local geometric features, such as curvatures and outer angles at corners, are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines figure/ground relations within a short period (Zhou et al., 2000). To speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model and confirms, by simulation, the effect of hierarchization on computation time. As the number of layers increased, the required computation time decreased. However, this speed-up effect saturated once the number of layers grew beyond a certain point. This study explains the saturation effect using the notion of average distance between vertices from the field of complex networks, and succeeded in reproducing it by computer simulation.
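
    The saturation argument, that adding coarser layers shortens signal paths only up to a point, can be illustrated with a toy graph model. The sketch below is my own construction, not the author's network: a ring of contour nodes with successively halved coarser rings linked parent-to-child, over which the average shortest-path distance is measured.

```python
from collections import deque

def ring_with_hierarchy(n, levels):
    """Adjacency sets: a ring of n contour nodes plus `levels` coarser rings,
    each half the size, with parent-child shortcut links (toy construction)."""
    adj = {}
    def link(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    size = n
    for lev in range(levels + 1):
        for i in range(size):
            link((lev, i), (lev, (i + 1) % size))  # ring at this level
            if lev > 0:
                link((lev, i), (lev - 1, 2 * i))       # shortcuts down to the
                link((lev, i), (lev - 1, 2 * i + 1))   # two child nodes
        size //= 2
    return adj

def avg_distance(adj):
    """Mean shortest-path length over all node pairs (BFS from every node)."""
    total = cnt = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        cnt += len(dist) - 1
    return total / cnt
```

    Computing `avg_distance` for 0, 2, and 4 added levels on a 64-node contour shows the average distance dropping as layers are added, mirroring the reported speed-up.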

  12. Emotion attribution to basic parametric static and dynamic stimuli

    NARCIS (Netherlands)

    Visch, V.; Goudbeek, M.B.; Cohn, J.; Nijholt, A.; Pantic, P.

    2009-01-01

    The following research investigates the effect of basic visual stimuli on the attribution of basic emotions by the viewer. In an empirical study (N = 33) we used two groups of visually minimal expressive stimuli: dynamic and static. The dynamic stimuli consisted of an animated circle moving

  13. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine activation in the visual and auditory cortices of each macaque while pure tones were presented as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B than of group A. Together, these findings suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and that this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  15. The relationship between age and brain response to visual erotic stimuli in healthy heterosexual males.

    Science.gov (United States)

    Seo, Y; Jeong, B; Kim, J-W; Choi, J

    2010-01-01

    Various changes in sexuality, including decreased sexual desire and erectile dysfunction, accompany aging. To understand the effect of aging on sexuality, we explored the relationship between age and the brain response to visual erotic stimulation in sexually active male subjects. In twelve healthy, heterosexual male subjects (age 22-47 years), functional magnetic resonance imaging (fMRI) signals of brain activation were recorded while they passively viewed erotic (ERO), happy-faced (HA) couple, food, and nature pictures. Mixed-effects analysis and correlation analysis were performed to investigate the relationship between age and the change in brain activity elicited by erotic stimuli. Our results showed that age was positively correlated with activation of the right occipital fusiform gyrus and amygdala, and negatively correlated with activation of the right insula and inferior frontal gyrus. These findings suggest that, in sexually healthy men, age may be related to functional decline in brain regions involved in both interoceptive sensation and prefrontal modulation, while being related to incremental activity in brain regions for early processing of visual emotional stimuli.

  16. Brain reactivity to visual food stimuli after moderate-intensity exercise in children.

    Science.gov (United States)

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; Larson, Michael J; Keller, Kathleen L; Fearnbach, S Nicole; Evans, Alyssa; LeCheminant, James D

    2017-09-19

    Exercise may play a role in moderating eating behaviors. The purpose of this study was to examine the effect of an acute bout of exercise on neural responses to visual food stimuli in children ages 8-11 years. We hypothesized that acute exercise would result in reduced activity in reward areas of the brain. Using a randomized cross-over design, 26 healthy weight children completed two separate laboratory conditions (exercise; sedentary). During the exercise condition, each participant completed a 30-min bout of exercise at moderate intensity (~67% HR maximum) on a motor-driven treadmill. During the sedentary session, participants sat continuously for 30 min. Neural responses to high- and low-calorie pictures of food were determined immediately following each condition using functional magnetic resonance imaging. There was a significant exercise condition*stimulus-type (high- vs. low-calorie pictures) interaction in the left hippocampus and right medial temporal lobe. The brain thus responds to visual food stimuli differently following an acute bout of exercise compared to a non-exercise sedentary session in 8-11 year-old children. Specifically, an acute bout of exercise results in greater activation to high-calorie and reduced activation to low-calorie pictures of food in both the left hippocampus and right medial temporal lobe. This study shows that the response to external food cues can be altered by exercise, and understanding this mechanism will inform the development of future interventions aimed at altering energy intake in children.

  17. Virtual reality stimuli for force platform posturography.

    Science.gov (United States)

    Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko

    2002-01-01

    People who rely heavily on vision in the control of posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.

  18. Visual attention to meaningful stimuli by 1- to 3-year olds: implications for the measurement of memory.

    Science.gov (United States)

    Hayne, Harlene; Jaeger, Katja; Sonne, Trine; Gross, Julien

    2016-11-01

    The visual recognition memory (VRM) paradigm has been widely used to measure memory during infancy and early childhood; it has also been used to study memory in human and nonhuman adults. Typically, participants are familiarized with stimuli that have no special significance to them. Under these conditions, greater attention to the novel stimulus during the test (i.e., novelty preference) is used as the primary index of memory. Here, we took a novel approach to the VRM paradigm and tested 1-, 2-, and 3-year olds using photos of meaningful stimuli that were drawn from the participants' own environment (e.g., photos of their mother, father, siblings, house). We also compared their performance to that of participants of the same age who were tested in an explicit pointing version of the VRM task. Two- and 3-year olds exhibited a strong familiarity preference for some, but not all, of the meaningful stimuli; 1-year olds did not. At no age did participants exhibit the kind of novelty preference that is commonly used to define memory in the VRM task. Furthermore, when compared to pointing, looking measures provided a rough approximation of recognition memory, but in some instances, the looking measure underestimated retention. The use of meaningful stimuli raises important questions about the way in which visual attention is interpreted in the VRM paradigm and may provide new opportunities to measure memory during infancy and early childhood. © 2016 Wiley Periodicals, Inc.

  19. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  20. Effective visualization assay for alcohol content sensing and methanol differentiation with solvent stimuli-responsive supramolecular ionic materials.

    Science.gov (United States)

    Zhang, Li; Qi, Hetong; Wang, Yuexiang; Yang, Lifen; Yu, Ping; Mao, Lanqun

    2014-08-05

    This study demonstrates a rapid visualization assay for on-spot sensing of alcohol content as well as for discriminating methanol-containing beverages with a solvent stimuli-responsive supramolecular ionic material (SIM). The SIM is synthesized by ionic self-assembly of the imidazolium-based dication C10(mim)2 and the dianionic 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) in water and shows water stability, a solvent stimuli-responsive property, and adaptive encapsulation capability. The rationale for the visualization assay demonstrated here is based on the combined utilization of the unique properties of the SIM, including its water stability, ethanol stimuli-responsive feature, and adaptive encapsulation capability toward optically active rhodamine 6G (Rh6G); the addition of ethanol to a stable aqueous dispersion of Rh6G-encapsulated SIM (Rh6G-SIM) disrupts the Rh6G-SIM structure, resulting in the release of Rh6G from the SIM into the solvent. Alcohol content can thus be visualized with the naked eye through the color change of the dispersion caused by the addition of ethanol. Alcohol content can also be quantified by measuring the fluorescence line of Rh6G released from Rh6G-SIM on a thin-layer chromatography (TLC) plate in response to alcoholic beverages. By fixing the diffusion distance of the mobile phase, the length of the fluorescence line of Rh6G shows a linear relationship with alcohol content (vol %) within a concentration range from 15% to 40%. We utilized this visualization assay for on-spot visualization of the alcohol contents of three Chinese commercial spirits and for discriminating methanol-containing counterfeit beverages. We found that the addition of a trace amount of methanol leads to a large increase in the length of the Rh6G line on TLC plates, which provides a method to identify methanol-adulterated beverages with labeled ethanol content. This study provides a simple yet effective assay for alcohol content sensing and methanol differentiation.
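
    The quantification step above, a fluorescence line length varying linearly with alcohol content over 15-40 vol %, amounts to a one-variable linear calibration. The sketch below uses hypothetical calibration numbers (the real calibration data are reported in the paper); it fits the line by least squares and inverts it to read off an unknown sample.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical calibration points: (alcohol vol %, Rh6G line length in mm)
cal_pct = [15, 20, 25, 30, 35, 40]
cal_len = [12.0, 16.0, 20.0, 24.0, 28.0, 32.0]
slope, intercept = fit_line(cal_pct, cal_len)

def alcohol_content(length_mm):
    """Invert the calibration: estimate vol % from a measured line length."""
    return (length_mm - intercept) / slope
```

    With these made-up numbers a 20 mm line reads back as roughly 25 vol %; in practice the calibration would be refit per TLC plate and mobile-phase distance.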

  1. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors

    NARCIS (Netherlands)

    Gola, M.; Wordecha, M.; Marchewka, A.; Sescousse, G.T.

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain

  2. Global perception depends on coherent work of bilateral visual cortices: transcranial magnetic stimulation (TMS) studies.

    Science.gov (United States)

    Zhang, Xin; Han, ShiHui

    2007-08-01

    Previous research suggests that the right and left hemispheres dominate global and local perception of hierarchical patterns, respectively. The current work examined whether global perception of hierarchical stimuli requires coherent work of the bilateral visual cortices, using transcranial magnetic stimulation (TMS). Subjects discriminated global or local properties of compound letters in Experiment 1. Reaction times were recorded while single-pulse real TMS or sham TMS was delivered over the left or right visual cortex. While a global precedence effect (i.e., faster responses to global than local targets and stronger global-to-local interference than the reverse) was observed, TMS decreased global-to-local interference while increasing local-to-global interference. Experiment 2 ruled out the possibility that the effects observed in Experiment 1 resulted from perceptual learning. Experiment 3 used compound shapes and observed TMS effects similar to those in Experiment 1. Moreover, in Experiment 3, TMS also slowed global RTs while speeding up local RTs. Finally, the TMS effects observed in Experiments 1 and 3 did not differ between conditions in which TMS was applied over the left and right hemispheres. The results support a coherence hypothesis: global perception of compound stimuli depends upon the coherent work of the bilateral visual cortices.

  3. Resting-state functional connectivity remains unaffected by preceding exposure to aversive visual stimuli.

    Science.gov (United States)

    Geissmann, Léonie; Gschwind, Leo; Schicktanz, Nathalie; Deuring, Gunnar; Rosburg, Timm; Schwegler, Kyrill; Gerhards, Christiane; Milnik, Annette; Pflueger, Marlon O; Mager, Ralph; de Quervain, Dominique J F; Coynel, David

    2018-02-15

    While much is known about the immediate brain activity changes induced by confrontation with emotional stimuli, the subsequent temporal unfolding of emotions has yet to be explored. To investigate whether exposure to emotionally aversive pictures affects subsequent resting-state networks differently from exposure to neutral pictures, a resting-state fMRI study implementing a two-group repeated-measures design in healthy young adults (N = 34) was conducted. We focused on investigating (i) patterns of amygdala whole-brain and hippocampus connectivity in both a seed-to-voxel and a seed-to-seed approach, (ii) whole-brain resting-state networks with an independent component analysis coupled with dual regression, and (iii) the amygdala's fractional amplitude of low-frequency fluctuations, all while EEG was recorded to monitor potential fluctuations in vigilance. In spite of the successful emotion induction, as demonstrated by stimulus ratings and a memory-facilitating effect of negative emotionality, none of the resting-state measures was differentially affected by picture valence. In conclusion, resting-state network connectivity as well as the amygdala's low-frequency oscillations appear to be unaffected by preceding exposure to widely used emotionally aversive visual stimuli in healthy young adults. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Time- and Space-Order Effects in Timed Discrimination of Brightness and Size of Paired Visual Stimuli

    Science.gov (United States)

    Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake

    2012-01-01

    Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…

  5. Roll motion stimuli : sensory conflict, perceptual weighting and motion sickness

    NARCIS (Netherlands)

    Graaf, B. de; Bles, W.; Bos, J.E.

    1998-01-01

    In an experiment with seventeen subjects, interactions of visual roll motion stimuli and vestibular body tilt stimuli were examined in determining the subjective vertical. Interindividual differences in weighting the visual information were observed, but in general visual and vestibular responses

  6. Temporal attention for visual food stimuli in restrained eaters.

    Science.gov (United States)

    Neimeijer, Renate A M; de Jong, Peter J; Roefs, Anne

    2013-05-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require less attentional resources to enter awareness. Once a food stimulus has captured attention, it may be preferentially processed and granted prioritized access to limited cognitive resources. This might help explain why restrained eaters often fail in their attempts to restrict their food intake. A Rapid Serial Visual Presentation task consisting of dual and single target trials with food and neutral pictures as targets and/or distractors was administered to restrained (n=40) and unrestrained (n=40) eaters to study temporal attentional bias. Results indicated that (1) food cues did not diminish the attentional blink in restrained eaters when presented as the second target; (2) specifically restrained eaters showed an interference effect of identifying food targets on the identification of preceding neutral targets; (3) for both restrained and unrestrained eaters, food cues enhanced the attentional blink; (4) specifically in restrained eaters, food distractors elicited an attentional blink in the single target trials. In restrained eaters, food cues get prioritized access to limited cognitive resources, even if this processing priority interferes with their current goals. This temporal attentional bias for food stimuli might help explain why restrained eaters typically have difficulties maintaining their diet rules. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Iris Image Classification Based on Hierarchical Visual Codebook.

    Science.gov (United States)

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in a central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.
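
    The coarse-to-fine coding strategy of a vocabulary tree, which is one half of the HVC idea (the LLC half is omitted here), can be sketched as a two-level k-means tree. This is a minimal toy illustration with assumed 2-D descriptors and a deterministic farthest-point initialization, not the paper's implementation:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(pts):
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

def kmeans(points, k, iters=20):
    """Tiny Lloyd's k-means with deterministic farthest-point initialization."""
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                centers[j] = centroid(members)
    return centers, assign

def build_tree(points, branch=2):
    """Two-level vocabulary tree: coarse k-means, then k-means inside each cell."""
    coarse, assign = kmeans(points, branch)
    fine = [kmeans([p for p, a in zip(points, assign) if a == j], branch)[0]
            for j in range(branch)]
    return coarse, fine

def encode(p, coarse, fine):
    """Coarse-to-fine lookup: a descriptor's visual word is its path in the tree."""
    j = min(range(len(coarse)), key=lambda c: dist2(p, coarse[c]))
    f = min(range(len(fine[j])), key=lambda c: dist2(p, fine[j][c]))
    return (j, f)
```

    Encoding a descriptor costs only `branch` comparisons per level instead of a search over all leaf words, which is what makes the coarse-to-fine strategy attractive for large codebooks.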

  8. Heart rate reactivity associated to positive and negative food and non-food visual stimuli.

    Science.gov (United States)

    Kuoppa, Pekka; Tarvainen, Mika P; Karhunen, Leila; Narvainen, Johanna

    2016-08-01

    Using food as stimuli is known to cause multiple psychophysiological reactions. Heart rate variability (HRV) is a common tool for assessing physiological reactions of the autonomic nervous system. However, findings on HRV related to food stimuli have not been consistent. In this paper, rapid changes in HRV related to positive and negative food and non-food visual stimuli are investigated. The electrocardiogram (ECG) was measured from 18 healthy females while they were stimulated with the pictures. Subjects also filled in the Three-Factor Eating Questionnaire to determine their eating behavior. The inter-beat-interval time series and the HRV parameters were extracted from the ECG. Rapid changes in the HRV parameters were studied by calculating the change from the baseline value (10 s window before the stimulus) to the value after stimulus onset (10 s window during the stimulus). A paired t-test showed a significant difference between positive and negative food pictures but not between positive and negative non-food pictures. All the HRV parameters decreased for positive food pictures, while they stayed the same or increased slightly for negative food pictures. The eating behavior characteristic of cognitive restraint was negatively correlated with HRV parameters that describe the decrease of heart rate.
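
    The windowing described above, a baseline value from a 10 s pre-stimulus window compared with the value from a 10 s window during the stimulus, can be sketched as follows. This is my own minimal illustration of that comparison, using mean RR interval and SDNN as example parameters, not the authors' analysis pipeline:

```python
import statistics

def window_rr(beat_times, t0, t1):
    """Inter-beat (RR) intervals in seconds for beats falling inside [t0, t1)."""
    beats = [t for t in beat_times if t0 <= t < t1]
    return [b - a for a, b in zip(beats, beats[1:])]

def hrv_change(beat_times, onset, win=10.0):
    """Change in simple HRV parameters from the pre-stimulus baseline window
    to the equally long window starting at stimulus onset."""
    base = window_rr(beat_times, onset - win, onset)
    stim = window_rr(beat_times, onset, onset + win)
    return {
        "mean_rr": statistics.mean(stim) - statistics.mean(base),
        "sdnn": statistics.stdev(stim) - statistics.stdev(base),
    }
```

    Feeding in beat times from an ECG R-peak detector gives, per stimulus onset, one change score per HRV parameter, which is the quantity the t-tests above compare across picture categories.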

  9. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  10. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    Science.gov (United States)

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using a game-based visual field test, 'Caspar's Castle', at four retinal locations 12.7° (N = 118) from fixation. Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined, along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size decreased with increasing age. Intrasubject variability and intersubject variability were inversely related to age. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
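
    An up/down staircase of the kind referred to above can be sketched in a few lines. The 1-up/1-down rule, the fixed step size, and the reversal-averaging rule below are generic illustrative choices, not the Caspar's Castle implementation:

```python
def run_staircase(respond, start, step, floor, n_reversals=8):
    """1-up/1-down staircase: shrink the stimulus after a 'seen' response,
    grow it after a 'missed' one; the threshold is estimated as the mean
    stimulus size at the reversal points."""
    size = start
    reversals = []
    last_direction = 0  # +1 growing, -1 shrinking, 0 before the first trial
    while len(reversals) < n_reversals:
        direction = -1 if respond(size) else +1
        if last_direction and direction != last_direction:
            reversals.append(size)  # response pattern flipped: a reversal
        last_direction = direction
        size = max(floor, size + direction * step)
    return sum(reversals) / len(reversals)

# Deterministic simulated observer (arbitrary units) with a true threshold of 5
threshold = run_staircase(lambda s: s >= 5, start=20, step=1, floor=1)
```

    With a real observer, `respond` would present the stimulus and collect the child's response; a 1-up/1-down rule converges on the size seen about 50% of the time.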

  11. Schizophrenia spectrum participants have reduced visual contrast sensitivity to chromatic (red/green) and luminance (light/dark) stimuli: new insights into information processing, visual channel function and antipsychotic effects

    Directory of Open Access Journals (Sweden)

    Kristin Suzanne Cadenhead

    2013-08-01

    Background: Individuals with schizophrenia spectrum diagnoses have deficient visual information processing as assessed by a variety of paradigms including visual backward masking, motion perception and visual contrast sensitivity (VCS). In the present study, the VCS paradigm was used to investigate potential differences in magnocellular (M) versus parvocellular (P) channel function that might account for the observed information processing deficits of schizophrenia spectrum patients. Specifically, VCS for near-threshold luminance (black/white) stimuli is known to be governed primarily by the M channel, while VCS for near-threshold chromatic (red/green) stimuli is governed by the P channel. Methods: VCS for luminance and chromatic stimuli (counterphase-reversing sinusoidal gratings, 1.22 c/deg, 8.3 Hz) was assessed in 53 patients with schizophrenia (including 5 off antipsychotic medication), 22 individuals diagnosed with schizotypal personality disorder and 53 healthy comparison subjects. Results: Schizophrenia spectrum groups demonstrated reduced VCS in both conditions relative to normals, and there was no significant group by condition interaction effect. Post-hoc analyses suggest that it was the patients with schizophrenia on antipsychotic medication as well as SPD participants who accounted for the deficits in the luminance condition. Conclusions: These results demonstrate visual information processing deficits in schizophrenia spectrum populations but do not support the notion of selective abnormalities in the function of subcortical channels as suggested by previous studies. Further work is needed in a longitudinal design to assess VCS as a vulnerability marker for psychosis as well as the effect of antipsychotic agents on performance in schizophrenia spectrum populations.

  12. The Effect of Visual Stimuli on Stability and Complexity of Postural Control

    Directory of Open Access Journals (Sweden)

    Haizhen Luo

    2018-02-01

    Visual input can benefit balance control or increase postural sway, and the effect of visual stimuli on postural stability and its underlying mechanism are far from fully understood. In this study, the effect of different visual inputs on stability and complexity of postural control was examined by analyzing the mean velocity (MV), SD, and fuzzy approximate entropy (fApEn) of the center of pressure (COP) signal during quiet upright standing. We designed five visual exposure conditions: eyes-closed, eyes-open (EO), and three virtual reality (VR) scenes (VR1–VR3). The VR scenes were a limited field view of an optokinetic drum rotating around the yaw (VR1), pitch (VR2), and roll (VR3) axes, respectively. Sixteen healthy subjects were involved in the experiment, and their COP trajectories were assessed from the force plate data. MV, SD, and fApEn of the COP in the anterior–posterior (AP) and medial–lateral (ML) directions were calculated. Two-way analysis of variance with repeated measures was conducted to test statistical significance. We found that all three parameters obtained their lowest values in the EO condition and their highest in the VR3 condition. We also found that the active neuromuscular intervention, indicated by fApEn, in response to changing visual exposure conditions was more adaptive in the AP direction, and that the stability, indicated by SD, in the ML direction reflected the changes of visual scenes. MV was found to capture both instability and active neuromuscular control dynamics. It seemed that the three parameters provided complementary information about postural control in the immersive virtual environment.

  13. Deep Learning Predicts Correlation between a Functional Signature of Higher Visual Areas and Sparse Firing of Neurons

    Directory of Open Access Journals (Sweden)

    Chengxu Zhuang

    2017-10-01

    Visual information in the visual cortex is processed in a hierarchical manner. Recent studies show that higher visual areas, such as V2, V3, and V4, respond more vigorously to images with naturalistic higher-order statistics than to images lacking them. This property is a functional signature of higher areas, as it is much weaker or even absent in the primary visual cortex (V1). However, the mechanism underlying this signature remains elusive. We studied this problem using computational models. In several typical hierarchical visual models, including AlexNet, VggNet, and SHMAX, this signature was found to be prominent in higher layers but much weaker in lower layers. By changing both the model structure and the experimental settings, we found that the signature strongly correlated with sparse firing of units in higher layers but not with any other factors, including model structure, training algorithm (supervised or unsupervised), receptive field size, and properties of the training stimuli. The results suggest an important role of sparse neuronal activity underlying this special feature of higher visual areas.

  14. Mastering algebra retrains the visual system to perceive hierarchical structure in equations.

    Science.gov (United States)

    Marghetis, Tyler; Landy, David; Goldstone, Robert L

    2016-01-01

    Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system, in particular object-based attention, is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.

  15. Brain processing of visual sexual stimuli in healthy men: a functional magnetic resonance imaging study.

    Science.gov (United States)

    Mouras, Harold; Stoléru, Serge; Bittoun, Jacques; Glutron, Dominique; Pélégrini-Issac, Mélanie; Paradis, Anne-Lise; Burnod, Yves

    2003-10-01

    The brain plays a central role in sexual motivation. To identify cerebral areas whose activation was correlated with sexual desire, eight healthy male volunteers were studied with functional magnetic resonance imaging (fMRI). Visual stimuli were sexually stimulating photographs (S condition) and emotionally neutral photographs (N condition). Subjective responses pertaining to sexual desire were recorded after each condition. To image the entire brain, separate runs focused on the upper and the lower parts of the brain. Statistical Parametric Mapping was used for data analysis. Subjective ratings confirmed that sexual pictures effectively induced sexual arousal. In the S condition compared to the N condition, a group analysis conducted on the upper part of the brain demonstrated an increased signal in the parietal lobes (superior parietal lobules, left intraparietal sulcus, left inferior parietal lobule, and right postcentral gyrus), the right parietooccipital sulcus, the left superior occipital gyrus, and the precentral gyri. In addition, a decreased signal was recorded in the right posterior cingulate gyrus and the left precuneus. In individual analyses conducted on the lower part of the brain, an increased signal was found in the right and/or left middle occipital gyrus in seven subjects, and in the right and/or left fusiform gyrus in six subjects. In conclusion, fMRI makes it possible to identify brain responses to visual sexual stimuli. Among activated regions in the S condition, parietal areas are known to be involved in attentional processes directed toward motivationally relevant stimuli, while frontal premotor areas have been implicated in motor preparation and motor imagery. Further work is needed to identify those specific features of the neural responses that distinguish sexual desire from other emotional and motivational states.

  16. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    Science.gov (United States)

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP signal based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED screen displays high-frequency waveforms with a better refresh rate. In this study, we present high-frequency sine-wave simple and rhythmic patterns with a low THD rate via LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimuli patterns, 3 simple (repetition of one of the above frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different sequences, e.g., P25-30-35), were chosen. A hardware setup with a low THD rate was designed to present the stimuli. Results: Classification accuracy was above 90% for both CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic patterns group with a low THD rate showed a higher accuracy rate (99.24%) than the simple patterns group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features. Subjective fatigue (VAS) was slightly but not significantly lower for the rhythmic patterns [3.85 ± 2.13] compared to the simple patterns group [3.96 ± 2.21] (P = 0.63). The rhythmic group also had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual-pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had similarly high accuracy rates. Rhythmic stimuli patterns showed an insignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli merit further research for human SSVEP-BCI studies.
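    The CCA-based detection the abstract reports can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the channel count, window length, and toy signal are assumptions; the standard trick of correlating EEG against sine/cosine references at each candidate frequency is what the code shows.

    ```python
    import numpy as np

    def cca_max_corr(X, Y):
        """Largest canonical correlation between two data sets (samples x features)."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Qx, _ = np.linalg.qr(X)
        Qy, _ = np.linalg.qr(Y)
        # Singular values of Qx'Qy are the canonical correlations.
        s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
        return s[0]

    def detect_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
        """Pick the stimulation frequency whose sin/cos reference best matches the EEG."""
        t = np.arange(eeg.shape[0]) / fs
        scores = []
        for f in candidate_freqs:
            refs = [np.sin(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
            refs += [np.cos(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
            scores.append(cca_max_corr(eeg, np.column_stack(refs)))
        return candidate_freqs[int(np.argmax(scores))]

    # Toy check: a noisy 30 Hz oscillation on 4 channels is classified as 30 Hz.
    fs = 250
    t = np.arange(fs) / fs  # 1 s window
    rng = np.random.default_rng(0)
    eeg = np.column_stack([np.sin(2 * np.pi * 30 * t) + 0.5 * rng.standard_normal(fs)
                           for _ in range(4)])
    print(detect_ssvep(eeg, fs, [25, 30, 35]))  # -> 30
    ```

    The same scoring loop generalizes to the 3-sequence patterns by concatenating per-segment references, which is one plausible reading of how the patterned stimuli could be scored.
    
    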

  17. Phylo-mLogo: an interactive and hierarchical multiple-logo visualization tool for alignment of many sequences

    Directory of Open Access Journals (Sweden)

    Lee DT

    2007-02-01

    Full Text Available Background: When aligning several hundreds or thousands of sequences, such as epidemic virus sequences or homologous/orthologous sequences of some large gene families, to reconstruct the epidemiological history or their phylogenies, how to analyze and visualize the alignment results of many sequences has become a new challenge for computational biologists. Although there are several tools available for visualization of very long sequence alignments, few of them are applicable to alignments of many sequences. Results: A multiple-logo alignment visualization tool, called Phylo-mLogo, is presented in this paper. Phylo-mLogo calculates the variabilities and homogeneities of alignment sequences by base frequencies or entropies. Different from the traditional representations of sequence logos, Phylo-mLogo not only displays the global logo patterns of the whole alignment of multiple sequences, but also demonstrates their local homologous logos for each clade hierarchically. In addition, Phylo-mLogo also allows the user to focus only on the analysis of some important, structurally or functionally constrained sites in the alignment selected by the user or by built-in automatic calculation. Conclusion: With Phylo-mLogo, the user can symbolically and hierarchically visualize hundreds of aligned sequences simultaneously and easily check the changes of their amino acid sites when analyzing many homologous/orthologous or influenza virus sequences. More information on Phylo-mLogo can be found at URL http://biocomp.iis.sinica.edu.tw/phylomlogo.

  18. Brain Activation by Visual Food-Related Stimuli and Correlations with Metabolic and Hormonal Parameters: A fMRI Study

    NARCIS (Netherlands)

    Jakobsdottir, S.; de Ruiter, M.B.; Deijen, J.B.; Veltman, D.J.; Drent, M.L.

    2012-01-01

    Regional brain activity in 15 healthy, normal weight males during processing of visual food stimuli in a satiated and a hungry state was examined and correlated with neuroendocrine factors known to be involved in hunger and satiated states. Two functional Magnetic Resonance Imaging (fMRI) sessions

  19. Gender differences in the processing of standard emotional visual stimuli: integrating ERP and fMRI results

    Science.gov (United States)

    Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin

    2005-04-01

    A comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotion recognition as well as to follow the time sequence at millisecond-range resolution. The effect of activation upon visual stimuli from the International Affective Picture System (IAPS) was examined in the two genders. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP were employed in an event-related study. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but source localization and timing are limited by the ill-posed 'inverse' problem. We investigated the ERP source reconstruction problem in this study using an fMRI constraint. We chose ICA as a pre-processing step of ERP source reconstruction to exclude artifacts and provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.

  20. Comparison of the influence of stimuli color on Steady-State Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Richard Junior Manuel Godinez Tello

    Full Text Available Introduction: The main idea of a traditional Steady-State Visually Evoked Potentials BCI (SSVEP-BCI) is the activation of commands through gaze control. For this purpose, the retina of the eye is excited by a stimulus at a certain frequency. Several studies have shown effects related to different kinds of stimuli, frequencies, window lengths, and techniques of feature extraction and classification. So far, none of the previous studies has compared the performance of stimulus colors delivered through LED technology. This study addresses precisely this important aspect and is a useful contribution to the topic of SSVEP-BCIs. Additionally, the performance of different colors at different frequencies and the visual comfort were evaluated in each case. Methods: LEDs of four different colors (red, green, blue, and yellow) flickering at four distinct frequencies (8, 11, 13, and 15 Hz) were used. Twenty subjects were distributed into two groups performing different protocols. The Multivariate Synchronization Index (MSI) was the technique adopted as feature extractor. Results: Accuracy was gradually enhanced with increasing time-window length. From our observations, the red color provides, at most frequencies, both the highest accuracy and the highest Information Transfer Rate (ITR) for detection of SSVEP. Conclusion: Although the red color presented a higher ITR, it turned out to be the least comfortable one and can even elicit epileptic responses according to the literature. For this reason, the green color is suggested as the best choice according to the proposed rules. In addition, this color has shown itself to be safe and accurate for an SSVEP-BCI.

  1. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

    Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and “region of interest” analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  2. Heads First: Visual Aftereffects Reveal Hierarchical Integration of Cues to Social Attention.

    Directory of Open Access Journals (Sweden)

    Sarah Cooney

    Full Text Available Determining where another person is attending is an important skill for social interaction that relies on various visual cues, including the turning direction of the head and body. This study reports a novel high-level visual aftereffect that addresses the important question of how these sources of information are combined in gauging social attention. We show that adapting to images of heads turned 25° to the right or left produces a perceptual bias in judging the turning direction of subsequently presented bodies. In contrast, little to no change in the judgment of head orientation occurs after adapting to extremely oriented bodies. The unidirectional nature of the aftereffect suggests that cues from the human body signaling social attention are combined in a hierarchical fashion and is consistent with evidence from single-cell recording studies in nonhuman primates showing that information about head orientation can override information about body posture when both are visible.

  3. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    Science.gov (United States)

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
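    The additive linear model the abstract favors can be written down directly: detection as an intercept plus weighted luminance and saturation contrasts. The sketch below fits such a model by ordinary least squares on hypothetical data; the variable names, sample size, and generating weights are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    # Hypothetical per-stimulus data: luminance contrast, saturation contrast,
    # and a measured detection score (e.g. proportion of observers detecting it).
    rng = np.random.default_rng(1)
    lum = rng.uniform(0, 1, 50)
    sat = rng.uniform(0, 1, 50)
    detect = 0.1 + 0.5 * lum + 0.3 * sat + 0.05 * rng.standard_normal(50)

    # Additive linear model: detection ~ b0 + b1*luminance + b2*saturation,
    # fitted by ordinary least squares.
    design = np.column_stack([np.ones_like(lum), lum, sat])
    coef, *_ = np.linalg.lstsq(design, detect, rcond=None)
    b0, b1, b2 = coef
    print(f"intercept={b0:.2f}, luminance weight={b1:.2f}, saturation weight={b2:.2f}")
    ```

    Comparing the fitted weights b1 and b2 across viewing conditions is one way to check the "modest yet consistent" predictor strength the abstract describes.
    
    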

  4. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Tae [The Catholic Magnetic Resonance Research Center, Seoul (Korea, Republic of)

    1999-12-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by means of both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual method and the auditory method. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.
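    The abstract does not state its exact lateralization formula. A common convention in such studies, used as an assumption in the sketch below, is LI = (L - R) / (L + R) over activated-voxel counts, with positive values indicating left-hemisphere dominance.

    ```python
    def lateralization_index(left_count, right_count):
        """Common laterality convention (an assumption, not the paper's stated
        formula): LI = (L - R) / (L + R) over activated-voxel counts.
        LI > 0 -> left-hemisphere dominant; LI < 0 -> right-hemisphere dominant."""
        total = left_count + right_count
        if total == 0:
            raise ValueError("no activated voxels in either hemisphere")
        return (left_count - right_count) / total

    # Example: 120 activated voxels in the left IFG vs. 40 in the right
    # gives a clearly left-dominant index.
    print(lateralization_index(120, 40))  # -> 0.5
    ```

    Computing this index separately for the auditory-method and visual-method activation maps gives the per-method values the study compares.
    
    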

  5. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    International Nuclear Information System (INIS)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun; Kim, Tae

    1999-01-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by means of both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual method and the auditory method. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.

  6. Divisive normalization and neuronal oscillations in a single hierarchical framework of selective visual attention

    Directory of Open Access Journals (Sweden)

    Jorrit Steven Montijn

    2012-05-01

    Full Text Available In divisive normalization models of covert attention, spike rate modulations are commonly used as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention also increases the synchronization of neuronal oscillations, particularly in gamma-band frequencies (25 to 100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. However, a simple hierarchical cascade of normalization models simulating different cortical areas leads to signal degradation and a loss of discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate oscillatory phase entrainment into our model, a mechanism previously proposed as the communication-through-coherence (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation.
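    The divisive-normalization building block of such models can be sketched in a few lines. This follows the generic form of the normalization model of attention (stimulus drive multiplied by an attentional gain, divided by the pooled drive plus a constant); it is a minimal sketch of that standard component, not the authors' full HNO model, and the gain values are illustrative.

    ```python
    import numpy as np

    def normalized_response(drive, attention, sigma=0.1):
        """Divisive normalization with attentional gain: each unit's stimulus
        drive is multiplied by an attention field, then divided by the pooled
        (attention-weighted) drive of the population plus a semi-saturation
        constant sigma. A generic sketch of the standard model, not the HNO model."""
        excitatory = drive * attention
        suppressive = excitatory.sum() + sigma
        return excitatory / suppressive

    # Two stimuli of equal contrast; attending to the first (gain 2 vs. 1)
    # biases the normalized responses in its favor.
    drive = np.array([1.0, 1.0])
    resp = normalized_response(drive, attention=np.array([2.0, 1.0]))
    print(resp)  # the attended unit responds more strongly
    ```

    Cascading such stages over time, as the abstract describes, is where the reported signal degradation arises, motivating the oscillatory phase-entrainment extension.
    
    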

  7. Robust spike sorting of retinal ganglion cells tuned to spot stimuli.

    Science.gov (United States)

    Ghahari, Alireza; Badea, Tudor C

    2016-08-01

    We propose an automatic spike sorting approach for the data recorded from a microelectrode array during visual stimulation of wild type retinas with tiled spot stimuli. The approach first detects individual spikes per electrode by their signature local minima. With the mixture probability distribution of the local minima estimated afterwards, it applies a minimum-squared-error clustering algorithm to sort the spikes into different clusters. A template waveform for each cluster per electrode is defined, and a number of reliability tests are performed on it and its corresponding spikes. Finally, a divisive hierarchical clustering algorithm is used to deal with the correlated templates per cluster type across all the electrodes. According to the measures of performance of the spike sorting approach, it is robust even in the cases of recordings with low signal-to-noise ratio.
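    The first two stages of the pipeline (detection of spikes at their signature local minima, then minimum-squared-error clustering) can be sketched as follows. This is a simplified illustration on a synthetic trace, not the authors' implementation; the threshold, amplitudes, and one-dimensional clustering are assumptions made to keep the example small.

    ```python
    import numpy as np

    def detect_spikes(trace, threshold):
        """Indices of local minima below a (negative) threshold -- the spike
        signatures used as the first detection step."""
        below = trace < threshold
        local_min = (trace[1:-1] < trace[:-2]) & (trace[1:-1] < trace[2:])
        return np.where(below[1:-1] & local_min)[0] + 1

    def kmeans_1d(values, k=2, iters=20):
        """Minimal minimum-squared-error clustering of spike trough amplitudes."""
        centers = np.linspace(values.min(), values.max(), k)
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = values[labels == j].mean()
        return labels, centers

    # Toy trace: two units with distinct spike amplitudes on a noisy baseline.
    rng = np.random.default_rng(2)
    trace = 0.05 * rng.standard_normal(2000)
    for i in range(100, 2000, 200):
        trace[i] = -1.0 if (i // 200) % 2 == 0 else -2.0

    idx = detect_spikes(trace, threshold=-0.5)
    labels, centers = kmeans_1d(trace[idx], k=2)
    print(len(idx), np.sort(centers))  # 10 spikes; cluster centers near -2 and -1
    ```

    In the full approach, the per-cluster mean waveform would then serve as the template on which the reliability tests and the cross-electrode hierarchical clustering operate.
    
    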

  8. Visual sexual stimuli – cue or reward? A key for interpreting brain imaging studies on human sexual behaviors

    Directory of Open Access Journals (Sweden)

    Mateusz Gola

    2016-08-01

    Full Text Available There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS) for human sexuality studies, including the emerging field of research on compulsive sexual behaviors. A central question in this field is whether behaviors such as extensive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned (cue) and unconditioned (reward) stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as cues (conditioned stimuli) or rewards (unconditioned stimuli). Here we present our own perspective, which is that in most laboratory settings VSS play the role of reward (unconditioned stimuli), as evidenced by: 1. the experience of pleasure while watching VSS, possibly accompanied by genital reaction; 2. reward-related brain activity correlated with these pleasurable feelings in response to VSS; 3. a willingness to exert effort to view VSS, similarly as for other rewarding stimuli such as money; and/or 4. conditioning for cues (CS) predictive of VSS. We hope that this perspective paper will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  9. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    Science.gov (United States)

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regards to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  10. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    Science.gov (United States)

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  11. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly those originating from behind the participant, affect audiovisual integration, using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in front of or behind the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.

  12. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors.

    Science.gov (United States)

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play a role of reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similarly as for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  13. Olfactory cues are subordinate to visual stimuli in a neotropical generalist weevil.

    Directory of Open Access Journals (Sweden)

    Fernando Otálora-Luna

    Full Text Available The tropical root weevil Diaprepes abbreviatus is a major pest of multiple crops in the Caribbean Islands and has become a serious constraint to citrus production in the United States. Recent work has identified host and conspecific volatiles that mediate host- and mate-finding by D. abbreviatus. The interaction of light, color, and odors has not been studied in this species. The responses of male and female D. abbreviatus to narrow bandwidths of visible light emitted by LEDs offered alone and in combination with olfactory stimuli were studied in a specially-designed multiple choice arena combined with a locomotion compensator. Weevils were more attracted to wavelengths close to green and yellow compared with blue or ultraviolet, but preferred red and darkness over green. Additionally, dim green light was preferred over brighter green. Adult weevils were also attracted to the odor of its citrus host + conspecifics. However, the attractiveness of citrus + conspecific odors disappeared in the presence of a green light. Photic stimulation induced males but not females to increase their speed. In the presence of light emitted by LEDs, turning speed decreased and path straightness increased, indicating that weevils tended to walk less tortuously. Diaprepes abbreviatus showed a hierarchy between chemo- and photo-taxis in the series of experiments presented herein, where the presence of the green light abolished upwind anemotaxis elicited by the pheromone + host plant odor. Insight into the strong responses to visual stimuli of chemically stimulated insects may be provided when the amount of information supplied by vision and olfaction is compared, as the information transmission capacity of compound eyes is estimated to be several orders of magnitude higher compared with the olfactory system. Subordination of olfactory responses by photic stimuli should be considered in the design of strategies aimed at management of such insects.

  14. Generating Stimuli for Neuroscience Using PsychoPy

    OpenAIRE

    Peirce, Jonathan W.

    2009-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scrip...

  15. Development of visual cortical function in infant macaques: A BOLD fMRI study.

    Directory of Open Access Journals (Sweden)

    Tom J Van Grootel

    Full Text Available Functional brain development is not well understood. In the visual system, neurophysiological studies in nonhuman primates show quite mature neuronal properties near birth although visual function is itself quite immature and continues to develop over many months or years after birth. Our goal was to assess the relative development of two main visual processing streams, dorsal and ventral, using BOLD fMRI in an attempt to understand the global mechanisms that support the maturation of visual behavior. Seven infant macaque monkeys (Macaca mulatta) were repeatedly scanned, while anesthetized, over an age range of 102 to 1431 days. Large rotating checkerboard stimuli induced BOLD activation in visual cortices at early ages. Additionally we used static and dynamic Glass pattern stimuli to probe BOLD responses in primary visual cortex and two extrastriate areas: V4 and MT-V5. The resulting activations were analyzed with standard GLM and multivoxel pattern analysis (MVPA) approaches. We analyzed three contrasts: Glass pattern present/absent, static/dynamic Glass pattern presentation, and structured/random Glass pattern form. For both GLM and MVPA approaches, robust coherent BOLD activation appeared relatively late in comparison to the maturation of known neuronal properties and the development of behavioral sensitivity to Glass patterns. Robust differential activity to Glass pattern present/absent and dynamic/static stimulus presentation appeared first in V1, followed by V4 and MT-V5 at older ages; there was no reliable distinction between the two extrastriate areas. A similar pattern of results was obtained with the two analysis methods, although MVPA analysis showed reliable differential responses emerging at later ages than GLM. Although BOLD responses to large visual stimuli are detectable, our results with more refined stimuli indicate that global BOLD activity changes as behavioral performance matures. This reflects a hierarchical development of…

  16. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    Directory of Open Access Journals (Sweden)

    Joe Bathelt

    2017-10-01

    Full Text Available Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.

  17. Divisive normalization and neuronal oscillations in a single hierarchical framework of selective visual attention.

    Science.gov (United States)

    Montijn, Jorrit Steven; Klink, P Christaan; van Wezel, Richard J A

    2012-01-01

    Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25-100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension, allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate into our model a form of oscillatory phase entrainment previously proposed as the "communication-through-coherence" (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention.
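    The divisive-normalization computation that the model builds on can be sketched in a single-area form (in the spirit of the Reynolds-Heeger normalization model of attention; the mean-pooling rule, parameter names, and values below are simplifying assumptions, not the HNO model's implementation):

```python
import numpy as np

def normalized_response(stimulus_drive, attention_gain, sigma=0.1):
    """Attention multiplicatively scales each neuron's excitatory drive;
    the population response is that drive divided by the pooled
    (suppressive) drive plus a semi-saturation constant sigma."""
    excitatory = attention_gain * stimulus_drive
    suppressive = excitatory.mean()  # simple mean pooling over the population
    return excitatory / (suppressive + sigma)

# Two neurons with identical stimulus drive; attention is cued to the first.
resp = normalized_response(np.array([1.0, 1.0]), np.array([2.0, 1.0]))
```

    With equal inputs, the attended unit's response rises to 1.25 while the unattended unit is suppressed to 0.625: the push-pull signature of attentional modulation under divisive normalization.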

  18. Facilitation of responses by task-irrelevant complex deviant stimuli.

    Science.gov (United States)

    Schomaker, J; Meeter, M

    2014-05-01

    Novel stimuli reliably attract attention, suggesting that novelty may disrupt performance when it is task-irrelevant. However, under certain circumstances novel stimuli can also elicit a general alerting response having beneficial effects on performance. In a series of experiments we investigated whether different aspects of novelty--stimulus novelty, contextual novelty, surprise, deviance, and relative complexity--lead to distraction or facilitation. We used a version of the visual oddball paradigm in which participants responded to an occasional auditory target. Participants responded faster to this auditory target when it occurred during the presentation of novel visual stimuli than of standard stimuli, especially at SOAs of 0 and 200 ms (Experiment 1). Facilitation was absent for both infrequent simple deviants and frequent complex images (Experiment 2). However, repeated complex deviant images did facilitate responses to the auditory target at the 200 ms SOA (Experiment 3). These findings suggest that task-irrelevant deviant visual stimuli can facilitate responses to an unrelated auditory target in a short 0-200 millisecond time-window after presentation. This only occurs when the deviant stimuli are complex relative to standard stimuli. We link our findings to the novelty P3, which is generated under the same circumstances, and to the adaptive gain theory of the locus coeruleus-norepinephrine system (Aston-Jones and Cohen, 2005), which may explain the timing of the effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    Science.gov (United States)

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. Our aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  20. Steady-state VEP responses to uncomfortable stimuli.

    Science.gov (United States)

    O'Hare, Louise

    2017-02-01

    Periodic stimuli, such as op-art, can evoke a range of aversive sensations encompassed by the term visual discomfort. Illusory motion effects are elicited by fixational eye movements, but the cortex might also contribute to effects of discomfort. To investigate this possibility, steady-state visually evoked responses (SSVEPs) to contrast-matched op-art-based stimuli were measured at the same time as discomfort judgements. On average, discomfort decreased with increasing spatial frequency of the pattern. In contrast, SSVEP amplitude peaked at midrange spatial frequencies. Like the discomfort judgements, SSVEP responses to the highest spatial frequencies were lowest in amplitude, but the relationship between discomfort and SSVEP breaks down for the lower spatial frequency stimuli. This was not explicable by gross eye movements as measured using the facial electrodes. There was a weak relationship between the peak SSVEP responses and discomfort judgements for some stimuli, suggesting that discomfort can be explained in part by electrophysiological responses measured at the level of the cortex. However, there is a breakdown of this relationship in the case of lower spatial frequency stimuli, which remains unexplained. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  1. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    Science.gov (United States)

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
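    The model's core assumption lends itself to a small Monte Carlo sketch: over an exposure of duration t, the number of tentative categorizations into category j is Poisson-distributed with mean v(i, j) × t, and the overt response is the category with the most counts. The rate values, tie-breaking rule, and function names below are illustrative assumptions:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's multiplication method); lam > 0."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_accuracy(rates, duration, n_trials=2000, seed=1):
    """Estimate P(report category 0) when category 0 is correct: each
    category j accumulates Poisson(rates[j] * duration) tentative
    categorizations, and the category with the most counts wins
    (ties broken at random)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        counts = [poisson_sample(v * duration, rng) for v in rates]
        best = max(counts)
        winners = [j for j, c in enumerate(counts) if c == best]
        if rng.choice(winners) == 0:
            correct += 1
    return correct / n_trials
```

    With categorization rates of 5 versus 1 per unit time, accuracy approaches ceiling at long exposures and falls toward chance as the exposure shrinks, which is the time-course behavior the model is built to capture.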

  2. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  3. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Science.gov (United States)

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  4. Visual stimuli for the P300 brain-computer interface: a comparison of white/gray and green/blue flicker matrices.

    Science.gov (United States)

    Takano, Kouji; Komatsu, Tomoaki; Hata, Naoki; Nakajima, Yasoichi; Kansaku, Kenji

    2009-08-01

    The white/gray flicker matrix has been used as a visual stimulus for the so-called P300 brain-computer interface (BCI), but the white/gray flash stimuli might induce discomfort. In this study, we investigated the effectiveness of green/blue flicker matrices as visual stimuli. Ten able-bodied, non-trained subjects performed Alphabet Spelling (Japanese Alphabet: Hiragana) using an 8 x 10 matrix with three types of intensification/rest flicker combinations (L, luminance; C, chromatic; LC, luminance and chromatic); both online and offline performances were evaluated. The accuracy rate under the online LC condition was 80.6%. Offline analysis showed that the LC condition was associated with significantly higher accuracy than was the L or C condition (Tukey-Kramer, p < 0.05). No significant difference was observed between L and C conditions. The LC condition, which used the green/blue flicker matrix was associated with better performances in the P300 BCI. The green/blue chromatic flicker matrix can be an efficient tool for practical BCI application.

  5. Impact of visual repetition rate on intrinsic properties of low frequency fluctuations in the visual network.

    Directory of Open Access Journals (Sweden)

    Yi-Chia Li

    Full Text Available BACKGROUND: Visual processing network is one of the functional networks which have been reliably identified to consistently exist in human resting brains. In our work, we focused on this network and investigated the intrinsic properties of low frequency (0.01-0.08 Hz) fluctuations (LFFs) during changes of visual stimuli. There were two main questions to be discussed in this study: intrinsic properties of LFFs regarding (1) interactions between visual stimuli and resting state; (2) impact of the repetition rate of visual stimuli. METHODOLOGY/PRINCIPAL FINDINGS: We analyzed scanning sessions that contained rest and visual stimuli at various repetition rates with a novel method. The method included three numerical approaches, involving ICA (Independent Component Analysis), fALFF (fractional Amplitude of Low Frequency Fluctuation), and Coherence, to respectively investigate the modulations of visual network pattern, low frequency fluctuation power, and interregional functional connectivity during changes of visual stimuli. We discovered that when resting state was replaced by visual stimuli, more areas were involved in visual processing, and both stronger low frequency fluctuations and higher interregional functional connectivity occurred in the visual network. With changes of visual repetition rate, the number of areas involved in visual processing, low frequency fluctuation power, and interregional functional connectivity in this network were also modulated. CONCLUSIONS/SIGNIFICANCE: Combining the results of prior literature with our discoveries, the intrinsic properties of LFFs in the visual network are altered not only by modulations of endogenous factors (eye-open or eye-closed condition; alcohol administration) and disordered behaviors (early blindness), but also by exogenous sensory stimuli (visual stimuli with various repetition rates). It demonstrates that the intrinsic properties of LFFs are valuable for representing physiological states of human brains.
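    Of the three measures, fALFF has the simplest definition: the spectral amplitude summed over the low-frequency band divided by the amplitude summed over the whole (non-DC) spectrum. A minimal sketch under that definition (band limits taken from the abstract; detrending and nuisance regression omitted; the function name is illustrative):

```python
import numpy as np

def falff(timeseries, tr, low=0.01, high=0.08):
    """Fractional ALFF: the fraction of total (non-DC) spectral amplitude
    that falls within the [low, high] Hz band, for data sampled every tr s."""
    x = np.asarray(timeseries, dtype=float)
    x = x - x.mean()                        # remove the DC offset
    freqs = np.fft.rfftfreq(len(x), d=tr)   # bin frequencies in Hz
    amp = np.abs(np.fft.rfft(x))
    band = (freqs >= low) & (freqs <= high)
    return amp[band].sum() / amp[1:].sum()  # amp[0] is the (zeroed) DC bin
```

    A pure 0.05 Hz oscillation sampled at TR = 2 s yields fALFF close to 1, whereas broadband noise yields roughly the band's share of the spectrum.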

  6. Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus

    Science.gov (United States)

    Lee, Jungah; Groh, Jennifer M.

    2014-01-01

    Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior. PMID:24454779
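    One purely illustrative way to sketch a read-out in which both the site and the level of SC activity contribute (an assumption for exposition, not the authors' algorithm): estimate location as the rate-weighted centroid of the neurons' preferred sites, while passing the summed activity level along as a second cue that a downstream stage could use for rate-coded auditory stimuli.

```python
def sc_readout(preferred_sites_deg, rates):
    """Return (site_estimate, level): the rate-weighted centroid of the
    neurons' preferred sites, plus the summed activity level of the
    population as a separate cue."""
    level = sum(rates)
    site_estimate = sum(x * r for x, r in zip(preferred_sites_deg, rates)) / level
    return site_estimate, level

# Map-like (circumscribed) activity peaking at the central site:
loc, level = sc_readout([-20.0, 0.0, 20.0], [0.2, 3.0, 0.2])
```

    For the map-like pattern above the centroid recovers the central site; for an open-ended rate code the centroid alone is ambiguous, and the accompanying level term is what would let a read-out distinguish locations, in the spirit of the hybrid scheme the abstract describes.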

  7. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question if these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. The impact of task demand on visual word recognition.

    Science.gov (United States)

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    Science.gov (United States)

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  10. l-Theanine and caffeine improve target-specific attention to visual stimuli by decreasing mind wandering: a human functional magnetic resonance imaging study.

    Science.gov (United States)

    Kahathuduwa, Chanaka N; Dhanasekara, Chathurika S; Chin, Shao-Hua; Davis, Tyler; Weerasinghe, Vajira S; Dassanayake, Tharaka L; Binks, Martin

    2018-01-01

    Oral intake of l-theanine and caffeine supplements is known to be associated with faster stimulus discrimination, possibly via improving attention to stimuli. We hypothesized that l-theanine and caffeine may be bringing about this beneficial effect by increasing attention-related neural resource allocation to target stimuli and decreasing deviation of neural resources to distractors. We used functional magnetic resonance imaging (fMRI) to test this hypothesis. Solutions of 200 mg of l-theanine, 160 mg of caffeine, their combination, or the vehicle (distilled water; placebo) were administered in a randomized 4-way crossover design to 9 healthy adult men. Sixty minutes after administration, a 20-minute fMRI scan was performed while the subjects performed a visual color stimulus discrimination task. l-Theanine and the l-theanine-caffeine combination resulted in faster responses to targets compared with placebo (∆ = 27.8 ms, P = .018 and ∆ = 26.7 ms, P = .037, respectively). l-Theanine was associated with decreased fMRI responses to distractor stimuli in brain regions that regulate visual attention, suggesting that l-theanine may be decreasing neural resource allocation to process distractors, thus allowing to attend to targets more efficiently. The l-theanine-caffeine combination was associated with decreased fMRI responses to target stimuli as compared with distractors in several brain regions that typically show increased activation during mind wandering. Factorial analysis suggested that l-theanine and caffeine seem to have a synergistic action in decreasing mind wandering. Therefore, our hypothesis that l-theanine and caffeine may decrease deviation of attention to distractors (including mind wandering), thus enhancing attention to target stimuli, was confirmed. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Visual attention and emotional reactions to negative stimuli: The role of age and cognitive reappraisal.

    Science.gov (United States)

    Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute

    2017-09-01

    Prominent life span theories of emotion propose that older adults attend less to negative emotional information and report less negative emotional reactions to the same information than younger adults do. Although parallel age differences in affective information processing and age differences in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity, using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older as compared with younger adults showed fixation patterns away from negative image content, while they reacted with greater negative emotions. The association between visual attention and emotional reactivity differed by age group and positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship only held for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    Science.gov (United States)

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear

  13. Sex differences in interactions between nucleus accumbens and visual cortex by explicit visual erotic stimuli: an fMRI study.

    Science.gov (United States)

    Lee, S W; Jeong, B S; Choi, J; Kim, J-W

    2015-01-01

    Men tend to have greater positive responses than women to explicit visual erotic stimuli (EVES). However, it remains unclear which brain network makes men more sensitive to EVES and which factors contribute to that network's activity. In this study, we aimed to assess the effect of sex differences on brain connectivity patterns elicited by EVES. We also investigated the association of testosterone with the brain connections that showed sex-difference effects. During functional magnetic resonance imaging scans, 14 males and 14 females were asked to view alternating blocks of pictures that were either erotic or non-erotic. Psychophysiological interaction analysis was performed to investigate the functional connectivity of the nucleus accumbens (NA) as it related to EVES. Men showed significantly greater EVES-specific functional connectivity between the right NA and the right lateral occipital cortex (LOC). In addition, right NA-right LOC network activity was positively correlated with the plasma testosterone level in men. Our results suggest that the reason men are sensitive to EVES is the increased interaction in the visual reward networks, which is modulated by their plasma testosterone level.

  14. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  15. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  16. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD.

    Science.gov (United States)

    Simões, Eunice N; Carvalho, Ana L Novais; Schmidt, Sergio L

    2018-04-01

    Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but that response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls. The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs), omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and the coefficient of variability (CofV = VRT / RT). The ADHD group exhibited higher values on all test variables. Discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. The discriminant equation classified ADHD with 76.3% accuracy. Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
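The five CPT summary variables named above can be computed directly from per-trial records; a minimal sketch, in which the function name and trial-record fields are hypothetical (not from the study):

```python
from statistics import mean, stdev

def cpt_metrics(trials):
    """Compute CPT summary variables from per-trial records.

    Each trial is a dict with:
      'target'    - True if a response was required
      'responded' - True if the subject responded
      'rt'        - reaction time in ms (None if no response)
    """
    targets = [t for t in trials if t['target']]
    nontargets = [t for t in trials if not t['target']]
    # Omission errors: missed targets (inattention domain)
    oe = sum(not t['responded'] for t in targets)
    # Commission errors: responses to non-targets (hyperactive/impulsive domain)
    ce = sum(t['responded'] for t in nontargets)
    rts = [t['rt'] for t in targets if t['responded']]
    rt = mean(rts)
    vrt = stdev(rts)        # variability of reaction time
    cofv = vrt / rt         # coefficient of variability (CofV = VRT / RT)
    return {'OE': oe, 'CE': ce, 'RT': rt, 'VRT': vrt, 'CofV': cofv}
```

In the study's design these five variables would be computed once per sensory modality (visual and auditory streams scored separately) before entering the discriminant analysis.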

  17. Evaluating a Bilingual Text-Mining System with a Taxonomy of Key Words and Hierarchical Visualization for Understanding Learner-Generated Text

    Science.gov (United States)

    Kong, Siu Cheung; Li, Ping; Song, Yanjie

    2018-01-01

    This study evaluated a bilingual text-mining system, which incorporated a bilingual taxonomy of key words and provided hierarchical visualization, for understanding learner-generated text in the learning management systems through automatic identification and counting of matching key words. A class of 27 in-service teachers studied a course…

  18. Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry (2015 GDR Vision meeting)

    OpenAIRE

    Mathôt, Sebastiaan; Heusden, Elle van; Stigchel, Stefan Van der

    2015-01-01

    Slides for: Mathôt, S., Van Heusden, E., & Van der Stigchel, S. (2015, Dec). Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry. Talk presented at the GDR Vision Meeting, Grenoble, France.

  19. Generating Stimuli for Neuroscience Using PsychoPy.

    Science.gov (United States)

    Peirce, Jonathan W

    2008-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
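PsychoPy itself renders stimuli on the GPU via OpenGL, but the luminance pattern of a canonical visual stimulus, a sinusoidal grating, can be sketched in plain NumPy. The function below is an illustrative stand-in, not PsychoPy code; its name and parameters are assumptions:

```python
import numpy as np

def grating(size=256, cycles=8.0, ori_deg=45.0, phase=0.0, contrast=1.0):
    """Luminance image of a sinusoidal grating, values in [-1, 1].

    size      - image width/height in pixels
    cycles    - spatial frequency (cycles per image)
    ori_deg   - orientation in degrees
    phase     - phase offset in radians
    contrast  - Michelson contrast in [0, 1]
    """
    # Coordinate grid normalised to [0, 1)
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(ori_deg)
    # Position along the grating's modulation axis
    d = x * np.cos(theta) + y * np.sin(theta)
    return contrast * np.sin(2 * np.pi * cycles * d + phase)

frame = grating()  # one frame; animate by advancing phase each screen refresh
```

In a PsychoPy script the equivalent stimulus would be drawn with a `visual.GratingStim` and presented by flipping the window each frame, with the library handling timing and hardware.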

  20. Freezing Behavior as a Response to Sexual Visual Stimuli as Demonstrated by Posturography

    Science.gov (United States)

    Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre

    2015-01-01

    Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a good canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared to the other conditions, as shown by significantly decreased standard deviations of (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underline the complexity of the motor correlates of sexual motivation, a canonical functional context for studying the motor correlates of motivated social interactions. PMID:25992571

  1. Hierarchical event selection for video storyboards with a case study on snooker video visualization.

    Science.gov (United States)

    Parry, Matthew L; Legg, Philip A; Chung, David H S; Griffiths, Iwan W; Chen, Min

    2011-12-01

    A video storyboard, which is a form of video visualization, summarizes the major events in a video using illustrative visualization. There are three main technical challenges in creating a video storyboard: (a) event classification, (b) event selection and (c) event illustration. Among these challenges, (a) is highly application-dependent and requires a significant amount of application-specific semantics to be encoded in a system or manually specified by users. This paper focuses on challenges (b) and (c). In particular, we present a framework for hierarchical event representation, and an importance-based selection algorithm for supporting the creation of a video storyboard from a video. We consider the storyboard to be an event summarization for the whole video, whilst each individual illustration on the board is also an event summarization but for a smaller time window. We utilized a 3D visualization template for depicting and annotating events in illustrations. To demonstrate the concepts and algorithms developed, we use snooker video visualization as a case study, because it has a concrete and agreeable set of semantic definitions for events and can make use of existing techniques of event detection and 3D reconstruction in a reliable manner. Nevertheless, most of our concepts and algorithms developed for challenges (b) and (c) can be applied to other application areas. © 2010 IEEE
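The core of challenge (b), importance-based event selection, can be sketched generically: given detected events scored for importance and a fixed number of storyboard panels, keep the highest-scoring events and lay them out in time order. This is a minimal illustration under assumed data shapes, not the authors' algorithm:

```python
def select_events(events, k):
    """Pick the k most important events, then restore chronological order.

    events - list of (start_time, importance, label) tuples
    k      - number of storyboard panels available
    """
    # Rank by importance, keep the top k for the panel budget
    top = sorted(events, key=lambda e: e[1], reverse=True)[:k]
    # Storyboard panels read left-to-right in time order
    return sorted(top)
```

In the paper's hierarchical setting the same budget-constrained selection would be applied per time window, so that each illustration summarizes its own window while the board summarizes the whole video.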

  2. Happiness takes you right: the effect of emotional stimuli on line bisection.

    Science.gov (United States)

    Cattaneo, Zaira; Lega, Carlotta; Boehringer, Jana; Gallucci, Marcello; Girelli, Luisa; Carbon, Claus-Christian

    2014-01-01

    Emotion recognition is mediated by a complex network of cortical and subcortical areas, with the two hemispheres likely being differently involved in processing positive and negative emotions. As results on valence-dependent hemispheric specialisation are quite inconsistent, we carried out three experiments with emotional stimuli, using a task sensitive to specific hemispheric processing. Participants were required to bisect visual lines that were delimited by emotional face flankers, or to haptically bisect rods while concurrently listening to emotional vocal expressions. We found that prolonged (but not transient) exposure to concurrent happy stimuli significantly shifted the bisection bias to the right compared to both sad and neutral stimuli, indexing a greater involvement of the left hemisphere in the processing of positively connoted stimuli. No differences between sad and neutral stimuli were observed across the experiments. In sum, our data provide consistent evidence in favour of a greater involvement of the left hemisphere in processing positive emotions and suggest that (prolonged) exposure to stimuli expressing happiness significantly affects the allocation of (spatial) attentional resources, regardless of the sensory (visual/auditory) modality in which the emotion is perceived and space is explored (visual/haptic).

  3. Generating stimuli for neuroscience using PsychoPy

    Directory of Open Access Journals (Sweden)

    Jonathan W Peirce

    2009-01-01

    Full Text Available PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.

  4. Selective attention to spatial and non-spatial visual stimuli is affected differentially by age: Effects on event-related brain potentials and performance data

    NARCIS (Netherlands)

    Talsma, D.; Kok, Albert; Ridderinkhof, K. Richard

    2006-01-01

    To assess selective attention processes in young and old adults, behavioral and event-related potential (ERP) measures were recorded. Streams of visual stimuli were presented from left or right locations (Experiment 1) or from a central location and comprising two different spatial frequencies

  5. Consistent phosphenes generated by electrical microstimulation of the visual thalamus. An experimental approach for thalamic visual neuroprostheses

    Directory of Open Access Journals (Sweden)

    Fivos ePanetsos

    2011-07-01

    Full Text Available Most work on visual prostheses has centred on developing retinal or cortical devices. However, when retinal implants are not feasible, neuroprostheses could be implanted in the lateral geniculate nucleus of the thalamus (LGN), the intermediate relay station of visual information from the retina to the visual cortex (V1). The objective of the present study was to determine the types of artificial stimuli that, when delivered to the visual thalamus, can generate reliable responses of the cortical neurons similar to those obtained when the eye perceives a visual image. Visual stimuli {S_i} were presented to one eye of an experimental animal and both the thalamic responses {RTh_i} and the cortical responses {RV1_i} to such stimuli were recorded. Electrical patterns {RTh_i*} resembling {RTh_i} were then injected into the visual thalamus to obtain cortical responses {RV1_i*} similar to {RV1_i}. Visually and electrically generated V1 responses were compared. Results: During the course of this work we (i) characterised the response of V1 neurons to visual stimuli according to response magnitude, duration, spiking rate and the distribution of interspike intervals; (ii) experimentally tested the dependence of V1 responses on stimulation parameters such as intensity, frequency, duration, etc., and determined the ranges of these parameters generating the desired cortical activity; (iii) identified similarities between V1 responses useful for comparing naturally and artificially generated neuronal activity; and (iv) by modifying the stimulation parameters, generated artificial V1 responses similar to those elicited by visual stimuli. Generation of predictable and consistent phosphenes by means of artificial stimulation of the LGN is important for the feasibility of visual prostheses. Here we proved that electrical stimuli to the LGN can generate V1 neural responses that resemble those elicited by natural visual stimuli.

  6. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    Science.gov (United States)

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

    Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive-viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. The enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.

  7. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    Science.gov (United States)

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  8. [Effects of visual optical stimuli for accommodation-convergence system on asthenopia].

    Science.gov (United States)

    Iwasaki, Tsuneto; Tawara, Akihiko; Miyake, Nobuyuki

    2006-01-01

    We investigated the effect on eyestrain of optical stimuli that we designed for the accommodation and convergence systems. Eight female students were given the optical stimuli for 1.5 min immediately after 20 min of a sustained task on a 3-D display. Before and after the trial, their ocular functions were measured and their symptoms were assessed. The optical stimuli were applied by moving targets of scenery images far and near around the far-point position of both eyes on a horizontal plane, which induced divergence in the direction of the eye position of rest. In a control group, subjects rested with closed eyes for 1.5 min instead of receiving the optical stimuli. After the closed-eye rest, the control group showed significant changes in the accommodative contraction time (far to near; from 1.26 s to 1.62 s), the accommodative relaxation time (near to far; from 1.49 s to 1.63 s) and the lag of accommodation at a near target (from 0.5 D to 0.65 D), as well as in symptoms. In the stimulus group, however, the changes in these functions were smaller than in the control group. From these results, we suggest that our optical stimuli for the accommodation and convergence systems are effective against asthenopia following accommodative dysfunction.

  9. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study

    Directory of Open Access Journals (Sweden)

    José Salomão Schwartzman

    2015-05-01

    Full Text Available Objective To compare visual fixation at social stimuli in Rett syndrome (RS) and autism spectrum disorder (ASD) patients. Method Visual fixation at social stimuli was analyzed in 14 RS female patients (age range 4-30 years), 11 ASD male patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli), each presented for 8 seconds on the screen of a computer attached to eye-tracking equipment. Results The percentage of visual fixation at social stimuli was significantly higher in the RS group compared to the ASD and even the TD groups. Conclusion Visual fixation at social stimuli seems to be one more endophenotype making RS very different from ASD.

  10. Binocular Combination of Second-Order Stimuli

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance-modulated) and second-order (contrast-modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 for both first- and second-order phase combination. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180
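For gratings of the same spatial frequency, the perceived phase of a linearly combined binocular percept follows from phasor (vector) summation of the two eyes' signals. The sketch below illustrates that arithmetic only; it is a simple linear-summation assumption, not the gain-control model used in the binocular phase combination literature, and the function name is hypothetical:

```python
import cmath
import math

def binocular_phase(c_left, phi_left, c_right, phi_right):
    """Perceived phase (radians) of the linear sum of two equal-frequency
    sinusoids shown to the two eyes.

    c_*   - signal strength (contrast) in each eye
    phi_* - grating phase in each eye, radians
    """
    # Each eye's grating is a phasor c * e^{i*phi}; a linear binocular
    # combiner perceives the phase of their vector sum.
    return cmath.phase(c_left * cmath.exp(1j * phi_left)
                       + c_right * cmath.exp(1j * phi_right))
```

With equal contrasts and phases of +φ/2 and -φ/2, the perceived phase is 0 (balanced eyes, interocular ratio 1); raising one eye's contrast pulls the perceived phase toward that eye's grating, which is the dependence on interocular signal ratio that the paradigm measures.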

  11. Attentional Capture by Emotional Stimuli Is Modulated by Semantic Processing

    Science.gov (United States)

    Huang, Yang-Ming; Baddeley, Alan; Young, Andrew W.

    2008-01-01

    The attentional blink paradigm was used to examine whether emotional stimuli always capture attention. The processing requirement for emotional stimuli in a rapid sequential visual presentation stream was manipulated to investigate the circumstances under which emotional distractors capture attention, as reflected in an enhanced attentional blink…

  12. Visual Attention to Pictorial Food Stimuli in Individuals With Night Eating Syndrome: An Eye-Tracking Study.

    Science.gov (United States)

    Baldofski, Sabrina; Lüthold, Patrick; Sperling, Ingmar; Hilbert, Anja

    2018-03-01

    Night eating syndrome (NES) is characterized by excessive evening and/or nocturnal eating episodes. Studies indicate an attentional bias towards food in other eating disorders. For NES, however, evidence of attentional food processing is lacking. Attention towards food and non-food stimuli was compared using eye-tracking in 19 participants with NES and 19 matched controls without eating disorders during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in initial fixation position or gaze duration. However, a significant orienting bias to food compared to non-food was found within the NES group, but not in controls. A significant attentional maintenance bias to non-food compared to food was found in both groups. Detection times did not differ between groups in the search task. Only in NES, attention to and faster detection of non-food stimuli were related to higher BMI and more evening eating episodes. The results might indicate an attentional approach-avoidance pattern towards food in NES. However, further studies should clarify the implications of attentional mechanisms for the etiology and maintenance of NES. Copyright © 2017. Published by Elsevier Ltd.

  13. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
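The auralisation step described above, convolving a dry (anechoic) recording with the binaural impulse responses measured at each seat, is a standard operation and can be sketched in NumPy. The function name and the toy signals are illustrative placeholders, not the study's data:

```python
import numpy as np

def auralise(anechoic, ir_left, ir_right):
    """Simulate a seat's binaural signal by convolving an anechoic
    recording with the left/right-ear impulse responses measured there."""
    left = np.convolve(anechoic, ir_left)
    right = np.convolve(anechoic, ir_right)
    # Two channels, each of length len(anechoic) + len(ir) - 1
    return np.stack([left, right])

# Toy example: a two-sample source through a three-tap "room" response
stereo = auralise(np.array([1.0, 0.5]),
                  np.array([1.0, 0.0, 0.3]),
                  np.array([0.8, 0.0, 0.3]))
```

Playing such convolved pairs over headphones reproduces, at the listener's ears, the spatial cues (interaural time and level differences, reverberation) captured by the dummy head at each measurement position.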

  14. Enhanced pain and autonomic responses to ambiguous visual stimuli in chronic Complex Regional Pain Syndrome (CRPS) type I.

    Science.gov (United States)

    Cohen, H E; Hall, J; Harris, N; McCabe, C S; Blake, D R; Jänig, W

    2012-02-01

    Cortical reorganisation of sensory, motor and autonomic systems can lead to dysfunctional central integrative control. This may contribute to signs and symptoms of Complex Regional Pain Syndrome (CRPS), including pain. It has been hypothesised that central neuroplastic changes may cause afferent sensory feedback conflicts and produce pain. We investigated autonomic responses produced by ambiguous visual stimuli (AVS) in CRPS, and their relationship to pain. Thirty CRPS patients with upper limb involvement and 30 age and sex matched healthy controls had sympathetic autonomic function assessed using laser Doppler flowmetry of the finger pulp at baseline and while viewing a control figure or AVS. Compared to controls, there were diminished vasoconstrictor responses and a significant difference in the ratio of response between affected and unaffected limbs (symmetry ratio) to a deep breath and viewing AVS. While viewing visual stimuli, 33.5% of patients had asymmetric vasomotor responses and all healthy controls had a homologous symmetric pattern of response. Nineteen (61%) CRPS patients had enhanced pain within seconds of viewing the AVS. All the asymmetric vasomotor responses were in this group, and were not predictable from baseline autonomic function. Ten patients had accompanying dystonic reactions in their affected limb: 50% were in the asymmetric sub-group. In conclusion, there is a group of CRPS patients that demonstrate abnormal pain networks interacting with central somatomotor and autonomic integrational pathways. © 2011 European Federation of International Association for the Study of Pain Chapters.

  15. Consuming Almonds vs. Isoenergetic Baked Food Does Not Differentially Influence Postprandial Appetite or Neural Reward Responses to Visual Food Stimuli.

    Science.gov (United States)

    Sayer, R Drew; Dhillon, Jaapna; Tamer, Gregory G; Cornier, Marc-Andre; Chen, Ningning; Wright, Amy J; Campbell, Wayne W; Mattes, Richard D

    2017-07-27

    Nuts have high energy and fat contents, but nut intake does not promote weight gain or obesity, which may be partially explained by their proposed high satiety value. The primary aim of this study was to assess the effects of consuming almonds versus a baked food on postprandial appetite and neural responses to visual food stimuli. Twenty-two adults (19 women and 3 men) with a BMI between 25 and 40 kg/m² completed the current study during a 12-week behavioral weight loss intervention. Participants consumed either 28 g of whole, lightly salted roasted almonds or a serving of a baked food with equivalent energy and macronutrient contents in random order on two testing days prior to and at the end of the intervention. Pre- and postprandial appetite ratings and functional magnetic resonance imaging scans were completed on all four testing days. Postprandial hunger, desire to eat, fullness, and neural responses to visual food stimuli were not different following consumption of almonds and the baked food, nor were they influenced by weight loss. These results support energy and macronutrient contents as principal determinants of postprandial appetite and do not support a unique satiety effect of almonds independent of these variables.

  16. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    Science.gov (United States)

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  17. Lack of multisensory integration in hemianopia: no influence of visual stimuli on aurally guided saccades to the blind hemifield.

    Directory of Open Access Journals (Sweden)

    Antonia F Ten Brink

    Full Text Available In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia.

  18. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  19. Long-term memory of hierarchical relationships in free-living greylag geese

    NARCIS (Netherlands)

    Weiss, Brigitte M.; Scheiber, Isabella B. R.

    Animals may memorise spatial and social information for many months and even years. Here, we investigated long-term memory of hierarchically ordered relationships, where the position of a reward depended on the relationship of a stimulus relative to other stimuli in the hierarchy. Seventeen greylag

  20. Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.

    Science.gov (United States)

    Hornstein, Henry A.; Mosley, James L.

    1979-01-01

    The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)

  1. A Wider Look at Visual Discomfort

    Directory of Open Access Journals (Sweden)

    L O'Hare

    2012-07-01

    Full Text Available Visual discomfort refers to the adverse effects reported by some observers on viewing certain stimuli, such as stripes and certain filtered noise patterns. Stimuli that deviate from natural image statistics might be encoded inefficiently, which could cause discomfort (Juricevic, Land, Wilkins and Webster, 2010, Perception, 39(7), 884–899), possibly through excessive cortical responses (Wilkins, 1995, Visual Stress, Oxford: Oxford University Press). A less efficient visual system might exacerbate the effects of difficult stimuli. Extreme examples are seen in epilepsy and migraine (Wilkins, Bonanni, Porciatti and Guerrini, 2004, Epilepsia, 45, 1–7; Aurora and Wilkinson, 2007, Cephalalgia, 27(12), 1422–1435). However, similar stimuli, e.g. striped patterns, are also judged uncomfortable by non-clinical populations (Wilkins et al., 1984, Brain, 107(4)). We propose that oversensitivity in clinical populations may represent extreme examples of visual discomfort in the general population. To study the prevalence and impact of visual discomfort in a wider context than typically studied, an Internet-based survey was conducted, including standardised questionnaires measuring visual discomfort susceptibility (Conlon, Lovegrove, Chekaluk and Pattison, 1999, Visual Cognition, 6(6), 637–663; Evans and Stevenson, 2008, Ophthalmic and Physiological Optics, 28(4), 295–309) and judgments of visual stimuli, such as striped patterns (Wilkins et al., 1984) and filtered noise patterns (Fernandez and Wilkins, 2008, Perception, 37(7), 1098–1113). Results show few individuals reporting high visual discomfort, contrary to other researchers (e.g., Conlon et al., 1999).

  2. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    Science.gov (United States)

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether DNN neurons (units in DNNs) possess image object representational statistics similar to those of monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used 19-layer deep network in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli were analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low
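As a rough illustration of the three measures used in this line of work, the sketch below computes kurtosis-based selectivity and sparseness, plus a simple linear intrinsic-dimensionality estimate, for a hypothetical (neurons × stimuli) response matrix; the synthetic data and the 90%-variance convention are assumptions for illustration, not the authors' procedure.

```python
import numpy as np

def response_statistics(responses):
    """Toy statistics for a (neurons x stimuli) response matrix.

    Excess kurtosis across stimuli indexes single-neuron selectivity;
    excess kurtosis across neurons indexes population sparseness
    (in the spirit of the Lehky et al. measures).
    """
    def excess_kurtosis(x, axis):
        z = (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)
        return (z ** 4).mean(axis=axis) - 3.0

    selectivity = excess_kurtosis(responses, axis=1)  # one value per neuron
    sparseness = excess_kurtosis(responses, axis=0)   # one value per stimulus
    # One simple linear proxy for intrinsic dimensionality: the number of
    # principal components needed to capture 90% of the response variance.
    centered = responses - responses.mean(axis=1, keepdims=True)
    var = np.linalg.svd(centered, compute_uv=False) ** 2
    dim = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.90) + 1)
    return selectivity, sparseness, dim

rng = np.random.default_rng(0)
sel, spa, dim = response_statistics(rng.exponential(size=(100, 200)))
```

Heavy-tailed (here, exponential) responses give high kurtosis values, mimicking sparse, selective neurons; Gaussian responses would give values near zero.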

  3. Visualization and Hierarchical Analysis of Flow in Discrete Fracture Network Models

    Science.gov (United States)

    Aldrich, G. A.; Gable, C. W.; Painter, S. L.; Makedonska, N.; Hamann, B.; Woodring, J.

    2013-12-01

    Flow and transport in low-permeability fractured rock occur primarily in interconnected fracture networks. Prediction and characterization of flow and transport in fractured rock have important implications for underground repositories for hazardous materials (e.g., nuclear and chemical waste), contaminant migration and remediation, groundwater resource management, and hydrocarbon extraction. We have developed methods to explicitly model flow in discrete fracture networks and track flow paths using passive particle tracking algorithms. Visualization and analysis of particle trajectories through the fracture network are important to understanding fracture connectivity, flow patterns, potential contaminant pathways, and fast paths through the network. However, occlusion due to the large number of highly tessellated and intersecting fracture polygons precludes the effective use of traditional visualization methods. We also need quantitative analysis methods to characterize the trajectories of large numbers of particle paths. We have addressed these problems by defining a hierarchical flow network representing the topology of particle flow through the fracture network. This approach allows us to analyze the flow and the dynamics of the system as a whole. We can easily query the flow network and use a paint-and-link style framework to filter the fracture geometry and particle traces based on the flow analytics. This greatly reduces occlusion while emphasizing salient features such as the principal transport pathways. Examples are shown that demonstrate the methodology and highlight how this new method allows quantitative analysis and characterization of flow and transport in a number of representative fracture networks.

  4. Paying attention to orthography: A visual evoked potential study

    Directory of Open Access Journals (Sweden)

    Anthony Thomas Herdman

    2013-05-01

    Full Text Available In adult readers, letters and words are rapidly identified within visual networks to allow for efficient reading. Neuroimaging studies of orthography have mostly used words and letter strings that recruit many hierarchical levels in reading. Understanding how single letters are processed could provide further insight into orthographic processing. The present study investigated orthographic processing using single letters and pseudoletters when adults were encouraged to pay attention to or away from orthographic features. We measured evoked potentials (EPs) to single letters and pseudoletters while adults performed an orthographic-discrimination task (letters vs. pseudoletters), a colour-discrimination task (red vs. blue), and a target-detection task (respond to #1 and #2). Larger and later-peaking N1 responses (~170 ms) and larger P2 responses (~250 ms) occurred to pseudoletters as compared to letters, reflecting greater visual processing for pseudoletters. Dipole analyses localized this effect to bilateral fusiform and inferior temporal cortices. Moreover, this letter-pseudoletter difference was not modulated by task, indicating that directing attention to or away from orthographic features did not affect early visual processing of single letters or pseudoletters within extrastriate regions. Paying attention to orthography or colour, as compared to disregarding the stimuli (target-detection task), elicited selection negativities at about 175 ms, which were followed by classical N2-P3 complexes. This indicated that the tasks sufficiently drew participants' attention to and away from the stimuli. Together, these findings revealed that visual processing of single letters and pseudoletters in adults appears to be sensory-contingent and independent of paying attention to stimulus features (e.g., orthography or colour).

  5. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate...... vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did...... not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...

  6. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization between working memory stores: neither between representations held in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that they follow a different set of computational rules, bypassing normalization, a canonical visual computation.

  7. Parallel processing in the brain’s visual form system: An fMRI study

    Directory of Open Access Journals (Sweden)

    Yoshihito eShigihara

    2014-07-01

    Full Text Available We here extend and complement our earlier time-based magnetoencephalographic (MEG) study of the processing of forms by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhomboids) constituted from the same elements (lines). Our results show that all three categories of form activate all three visual areas with which we were principally concerned (V1, V2, V3), with angles producing the strongest and rhomboids the weakest activity in all three. The difference between the activity produced by angles and rhomboids was significant, that between lines and rhomboids was trend-significant, while that between lines and angles was not. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy.

  8. Hierarchical Targeting Strategy for Enhanced Tumor Tissue Accumulation/Retention and Cellular Internalization.

    Science.gov (United States)

    Wang, Sheng; Huang, Peng; Chen, Xiaoyuan

    2016-09-01

    Targeted delivery of therapeutic agents is an important way to improve the therapeutic index and reduce side effects. To design nanoparticles for targeted delivery, both enhanced tumor tissue accumulation/retention and enhanced cellular internalization should be considered simultaneously. So far, there have been very few nanoparticles with immutable structures that can achieve this goal efficiently. Hierarchical targeting, a novel targeting strategy based on stimuli responsiveness, shows good potential to enhance both tumor tissue accumulation/retention and cellular internalization. Here, the recent design and development of hierarchical targeting nanoplatforms, based on changeable particle sizes, switchable surface charges and activatable surface ligands, will be introduced. In general, the targeting moieties in these nanoplatforms are not activated during blood circulation for efficient tumor tissue accumulation, but re-activated by certain internal or external stimuli in the tumor microenvironment for enhanced cellular internalization. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. iHAT: interactive Hierarchical Aggregation Table for Genetic Association Data

    Directory of Open Access Journals (Sweden)

    Heinrich Julian

    2012-05-01

    Full Text Available In the search for single-nucleotide polymorphisms which influence the observable phenotype, genome-wide association studies have become an important technique for identifying associations between genotype and phenotype across diverse sets of sequence-based data. We present a methodology for the visual assessment of single-nucleotide polymorphisms using interactive hierarchical aggregation techniques combined with methods known from traditional sequence browsers and cluster heatmaps. Our tool, the interactive Hierarchical Aggregation Table (iHAT), facilitates the visualization of multiple sequence alignments, associated metadata, and hierarchical clusterings. Different color maps and aggregation strategies as well as filtering options support the user in finding correlations between sequences and metadata. Similar to other visualizations such as parallel coordinates or heatmaps, iHAT relies on the human pattern-recognition ability for spotting patterns that might indicate correlation or anticorrelation. We demonstrate iHAT using artificial and real-world datasets for DNA and protein association studies as well as expression quantitative trait locus data.

  10. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    Science.gov (United States)

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth), and that the effects of training last for up to one month. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, generally consistent with the METT's information about important features. In particular, there were changes in how participants looked at the features of facial expressions of surprise, disgust, fear, happiness, and neutral emotion, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia view novel facial expressions. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    Science.gov (United States)

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or lacking in reliability. Additionally, most existing tests do not allow full control over important characteristics such as off-angle rotations, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate these characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) color limitation of standard cathode ray tube (CRT) monitors has been preclusive. To this end, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created from both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance on sine- and square-wave image sets across a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses in the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.
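A conventional way to generate the sine- or square-wave gratings such stimuli are built from is sketched below, with Michelson contrast as the controlled parameter; this is a standard construction, not the published pointillism algorithm, which adds a stochastic dot-placement stage to work around the 8-bit luminance quantization.

```python
import numpy as np

def grating(size=256, cycles=8, contrast=0.5, orientation_deg=0.0, square=False):
    """Sine- or square-wave grating with a given Michelson contrast,
    mean luminance 0.5, and `cycles` periods across the image."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    wave = np.sign(np.sin(phase)) if square else np.sin(phase)
    # Michelson contrast (Lmax - Lmin) / (Lmax + Lmin) equals `contrast`
    # because luminance swings symmetrically about the mean of 0.5.
    return 0.5 + 0.5 * contrast * wave

img = grating(contrast=0.2, square=True)
```

Here `img` is a 256×256 luminance array in [0.4, 0.6]; the limitation discussed above arises when such values must be quantized to a monitor's discrete luminance levels.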

  12. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Science.gov (United States)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embed computational principles operating in the nervous system into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, making attractor dynamics suitable for implementing robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
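The point-attractor retrieval idea can be sketched in software with a classical Hopfield-style network; this is a rate-free toy model of relaxation to a stored pattern, not the spiking VLSI implementation described above.

```python
import numpy as np

rng = np.random.default_rng(1)

patterns = rng.choice([-1, 1], size=(3, 64))     # stored prototype patterns
W = (patterns.T @ patterns) / patterns.shape[1]  # Hebbian synaptic matrix
np.fill_diagonal(W, 0)                           # no self-connections

def relax(state, steps=20):
    """Synchronous sign updates; the state falls toward the nearest attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

cue = patterns[0].astype(float).copy()
flipped = rng.choice(64, size=8, replace=False)  # corrupt 8 of 64 units
cue[flipped] *= -1
recalled = relax(cue)  # should closely match patterns[0]
```

The corrupted cue plays the role of the stimulus-dictated initial state; relaxation retrieves the memorized prototype, which is the associative-memory behavior the abstract attributes to attractor dynamics.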

  13. Altered visual information processing systems in bipolar disorder: evidence from visual MMN and P3

    Directory of Open Access Journals (Sweden)

    Toshihiko eMaekawa

    2013-07-01

    Full Text Available Objective: Mismatch negativity (MMN) and P3 are unique ERP components that provide objective indices of human cognitive functions such as short-term memory and prediction. Bipolar disorder (BD) is an endogenous psychiatric disorder characterized by extreme shifts in mood, energy, and ability to function socially. BD patients usually show cognitive dysfunction, and the goal of this study was to assess their altered visual information processing via visual MMN (vMMN) and P3 using windmill pattern stimuli. Methods: Twenty patients with BD and 20 healthy controls matched for age, gender, and handedness participated in this study. Subjects were seated in front of a monitor and listened to a story via earphones. Two types of windmill patterns (standard and deviant) and a white circle (target) were randomly presented on the monitor. All stimuli were presented in random order at 200-ms durations with an 800-ms inter-stimulus interval, at 80% (standard), 10% (deviant), and 10% (target) probabilities. The participants were instructed to attend to the story and press a button as soon as possible when the target stimuli were presented. Event-related potentials were recorded throughout the experiment using 128-channel EEG equipment. vMMN was obtained by subtracting the response to standard stimuli from the response to deviant stimuli, and P3 was evoked by the target stimulus. Results: Mean reaction times for target stimuli in the BD group were significantly longer than those in the control group. Additionally, mean vMMN amplitudes and peak P3 amplitudes were significantly lower in the BD group than in controls. Conclusions: Abnormal vMMN and P3 in patients indicate a deficit of visual information processing in bipolar disorder, consistent with their increased reaction times to visual target stimuli. Significance: Both bottom-up and top-down visual information processing are likely altered in BD.
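The deviant-minus-standard subtraction that yields the vMMN can be sketched on synthetic single-channel data; the sampling rate, epoch length, and injected "MMN-like" deflection below are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_trials, n_samples = 500, 100, 300   # assumed 500 Hz, 600-ms epochs

# Synthetic single-channel epochs (trials x samples) standing in for the
# 128-channel recordings; the deviant gets a toy negativity near 200 ms.
standard = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant = rng.normal(0.0, 1.0, (n_trials, n_samples))
deviant[:, 100:150] -= 2.0

# vMMN: difference of the trial-averaged ERPs (deviant minus standard).
vmmn = deviant.mean(axis=0) - standard.mean(axis=0)
peak_sample = int(np.argmin(vmmn))        # most negative point of the wave
peak_ms = 1000.0 * peak_sample / fs
```

Averaging over trials suppresses the noise so the difference wave isolates the deviance-related negativity; a reduced `vmmn` amplitude is the patient-group finding reported above.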

  14. Adaptive visualization for large-scale graph

    International Nuclear Information System (INIS)

    Nakamura, Hiroko; Shinano, Yuji; Ohzahata, Satoshi

    2010-01-01

    We propose an adaptive visualization technique for representing a large-scale hierarchical dataset within limited display space. A hierarchical dataset has nodes and links showing the parent-child relationships between the nodes. These nodes and links are described using graphics primitives. When the number of these primitives is large, it is difficult to recognize the structure of the hierarchical data because many primitives overlap within a limited region. To overcome this difficulty, we propose an adaptive visualization technique for hierarchical datasets. The proposed technique selects an appropriate graph style according to the nodal density in each area. (author)

  15. A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System

    Science.gov (United States)

    Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.

    2005-12-01

    Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system, as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits of analyzing these data, using GIS analysis functions or externally developed analysis models or programs, have yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it into a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real time (on the fly), providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features

  16. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.

    Science.gov (United States)

    Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline

    2014-02-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.
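The contextual cueing effect described above is simply the novel-minus-repeated RT difference; a minimal sketch with hypothetical RTs (the values below are invented to illustrate a cueing effect that shrinks after negative primes while overall search speed stays similar):

```python
import numpy as np

def contextual_cueing_effect(rt_novel_ms, rt_repeated_ms):
    """Contextual cueing = mean RT on novel displays minus mean RT on
    repeated displays; larger positive values mean more context learning."""
    return float(np.mean(rt_novel_ms) - np.mean(rt_repeated_ms))

# Hypothetical per-condition RTs in milliseconds.
after_neutral = contextual_cueing_effect([980, 1010, 995], [900, 910, 905])
after_negative = contextual_cueing_effect([985, 1005, 990], [960, 975, 965])
```

A smaller effect after negative primes, with similar overall RTs, is the dissociation the experiments above report: emotional content reduced context learning without speeding search.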

  17. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  18. Music influences ratings of the affect of visual stimuli

    NARCIS (Netherlands)

    Hanser, W.E.; Mark, R.E.

    2013-01-01

    This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are

  19. Do episodic migraineurs selectively attend to headache-related visual stimuli?

    Science.gov (United States)

    McDermott, Michael J; Peck, Kelly R; Walters, A Brooke; Smitherman, Todd A

    2013-02-01

    To assess pain-related attentional biases among individuals with episodic migraine. Prior studies have examined whether chronic pain patients selectively attend to pain-related stimuli in the environment, but these studies have produced largely mixed findings and focused primarily on patients with chronic musculoskeletal pain. Limited research has implicated attentional biases among chronic headache patients, but no studies have been conducted among episodic migraineurs, who comprise the overwhelming majority of the migraine population. This was a case-control, experimental study. Three hundred and eight participants (mean age = 19.2 years [standard deviation = 3.3]; 69.5% female; 36.4% minority), consisting of 84 episodic migraineurs, diagnosed in accordance with International Classification of Headache Disorders (2nd edition) criteria using a structured diagnostic interview, and 224 non-migraine controls, completed a computerized dot probe task to assess attentional bias toward headache-related pictorial stimuli. The task consisted of 192 trials and utilized 2 emotional-neutral stimulus pairing conditions (headache-neutral and happy-neutral). No within-group differences in reaction time latencies to headache vs happy conditions were found among those with episodic migraine or among the non-migraine controls. Migraine status was unrelated to attentional bias indices for both headache (F [1,306] = 0.56, P = .45) and happy facial stimuli (F [1,306] = 0.37, P = .54), indicating a lack of between-group differences. The lack of within- and between-group differences was confirmed with repeated measures analysis of variance. In light of the large sample size and prior pilot testing of the presented images, results suggest that episodic migraineurs do not differentially attend to headache-related facial stimuli. Given modest evidence of attentional biases among chronic headache samples, these findings suggest potential differences in attentional
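A dot-probe attentional bias index of the kind analyzed above is conventionally computed as the RT difference between probes replacing the neutral versus the emotional image; a minimal sketch with hypothetical RTs (the specific values are invented):

```python
def attentional_bias_index(rt_probe_at_neutral, rt_probe_at_emotional):
    """Dot-probe bias index: mean RT when the probe replaces the neutral
    image minus mean RT when it replaces the emotional image. Positive
    values indicate vigilance toward the emotional cue; values near zero,
    as reported above, indicate no attentional bias."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_probe_at_neutral) - mean(rt_probe_at_emotional)

# Hypothetical RTs in milliseconds for one participant.
bias_ms = attentional_bias_index([412, 405, 420], [409, 411, 414])
```

A near-zero group-mean index, compared across migraineurs and controls, is exactly the null pattern the F-tests above formalize.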

  20. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  1. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  2. Temporal attention for visual food stimuli in restrained eaters

    NARCIS (Netherlands)

    Neimeijer, Renate A. M.; de Jong, Peter J.; Roefs, Anne

    2013-01-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require

  3. Processing of unconventional stimuli requires the recruitment of the non-specialized hemisphere

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2015-02-01

    Full Text Available In the present study we investigate hemispheric processing of conventional and unconventional visual stimuli in the context of visual and verbal creative ability. In Experiment 1, we studied two unconventional visual recognition tasks – Mooney face and objects' silhouette recognition – and found a significant relationship between measures of verbal creativity and unconventional face recognition. In Experiment 2 we used the split visual field paradigm to investigate hemispheric processing of conventional and unconventional faces and its relation to verbal and visual characteristics of creativity. Results showed that while conventional faces were better processed by the specialized right hemisphere, unconventional faces were better processed by the non-specialized left hemisphere. In addition, only unconventional face processing by the non-specialized left hemisphere was related to verbal and visual measures of creative ability. Our findings demonstrate the role of the non-specialized hemisphere in processing unconventional stimuli and how it relates to creativity.

  4. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  5. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
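The frequency-tagging analysis described above isolates each stimulus's steady-state response at its own pulse rate. A minimal sketch of that spectral quantification, assuming a single-channel signal and a plain single-frequency DFT; the sampling rate, duration, and component amplitudes are hypothetical:

```python
import math

def ssvep_amplitude(signal, fs, target_hz):
    """Amplitude of the steady-state response at one tagged frequency,
    via a single-frequency DFT (correlation with sine and cosine)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * target_hz * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * target_hz * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Hypothetical 100 s single-channel recording at 64 Hz containing both
# tagged responses (amplitude 0.8 at 3.14 Hz, 0.3 at 3.63 Hz).
fs = 64
t = [k / fs for k in range(fs * 100)]
sig = [0.8 * math.sin(2 * math.pi * 3.14 * x) + 0.3 * math.sin(2 * math.pi * 3.63 * x) for x in t]
amp_314 = ssvep_amplitude(sig, fs, 3.14)
amp_363 = ssvep_amplitude(sig, fs, 3.63)
```

The 100 s window is chosen so that both tagged frequencies complete a whole number of cycles (they fall on exact DFT bins), letting the two responses separate without spectral leakage.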

  6. Similarity relations in visual search predict rapid visual categorization

    Science.gov (United States)

    Mohan, Krithika; Arun, S. P.

    2012-01-01

How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
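The central claim above, that categorization time can be predicted from within- and between-category similarity, can be illustrated with a toy linear model. This is a sketch under stated assumptions (made-up similarity ratings, arbitrary `base_rt` and `gain` parameters), not the authors' fitted model:

```python
def predicted_rt(item, members, nonmembers, sim, base_rt=600.0, gain=200.0):
    """Illustrative linear model: categorization is faster for items more
    similar to their own category and less similar to items outside it."""
    s_in = sum(sim[item][m] for m in members) / len(members)
    s_out = sum(sim[item][m] for m in nonmembers) / len(nonmembers)
    return base_rt - gain * (s_in - s_out)

# Hypothetical pairwise similarity ratings in [0, 1]
sim = {
    "dog": {"cat": 0.8, "cow": 0.7, "car": 0.1, "bus": 0.2},
    "bat": {"cat": 0.5, "cow": 0.4, "car": 0.3, "bus": 0.3},  # atypical animal
}
rt_dog = predicted_rt("dog", ["cat", "cow"], ["car", "bus"], sim)
rt_bat = predicted_rt("bat", ["cat", "cow"], ["car", "bus"], sim)
# The atypical category member gets the slower predicted categorization time.
```

This reproduces one of the classic phenomena listed in the abstract: atypical objects (similar to non-members, less similar to members) yield longer predicted times.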

  7. Reappraising the functional implications of the primate visual anatomical hierarchy.

    Science.gov (United States)

    Hegdé, Jay; Felleman, Daniel J

    2007-10-01

    The primate visual system has been shown to be organized into an anatomical hierarchy by the application of a few principled criteria. It has been widely assumed that cortical visual processing is also hierarchical, with the anatomical hierarchy providing a defined substrate for clear levels of hierarchical function. A large body of empirical evidence seemed to support this assumption, including the general observations that functional properties of visual neurons grow progressively more complex at progressively higher levels of the anatomical hierarchy. However, a growing body of evidence, including recent direct experimental comparisons of functional properties at two or more levels of the anatomical hierarchy, indicates that visual processing neither is hierarchical nor parallels the anatomical hierarchy. Recent results also indicate that some of the pathways of visual information flow are not hierarchical, so that the anatomical hierarchy cannot be taken as a strict flowchart of visual information either. Thus, while the sustaining strength of the notion of hierarchical processing may be that it is rather simple, its fatal flaw is that it is overly simplistic.

  8. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  9. VEP Responses to Op-Art Stimuli.

    Directory of Open Access Journals (Sweden)

    Louise O'Hare

    Full Text Available Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast.

  10. VEP Responses to Op-Art Stimuli.

    Science.gov (United States)

    O'Hare, Louise; Clarke, Alasdair D F; Pollux, Petra M J

    2015-01-01

    Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast.

  11. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-01-01

The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from responding most strongly to visual input to responding most strongly to auditory input. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  12. Hierarchical prediction errors in midbrain and basal forebrain during sensory learning.

    Science.gov (United States)

    Iglesias, Sandra; Mathys, Christoph; Brodersen, Kay H; Kasper, Lars; Piccirelli, Marco; den Ouden, Hanneke E M; Stephan, Klaas E

    2013-10-16

    In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities. Copyright © 2013 Elsevier Inc. All rights reserved.
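The two levels of prediction error described above can be caricatured with a stripped-down delta-rule learner: the low level tracks the probability of the visual outcome, and the high level tracks how unstable that probability is. This is an illustrative sketch only, not the hierarchical Bayesian (volatility) model actually fitted in the study:

```python
def simulate_pes(outcomes, alpha_low=0.2, alpha_high=0.1):
    """Two-level delta-rule learner (illustrative only).
    Low level: prediction error about the visual outcome itself.
    High level: prediction error about how unstable the estimated
    outcome probability is (a crude stand-in for volatility)."""
    p = 0.5    # estimated probability of the outcome
    vol = 0.0  # high-level estimate of expected outcome surprise
    low_pes, high_pes = [], []
    for o in outcomes:
        delta_low = o - p                  # low-level outcome PE
        p += alpha_low * delta_low
        delta_high = abs(delta_low) - vol  # high-level PE about probabilities
        vol += alpha_high * delta_high
        low_pes.append(delta_low)
        high_pes.append(delta_high)
    return low_pes, high_pes

# Hypothetical binary visual outcomes (1 = predicted stimulus appeared)
low, high = simulate_pes([1, 1, 0, 1, 1, 1, 0, 1])
```

In the study's fMRI analysis, trial-by-trial regressors analogous to `low` correlated with midbrain activity and regressors analogous to `high` with basal forebrain activity.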

  13. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults.

    Directory of Open Access Journals (Sweden)

    Erich S Tusch

Full Text Available The inhibitory deficit hypothesis of cognitive aging posits that older adults' inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be (1) observed under an auditory-ignore, but not auditory-attend, condition, (2) attenuated in individuals with high executive capacity (EC), and (3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study's findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts.

  14. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Bank of Standardized Stimuli (BOSS) phase II: 930 new normative photos.

    Directory of Open Access Journals (Sweden)

    Mathieu B Brodeur

Full Text Available Researchers have only recently started to take advantage of the developments in technology and communication for sharing data and documents. However, the exchange of experimental material has not taken advantage of this progress yet. In order to facilitate access to experimental material, the Bank of Standardized Stimuli (BOSS) project was created as a free standardized set of visual stimuli accessible to all researchers, through a normative database. The BOSS is currently the largest existing photo bank providing norms for more than 15 dimensions (e.g., familiarity, visual complexity, manipulability, etc.), making the BOSS an extremely useful research tool and a means to homogenize scientific data worldwide. The first phase of the BOSS was completed in 2010, and contained 538 normative photos. The second phase of the BOSS project, presented in this article, builds on the previous phase by adding 930 new normative photo stimuli. New categories of concepts were introduced, including animals, building infrastructures, body parts, and vehicles, and the number of photos in other categories was increased. All new photos of the BOSS were normalized relative to their name, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. The availability of these norms is a precious asset that should be considered for characterizing the stimuli as a function of the requirements of research and for controlling for potential confounding effects.

  16. High-intensity erotic visual stimuli de-activate the primary visual cortex in women.

    Science.gov (United States)

    Huynh, Hieu K; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-06-01

    The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the pictures or movies. However, in case the volunteers perform demanding non-visual tasks, the primary visual cortex becomes de-activated, although the amount of incoming visual sensory information is the same. Do low- and high-intensity erotic movies, compared to neutral movies, produce similar de-activation of the primary visual cortex? Brain activation/de-activation was studied by Positron Emission Tomography scanning of the brains of 12 healthy heterosexual premenopausal women, aged 18-47, who watched neutral, low- and high-intensity erotic film segments. We measured differences in regional cerebral blood flow (rCBF) in the primary visual cortex during watching neutral, low-intensity erotic, and high-intensity erotic film segments. Watching high-intensity erotic, but not low-intensity erotic movies, compared to neutral movies resulted in strong de-activation of the primary (BA 17) and adjoining parts of the secondary visual cortex. The strong de-activation during watching high-intensity erotic film might represent compensation for the increased blood supply in the brain regions involved in sexual arousal, also because high-intensity erotic movies do not require precise scanning of the visual field, because the impact is clear to the observer. © 2012 International Society for Sexual Medicine.

  17. Pain and other symptoms of CRPS can be increased by ambiguous visual stimuli--an exploratory study.

    Science.gov (United States)

    Hall, Jane; Harrison, Simon; Cohen, Helen; McCabe, Candida S; Harris, N; Blake, David R

    2011-01-01

    Visual disturbance, visuo-spatial difficulties, and exacerbations of pain associated with these, have been reported by some patients with Complex Regional Pain Syndrome (CRPS). We investigated the hypothesis that some visual stimuli (i.e. those which produce ambiguous perceptions) can induce pain and other somatic sensations in people with CRPS. Thirty patients with CRPS, 33 with rheumatology conditions and 45 healthy controls viewed two images: a bistable spatial image and a control image. For each image participants recorded the frequency of percept change in 1 min and reported any changes in somatosensation. 73% of patients with CRPS reported increases in pain and/or sensory disturbances including changes in perception of the affected limb, temperature and weight changes and feelings of disorientation after viewing the bistable image. Additionally, 13% of the CRPS group responded with striking worsening of their symptoms which necessitated task cessation. Subjects in the control groups did not report pain increases or somatic sensations. It is possible to worsen the pain suffered in CRPS, and to produce other somatic sensations, by means of a visual stimulus alone. This is a newly described finding. As a clinical and research tool, the experimental method provides a means to generate and exacerbate somaesthetic disturbances, including pain, without moving the affected limb and causing nociceptive interference. This may be particularly useful for brain imaging studies. Copyright © 2010 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.

  18. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

Full Text Available Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  19. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  20. Diet ingestion rate and pacu larvae behavior in response to chemical and visual stimuli

    Directory of Open Access Journals (Sweden)

    Marcelo Borges Tesser

    2006-10-01

Full Text Available This study compared the influence of visual and/or chemical stimuli from Artemia nauplii and from a microencapsulated diet on the ingestion rate of the microencapsulated diet by pacu (Piaractus mesopotamicus) larvae. A 7 x 4 factorial arrangement (seven stimuli and four ages) with two replicates was used. Effects of larval age and of the stimuli were observed, but there was no age x stimuli interaction. The chemical stimulus from Artemia, and both Artemia stimuli combined, resulted in higher ingestion rates of the inert diet. An intermediate result was obtained with the visual stimulus from the microencapsulated diet. The chemical stimulus from Artemia resulted in higher diet ingestion rates than the visual stimulus. Ingestion rate increased with larval age. Visual and chemical stimuli from the nauplii, and the visual stimulus from the diet, increased inert diet ingestion by pacu larvae. Artemia nauplii should be offered before the inert diet, as they can aid the feeding transition process. These results point to new possibilities for studies with neotropical fish larvae aimed at the early replacement of live food with inert diets.

  1. Read-out of emotional information from iconic memory: the longevity of threatening stimuli.

    Science.gov (United States)

    Kuhbandner, Christof; Spitzer, Bernhard; Pekrun, Reinhard

    2011-05-01

    Previous research has shown that emotional stimuli are more likely than neutral stimuli to be selected by attention, indicating that the processing of emotional information is prioritized. In this study, we examined whether the emotional significance of stimuli influences visual processing already at the level of transient storage of incoming information in iconic memory, before attentional selection takes place. We used a typical iconic memory task in which the delay of a poststimulus cue, indicating which of several visual stimuli has to be reported, was varied. Performance decreased rapidly with increasing cue delay, reflecting the fast decay of information stored in iconic memory. However, although neutral stimulus information and emotional stimulus information were initially equally likely to enter iconic memory, the subsequent decay of the initially stored information was slowed for threatening stimuli, a result indicating that fear-relevant information has prolonged availability for read-out from iconic memory. This finding provides the first evidence that emotional significance already facilitates stimulus processing at the stage of iconic memory.
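The pattern reported above (report accuracy that falls rapidly with cue delay, more slowly for threatening stimuli) is commonly summarized with an exponential decay toward a post-iconic asymptote. A sketch with hypothetical parameters; the initial accuracy, asymptote, and time constants are illustrative, not estimates from the study:

```python
import math

def report_accuracy(delay_ms, tau_ms, initial=0.9, asymptote=0.35):
    """Partial-report accuracy decaying exponentially with cue delay,
    from an initial iconic level toward a post-iconic asymptote."""
    return asymptote + (initial - asymptote) * math.exp(-delay_ms / tau_ms)

# Hypothetical time constants: slower iconic decay for threatening stimuli
neutral_tau, threat_tau = 150.0, 300.0
curves = {
    delay: (report_accuracy(delay, neutral_tau), report_accuracy(delay, threat_tau))
    for delay in (0, 100, 300, 600)
}
```

The two curves start at the same level (equal initial entry into iconic memory) and diverge with delay, mirroring the finding that threat slows decay rather than boosting initial storage.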

  2. Investigating vision in schizophrenia through responses to humorous stimuli

    Directory of Open Access Journals (Sweden)

    Wolfgang Tschacher

    2015-06-01

Full Text Available The visual environment of humans contains abundant ambiguity and fragmentary information. Therefore, an early step of vision must disambiguate the incessant stream of information. Humorous stimuli produce a situation that is strikingly analogous to this process: Funniness is associated with the incongruity contained in a joke, pun, or cartoon. As in vision in general, appreciating a visual pun as funny necessitates disambiguation of incongruous information. Therefore, perceived funniness of visual puns was used to study visual perception in a sample of 36 schizophrenia patients and 56 healthy control participants. We found that both visual incongruity and Theory of Mind (ToM) content of the puns were associated with increased experienced funniness. This was significantly less so in participants with schizophrenia, consistent with the gestalt hypothesis of schizophrenia, which would predict compromised perceptual organization in patients. The association of incongruity with funniness was not mediated by known predictors of humor appreciation, such as affective state, depression, or extraversion. Patients with higher excitement symptoms and, at a trend level, reduced cognitive symptoms, reported lower funniness experiences. An open question remained whether patients showed this deficiency of visual incongruity detection independent of their ToM deficiency. Humorous stimuli may be viewed as a convenient method to study perceptual processes, but also fundamental questions of higher-level cognition.

  3. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    Science.gov (United States)

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  4. A computational exploration of complementary learning mechanisms in the primate ventral visual pathway.

    Science.gov (United States)

    Spoerer, Courtney J; Eguchi, Akihiro; Stringer, Simon M

    2016-02-01

In order to develop transformation invariant representations of objects, the visual system must make use of constraints placed upon object transformation by the environment. For example, objects transform continuously from one point to another in both space and time. These two constraints have been exploited separately in order to develop translation and view invariance in a hierarchical multilayer model of the primate ventral visual pathway in the form of continuous transformation learning and temporal trace learning. We show for the first time that these two learning rules can work cooperatively in the model. Using these two learning rules together can support the development of invariance in cells and help maintain object selectivity when stimuli are presented over a large number of locations or when trained separately over a large number of viewing angles. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Keeping in Touch With the Visual System: Spatial Alignment and Multisensory Integration of Visual-Somatosensory Inputs

    Directory of Open Access Journals (Sweden)

    Jeannette Rose Mahoney

    2015-08-01

Full Text Available Correlated sensory inputs coursing along the individual sensory processing hierarchies arrive at multisensory convergence zones in cortex where inputs are processed in an integrative manner. The exact hierarchical level of multisensory convergence zones and the timing of their inputs are still under debate, although increasingly, evidence points to multisensory integration at very early sensory processing levels. The objective of the current study was to determine, both psychophysically and electrophysiologically, whether differential visual-somatosensory integration patterns exist for stimuli presented to the same versus opposite hemifields. Using high-density electrical mapping and complementary psychophysical data, we examined multisensory integrative processing for combinations of visual and somatosensory inputs presented to both left and right spatial locations. We assessed how early during sensory processing visual-somatosensory (VS) interactions were seen in the event-related potential and whether spatial alignment of the visual and somatosensory elements resulted in differential integration effects. Reaction times to all VS pairings were significantly faster than those to the unisensory conditions, regardless of spatial alignment, pointing to engagement of integrative multisensory processing in all conditions. In support, electrophysiological results revealed significant differences between multisensory simultaneous VS and summed V+S responses, regardless of the spatial alignment of the constituent inputs. Nonetheless, multisensory effects were earlier in the aligned conditions, and were found to be particularly robust in the case of right-sided inputs (beginning at just 55 ms). In contrast to previous work on audio-visual and audio-somatosensory inputs, the current work suggests a degree of spatial specificity to the earliest detectable multisensory integrative effects in response to visual-somatosensory pairings.

  6. North-American norms for name disagreement: pictorial stimuli naming discrepancies.

    Directory of Open Access Journals (Sweden)

    Mary O'Sullivan

Full Text Available Pictorial stimuli are commonly used by scientists to explore central processes; including memory, attention, and language. Pictures that have been collected and put into sets for these purposes often contain visual ambiguities that lead to name disagreement amongst subjects. In the present work, we propose new norms which reflect these sources of name disagreement, and we apply this method to two sets of pictures: the Snodgrass and Vanderwart (S&V) set and the Bank of Standardized Stimuli (BOSS). Naming responses of the presented pictures were classified within response categories based on whether they were correct, incorrect, or equivocal. To characterize the naming strategy where an alternative name was being used, responses were further divided into different sub-categories that reflected various sources of name disagreement. Naming strategies were also compared across the two sets of stimuli. Results showed that the pictures of the S&V set and the BOSS were more likely to elicit alternative specific and equivocal names, respectively. It was also found that the use of incorrect names was not significantly different across stimulus sets but that errors were more likely caused by visual ambiguity in the S&V set and by a misuse of names in the BOSS. Norms for name disagreement presented in this paper are useful for subsequent research for their categorization and elucidation of name disagreement that occurs when choosing visual stimuli from one or both stimulus sets. The sources of disagreement should be examined carefully as they help to provide an explanation of errors and inconsistencies of many concepts during picture naming tasks.

  7. The Hierarchical Perspective

    Directory of Open Access Journals (Sweden)

    Daniel Sofron

    2015-05-01

    Full Text Available This paper is focused on the hierarchical perspective, one of the methods for representing space that was used before the discovery of the Renaissance linear perspective. The hierarchical perspective has a more or less pronounced scientific character and its study offers us a clear image of the way the representatives of the cultures that developed it used to perceive the sensitive reality. This type of perspective is an original method of representing three-dimensional space on a flat surface, which characterises the art of Ancient Egypt and much of the art of the Middle Ages, being identified in the Eastern European Byzantine art, as well as in the Western European Pre-Romanesque and Romanesque art. At the same time, the hierarchical perspective is also present in naive painting and infantile drawing. Reminiscences of this method can be recognised also in the works of some precursors of the Italian Renaissance. The hierarchical perspective can be viewed as a subjective ranking criterion, according to which the elements are visually represented by taking into account their relevance within the image while perception is ignored. This paper aims to show how the main objective of the artists of those times was not to faithfully represent the objective reality, but rather to emphasize the essence of the world and its perennial aspects. This may represent a possible explanation for the refusal of perspective in the Egyptian, Romanesque and Byzantine painting, characterised by a marked two-dimensionality.

  8. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.

  9. Neural reactivity to visual food stimuli is reduced in some areas of the brain during evening hours compared to morning hours: an fMRI study in women.

    Science.gov (United States)

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; LeCheminant, James D

    2016-03-01

The extent that neural responsiveness to visual food stimuli is influenced by time of day is not well examined. Using a crossover design, 15 healthy women were scanned using fMRI while presented with low- and high-energy pictures of food, once in the morning (6:30-8:30 am) and once in the evening (5:00-7:00 pm). Diets were identical on both days of the fMRI scans and were verified using weighed food records. Visual analog scales were used to record subjective perception of hunger and preoccupation with food prior to each fMRI scan. Six areas of the brain showed lower activation in the evening to both high- and low-energy foods, including structures in reward pathways. High-energy food stimuli tended to produce greater fMRI responses than low-energy food stimuli in specific areas of the brain, regardless of time of day. However, evening scans showed a lower response to both low- and high-energy food pictures in some areas of the brain. Subjectively, participants reported no difference in hunger by time of day (F = 1.84, P = 0.19), but reported they could eat more (F = 4.83, P = 0.04) and were more preoccupied with thoughts of food (F = 5.51, P = 0.03) in the evening compared to the morning. These data underscore the role that time of day may have on neural responses to food stimuli. These results may also have clinical implications for fMRI measurement in order to prevent a time of day bias.

  10. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  11. Reproducibility assessment of brain responses to visual food stimuli in adults with overweight and obesity.

    Science.gov (United States)

    Drew Sayer, R; Tamer, Gregory G; Chen, Ningning; Tregellas, Jason R; Cornier, Marc-Andre; Kareken, David A; Talavage, Thomas M; McCrory, Megan A; Campbell, Wayne W

    2016-10-01

The brain's reward system influences ingestive behavior and subsequently obesity risk. Functional magnetic resonance imaging (fMRI) is a common method for investigating brain reward function. This study sought to assess the reproducibility of fasting-state brain responses to visual food stimuli using BOLD fMRI. A priori brain regions of interest included bilateral insula, amygdala, orbitofrontal cortex, caudate, and putamen. Fasting-state fMRI and appetite assessments were completed by 28 adults (16 women, 12 men) with overweight or obesity on 2 days. Reproducibility was assessed by comparing mean fasting-state brain responses and measuring test-retest reliability of these responses on the two testing days. Mean fasting-state brain responses on day 2 were reduced compared with day 1 in the left insula and right amygdala, but mean day 1 and day 2 responses were not different in the other regions of interest. With the exception of the left orbitofrontal cortex response (fair reliability), test-retest reliabilities of brain responses were poor or unreliable. fMRI-measured responses to visual food cues in adults with overweight or obesity show relatively good mean-level reproducibility but considerable within-subject variability. Poor test-retest reliability reduces the likelihood of observing true correlations and increases the necessary sample sizes for studies. © 2016 The Obesity Society.
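
    Test-retest reliability of the kind assessed here is often quantified as a correlation between day-1 and day-2 responses, then interpreted against conventional cutoffs. A minimal sketch using Pearson's r with Cicchetti-style category labels; the region-of-interest values are invented, and the abstract does not specify which reliability coefficient the authors used.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between day-1 and day-2 measurements."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def label(r):
    # One common rubric (Cicchetti-style cutoffs), applied loosely here.
    if r < 0.40:
        return "poor"
    if r < 0.60:
        return "fair"
    if r < 0.75:
        return "good"
    return "excellent"

# Invented per-participant responses for one ROI on the two test days:
# weakly related across days, as with the "poor" reliabilities reported.
day1 = [0.8, 0.5, 1.2, 0.3, 0.9]
day2 = [0.6, 0.9, 0.7, 0.5, 0.4]
print(label(pearson_r(day1, day2)))
```

    The abstract's point about sample sizes follows directly: attenuation by unreliability shrinks observable correlations, so a noisier measure needs more participants to detect the same true effect.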

  12. Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions.

    Science.gov (United States)

    Lithari, C; Frantzidis, C A; Papadelis, C; Vivas, Ana B; Klados, M A; Kourtidou-Papadeli, C; Pappas, C; Ioannides, A A; Bamidis, P D

    2010-03-01

Men and women seem to process emotions and react to them differently. Yet, few neurophysiological studies have systematically investigated gender differences in emotional processing. Here, we studied gender differences using Event Related Potentials (ERPs) and Skin Conductance Responses (SCR) recorded from participants who passively viewed emotional pictures selected from the International Affective Picture System (IAPS). The arousal and valence dimension of the stimuli were manipulated orthogonally. The peak amplitude and peak latency of ERP components and SCR were analyzed separately, and the scalp topographies of significant ERP differences were documented. Females responded with enhanced negative components (N100 and N200), in comparison to males, especially to the unpleasant visual stimuli, whereas both genders responded faster to high arousing or unpleasant stimuli. Scalp topographies revealed more pronounced gender differences on central and left hemisphere areas. Our results suggest a difference in the way emotional stimuli are processed by the two genders: unpleasant and high arousing stimuli evoke greater ERP amplitudes in women relative to men. It also seems that unpleasant or high arousing stimuli are temporally prioritized during visual processing by both genders.

  13. Attentional bias for positive emotional stimuli: A meta-analytic investigation.

    Science.gov (United States)

    Pool, Eva; Brosch, Tobias; Delplanque, Sylvain; Sander, David

    2016-01-01

Despite an initial focus on negative threatening stimuli, researchers have more recently expanded the investigation of attentional biases toward positive rewarding stimuli. The present meta-analysis systematically compared attentional bias for positive compared with neutral visual stimuli across 243 studies (N = 9,120 healthy participants) that used different types of attentional paradigms and positive stimuli. Factors were tested that, as postulated by several attentional models derived from theories of emotion, might modulate this bias. Overall, results showed a significant, albeit modest (Hedges' g = .258), attentional bias for positive as compared with neutral stimuli. Moderator analyses revealed that the magnitude of this attentional bias varied as a function of arousal and that this bias was significantly larger when the emotional stimulus was relevant to specific concerns (e.g., hunger) of the participants compared with other positive stimuli that were less relevant to the participants' concerns. Moreover, the moderator analyses showed that attentional bias for positive stimuli was larger in paradigms that measure early, rather than late, attentional processing, suggesting that attentional bias for positive stimuli occurs rapidly and involuntarily. Implications for theories of emotion and attention are discussed. (c) 2015 APA, all rights reserved.
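
    The effect size reported above, Hedges' g, is Cohen's d with a small-sample bias correction. A minimal sketch using the standard formulas (pooled SD and the approximate correction factor J); the attention-bias scores in the example are hypothetical, not drawn from the meta-analysis.

```python
from statistics import mean, variance

def hedges_g(group1, group2):
    """Hedges' g: Cohen's d with the small-sample bias correction J."""
    n1, n2 = len(group1), len(group2)
    # Pooled standard deviation from the two sample variances.
    pooled_sd = (((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2))
                 / (n1 + n2 - 2)) ** 0.5
    d = (mean(group1) - mean(group2)) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # approximate correction factor
    return j * d

# Hypothetical dwell times (ms) for positive vs neutral stimuli.
positive = [24.0, 31.0, 18.0, 27.0, 22.0]
neutral = [20.0, 25.0, 17.0, 23.0, 19.0]
print(f"g = {hedges_g(positive, neutral):.3f}")
```

    By the usual benchmarks (g around .2 small, .5 medium, .8 large), the meta-analytic bias of g = .258 reported above is a small effect, consistent with the authors calling it "modest."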

  14. Eye structure, activity rhythms and visually-driven behavior are tuned to visual niche in ants

    Directory of Open Access Journals (Sweden)

    Ayse eYilmaz

    2014-06-01

    Full Text Available Insects have evolved physiological adaptations and behavioural strategies that allow them to cope with a broad spectrum of environmental challenges and contribute to their evolutionary success. Visual performance plays a key role in this success. Correlates between life style and eye organization have been reported in various insect species. Yet, if and how visual ecology translates effectively into different visual discrimination and learning capabilities has been less explored. Here we report results from optical and behavioural analyses performed in two sympatric ant species, Formica cunicularia and Camponotus aethiops. We show that the former are diurnal while the latter are cathemeral. Accordingly, F. cunicularia workers present compound eyes with higher resolution, while C. aethiops workers exhibit eyes with lower resolution but higher sensitivity. The discrimination and learning of visual stimuli differs significantly between these species in controlled dual-choice experiments: discrimination learning of small-field visual stimuli is achieved by F. cunicularia but not by C. aethiops, while both species master the discrimination of large-field visual stimuli. Our work thus provides a paradigmatic example about how timing of foraging activities and visual environment match the organization of compound eyes and visually-driven behaviour. This correspondence underlines the relevance of an ecological/evolutionary framework for analyses in behavioural neuroscience.

  15. Beyond arousal and valence: the importance of the biological versus social relevance of emotional stimuli.

    Science.gov (United States)

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-03-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.

  16. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
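
    The decoding logic described above (train a linear readout on responses to isolated objects, then apply it to two-object responses) can be sketched on synthetic data. This is an illustrative template-matching linear decoder, not the authors' analysis; the response dimensionality, noise level, and trial counts are all assumptions.

```python
import random

random.seed(1)
DIM, OBJECTS, NOISE = 50, 3, 0.1

# Synthetic "neural response" prototypes, one per isolated object
# (purely invented stand-ins for field-potential patterns).
prototypes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(OBJECTS)]

def respond(present, noise=NOISE):
    """Response to an image: mean of the present objects' prototypes plus noise."""
    return [sum(prototypes[k][i] for k in present) / len(present)
            + random.gauss(0, noise) for i in range(DIM)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# "Train" the linear decoder on isolated-object responses: one weight
# vector per object, here simply the mean training response (a template).
weights = [[sum(trial[i] for trial in trials) / len(trials) for i in range(DIM)]
           for trials in ([respond([k]) for _ in range(10)] for k in range(OBJECTS))]

# Decode two-object images: the two highest-scoring templates should be
# the two objects actually present in the image.
hits, tests = 0, 0
for a in range(OBJECTS):
    for b in range(a + 1, OBJECTS):
        for _ in range(20):
            scores = [dot(respond([a, b]), w) for w in weights]
            top2 = sorted(range(OBJECTS), key=scores.__getitem__)[-2:]
            hits += set(top2) == {a, b}
            tests += 1
print(f"two-object decoding accuracy: {hits / tests:.2f}")
```

    Because the two-object response is modeled as roughly the average of the single-object responses, a readout trained only on isolated objects transfers to two-object images, which is the "robust selectivity" point the abstract makes.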

  17. Visual memory errors in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    The occurrences of visual hallucinations seem to be more prevalent in low light and hallucinators tend to be more prone to false positive type errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants, and if so whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls, on a visual memory task where color and black and white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black and white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, those who present with visual hallucinations appear to be more heavily reliant on bottom up sensory input and impaired on spatial ability.

  18. Visualization of Social Networks

    NARCIS (Netherlands)

    Boertjes, E.M.; Kotterink, B.; Jager, E.J.

    2011-01-01

Current visualizations of social networks are mostly some form of node-link diagram. Depending on the type of social network, this can be a tree visualization with a strict hierarchical structure or a more generic network visualization.

  19. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search

    OpenAIRE

    Kunar, Melina A.; Watson, Derrick G.; Cole, Louise (Researcher in Psychology); Cox, Angeline

    2014-01-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or...

  20. Music Influences Ratings of the Affect of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Waldie E Hanser

    2013-09-01

Full Text Available This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are offered. Music is important in our daily lives, and one of its primary uses by listeners is the active regulation of one's mood. Despite this widespread use as a regulator of mood and its general pervasiveness in our society, the number of studies investigating the issue of whether, and how, music affects mood and emotional behaviour is limited, however. Experiments investigating the effects of music have generally focused on how the emotional valence of background music impacts how affective pictures and/or film clips are evaluated. These studies have demonstrated strong effects of music on the emotional judgment of such stimuli. Most studies have reported concurrent background music to enhance the emotional valence when music and pictures are emotionally congruent. On the other hand, when music and pictures are emotionally incongruent, the ratings of the affect of the pictures will increase or decrease depending on the emotional valence of the background music. These results appear to be consistent in studies investigating the effects of (background) music.

  1. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.

  2. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    Science.gov (United States)

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  3. Attending to and remembering tactile stimuli: a review of brain imaging data and single-neuron responses.

    Science.gov (United States)

    Burton, H; Sinclair, R J

    2000-11-01

    Clinical and neuroimaging observations of the cortical network implicated in tactile attention have identified foci in parietal somatosensory, posterior parietal, and superior frontal locations. Tasks involving intentional hand-arm movements activate similar or nearby parietal and frontal foci. Visual spatial attention tasks and deliberate visuomotor behavior also activate overlapping posterior parietal and frontal foci. Studies in the visual and somatosensory systems thus support a proposal that attention to the spatial location of an object engages cortical regions responsible for the same coordinate referents used for guiding purposeful motor behavior. Tactile attention also biases processing in the somatosensory cortex through amplification of responses to relevant features of selected stimuli. Psychophysical studies demonstrate retention gradients for tactile stimuli like those reported for visual and auditory stimuli, and suggest analogous neural mechanisms for working memory across modalities. Neuroimaging studies in humans using memory tasks, and anatomic studies in monkeys support the idea that tactile information relayed from the somatosensory cortex is directed ventrally through the insula to the frontal cortex for short-term retention and to structures of the medial temporal lobe for long-term encoding. At the level of single neurons, tactile (such as visual and auditory) short-term memory appears as a persistent response during delay intervals between sampled stimuli.

  4. Teaching children with autism spectrum disorder to tact olfactory stimuli.

    Science.gov (United States)

    Dass, Tina K; Kisamore, April N; Vladescu, Jason C; Reeve, Kenneth F; Reeve, Sharon A; Taylor-Santa, Catherine

    2018-05-28

    Research on tact acquisition by children with autism spectrum disorder (ASD) has often focused on teaching participants to tact visual stimuli. It is important to evaluate procedures for teaching tacts of nonvisual stimuli (e.g., olfactory, tactile). The purpose of the current study was to extend the literature on secondary target instruction and tact training by evaluating the effects of a discrete-trial instruction procedure involving (a) echoic prompts, a constant prompt delay, and error correction for primary targets; (b) inclusion of secondary target stimuli in the consequent portion of learning trials; and (c) multiple exemplar training on the acquisition of item tacts of olfactory stimuli, emergence of category tacts of olfactory stimuli, generalization of category tacts, and emergence of category matching, with three children diagnosed with ASD. Results showed that all participants learned the item and category tacts following teaching, participants demonstrated generalization across category tacts, and category matching emerged for all participants. © 2018 Society for the Experimental Analysis of Behavior.

  5. Increasing Valid Profiles in Phallometric Assessment of Sex Offenders with Child Victims: Combining the Strengths of Audio Stimuli and Synthetic Characters.

    Science.gov (United States)

    Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice

    2018-02-01

    Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To palliate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants in their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
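
    The group classification reported above rests on the area under the ROC curve (AUC). As an illustrative sketch (the index values below are made up, not the study's data), the AUC can be computed directly as the probability that a randomly chosen offender-group index exceeds a randomly chosen control-group index:

```python
from itertools import product

def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly drawn positive-group score
    exceeds a randomly drawn negative-group score (ties count 0.5).
    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg."""
    wins = 0.0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1.0
        elif p == n:
            wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical sexual-interest indices: higher values are expected
# for the offender group than for the non-offender group.
offenders = [0.9, 0.7, 0.8, 0.4, 0.6]
controls = [0.2, 0.5, 0.1, 0.3, 0.4]
print(roc_auc(offenders, controls))  # 0.94
```

    An AUC of 0.5 corresponds to chance-level classification; the values around .81 to .84 reported above indicate good discrimination for all three modalities.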

  6. Hierarchical Linked Views

    Energy Technology Data Exchange (ETDEWEB)

    Erbacher, Robert; Frincke, Deb

    2007-07-02

    Coordinated views have proven critical to the development of effective visualization environments, because a single view or representation of the data cannot show all of the intricacies of a given data set. Additionally, users will often need to correlate more data parameters than can effectively be integrated into a single visual display. Typically, development of multiple linked views results in an ad hoc configuration of views and associated interactions. The hierarchical model we are proposing is geared towards more effective organization of such environments and the views they encompass. At the same time, this model can effectively integrate much of the prior work on interactive and visual frameworks. Additionally, we expand the concept of views to incorporate perceptual views, reflecting the fact that visual displays can have information encoded at various levels of focus: a global view of the display provides overall trends of the data, while focusing in on individual elements provides detailed specifics. By integrating interaction and perception into a single model, we show how one impacts the other. Typically, interaction and perception are considered separately; however, when interaction is considered at a fundamental level and allowed to direct or modify the visualization directly, we must consider them simultaneously and examine how they impact one another.
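
    A rough sketch of the kind of organization the abstract describes (all class and method names here are hypothetical, not taken from the paper): in a hierarchy of linked views, a selection made in any one view is routed to the root and broadcast back down, so every view stays coordinated.

```python
class View:
    """One node in a hierarchy of linked views (illustrative only)."""
    def __init__(self, name, children=()):
        self.name = name
        self.parent = None
        self.children = list(children)
        for child in self.children:
            child.parent = self
        self.selection = frozenset()

    def select(self, items):
        # Route the selection to the root, then broadcast it downwards,
        # so sibling views reflect the same selection.
        node = self
        while node.parent is not None:
            node = node.parent
        node._apply(frozenset(items))

    def _apply(self, items):
        self.selection = items
        for child in self.children:
            child._apply(items)

# A selection made in the scatter view is mirrored by its sibling.
scatter, timeline = View("scatter"), View("timeline")
root = View("overview", children=[scatter, timeline])
scatter.select({"host-17", "host-42"})
```

    Real systems would translate the shared selection into each view's own visual encoding rather than store it verbatim; the point is only the coordinated propagation through the hierarchy.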

  7. Visual attention

    NARCIS (Netherlands)

    Evans, K.K.; Horowitz, T.S.; Howe, P.; Pedersini, R.; Reijnen, E.; Pinto, Y.; Wolfe, J.M.

    2011-01-01

    A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term ‘visual attention’ describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act

  8. Aesthetic Perception of Visual Textures: A Holistic Exploration using Texture Analysis, Psychological Experiment and Perception Modeling

    Directory of Open Access Journals (Sweden)

    Jianli eLiu

    2015-11-01

    Modeling human aesthetic perception of visual textures is important and valuable in numerous industrial domains, such as product design, architectural design, and decoration. Based on results from a semantic differential rating experiment, we modeled the relationship between low-level basic texture features and the aesthetic properties involved in human aesthetic texture perception. First, we compute basic texture features from textural images using four classical methods. These features are neutral, objective, and independent of the socio-cultural context of the visual textures. Then, we conduct a semantic differential rating experiment to collect from evaluators their aesthetic perceptions of selected textural stimuli. In the semantic differential rating experiment, eight pairs of aesthetic properties are chosen, which are strongly related to the socio-cultural context of the selected textures and to human emotions. They are easily understood and connected to everyday life. We propose a hierarchical feed-forward layer model of aesthetic texture perception and assign the eight pairs of aesthetic properties to different layers. Finally, we describe the generation of multiple linear and nonlinear regression models for aesthetic prediction by taking dimensionality-reduced texture features and aesthetic properties of visual textures as independent and dependent variables, respectively. Our experimental results indicate that the relationships between each layer and its neighbors in the hierarchical feed-forward layer model of aesthetic texture perception can be fitted well by linear functions, and the models thus generated can successfully bridge the gap between computational texture features and aesthetic texture properties.
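
    The final step, regressing aesthetic ratings on texture features, can be illustrated with ordinary least squares for a single predictor. The data below are invented for illustration; the paper uses several dimensionality-reduced features per layer.

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor:
    y ≈ a + b*x, with b = cov(x, y) / var(x) and a = mean(y) - b*mean(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    b = cov / var
    a = my - b * mx
    return a, b

# Hypothetical data: one computed texture feature (e.g. coarseness)
# against mean semantic-differential ratings on one aesthetic scale.
feature = [0.1, 0.3, 0.5, 0.7, 0.9]
rating = [1.2, 1.9, 3.1, 3.8, 5.0]
a, b = fit_line(feature, rating)
```

    The same idea extends to multiple features by solving the normal equations, which is what a multiple linear regression model does.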

  9. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective measure of task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty about its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independently of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.
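
    The signal detection theory quantities above can be computed from hit and false-alarm rates under the standard equal-variance Gaussian model. The rates below are hypothetical, chosen so that sensitivity is unchanged while the criterion becomes more liberal, mirroring the noise-stimulus result:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' = z(H) - z(F);
    decision criterion c = -(z(H) + z(F)) / 2. A more negative
    (liberal) c means a stronger tendency to report 'seen'."""
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Hypothetical attended vs. unattended trials for the noise stimuli.
d_att, c_att = dprime_criterion(0.80, 0.30)   # attended
d_un, c_un = dprime_criterion(0.70, 0.20)     # unattended
```

    With these rates, d' is identical in both conditions while c shifts from positive (conservative) to negative (liberal) under attention: awareness increases without a change in objective sensitivity.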

  10. Neural Mechanisms of Selective Visual Attention.

    Science.gov (United States)

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  11. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    Science.gov (United States)

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
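
    The coupling ratio n = %ΔCBF/%ΔCMRO2 can be recovered by inverting the Davis model, given the fractional BOLD signal change and the measured CBF ratio. The sketch below uses the model's standard form with assumed typical parameter values for M, α, and β; the input numbers are illustrative, not the study's:

```python
def cmro2_ratio(dbold, f, M=0.08, alpha=0.38, beta=1.5):
    """Invert the Davis model
        dBOLD/BOLD0 = M * (1 - f**(alpha - beta) * r**beta)
    for r = CMRO2/CMRO2_0, given the fractional BOLD change dbold and
    the CBF ratio f = CBF/CBF0. M, alpha, beta are assumed typical
    values (calibration constant, Grubb exponent, BOLD exponent)."""
    return ((1 - dbold / M) * f ** (beta - alpha)) ** (1 / beta)

# Hypothetical responses: a 1% BOLD change with a 40% CBF increase.
f = 1.40
r = cmro2_ratio(0.01, f)
n = (f - 1) / (r - 1)   # flow-metabolism coupling ratio %dCBF/%dCMRO2
```

    With these illustrative inputs the inversion yields a CMRO2 increase of roughly 18% and a coupling ratio n in the commonly reported range of about 2 to 3.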

  12. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    Science.gov (United States)

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images while their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.
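
    The two dependent measures reported, number of fixations and total time per scene region, can be computed from a fixation list in a few lines. The region map and fixation data below are hypothetical:

```python
def aoi_stats(fixations, region_of):
    """Number of fixations and total dwell time per region of interest.
    `fixations` is a list of (x, y, duration_ms) tuples; `region_of`
    maps a coordinate to a region label such as 'body' or 'background'."""
    stats = {}
    for x, y, dur in fixations:
        region = region_of(x, y)
        count, total = stats.get(region, (0, 0))
        stats[region] = (count + 1, total + dur)
    return stats

# Hypothetical fixation list; a toy region map splits the screen in half.
fixes = [(100, 200, 250), (120, 210, 300), (600, 300, 180)]
stats = aoi_stats(fixes, lambda x, y: "body" if x < 400 else "background")
```

    In practice regions would be polygons drawn on each scene rather than a screen split, but the aggregation logic is the same.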

  13. An event-related brain potential study of visual selective attention to conjunctions of color and shape.

    Science.gov (United States)

    Smid, H G; Jakob, A; Heinze, H J

    1999-03-01

    What cognitive processes underlie event-related brain potential (ERP) effects related to visual multidimensional selective attention and how are these processes organized? We recorded ERPs when participants attended to one conjunction of color, global shape and local shape and ignored other conjunctions of these attributes in three discriminability conditions. Attending to color and shape produced three ERP effects: frontal selection positivity (FSP), central negativity (N2b), and posterior selection negativity (SN). The results suggested that the processes underlying SN and N2b perform independent within-dimension selections, whereas the process underlying the FSP performs hierarchical between-dimension selections. At posterior electrodes, manipulation of discriminability changed the ERPs to the relevant but not to the irrelevant stimuli, suggesting that the SN does not concern the selection process itself but rather a cognitive process initiated after selection is finished. Other findings suggested that selection of multiple visual attributes occurs in parallel.

  14. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Science.gov (United States)

    Born, Jannis; Galeazzi, Juan M; Stringer, Simon M

    2017-01-01

    A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localised hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centred neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centred visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behaviour of Hebbian learning
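
    A minimal sketch of the kind of learning rule the abstract describes: a purely Hebbian, winner-take-all update with weight renormalization, in which the spatial overlap between successive shifted input patterns leads the same output unit to keep responding across locations. This toy network is not VisNet; all sizes and values are illustrative:

```python
def hebbian_step(w, x, alpha=0.1):
    """One purely Hebbian update (no memory trace): the winning output
    unit's weights grow in proportion to pre * post activity and are then
    renormalized. Overlap between successive shifted images is what lets
    the same unit keep winning (continuous transformation learning)."""
    y = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]  # feedforward
    top = max(range(len(y)), key=lambda i: y[i])               # winner-take-all
    row = [wi + alpha * xi for wi, xi in zip(w[top], x)]       # Hebbian growth
    norm = sum(v * v for v in row) ** 0.5
    w[top] = [v / norm for v in row]                           # renormalize
    return top

# Two overlapping "retinal" patterns of the same hand-object configuration
# at slightly shifted locations; the overlap makes the same unit win twice.
w = [[0.6, 0.8, 0.0, 0.0], [0.0, 0.0, 0.3, 0.2]]
a = hebbian_step(w, [1, 1, 1, 0])   # pattern at location 1
b = hebbian_step(w, [0, 1, 1, 1])   # shifted pattern, overlapping in the middle
```

    Because the first update strengthened the winner's weight onto the shared middle inputs, the shifted pattern activates the same unit, which is the essence of invariance learning without a trace term.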

  16. High-intensity Erotic Visual Stimuli De-activate the Primary Visual Cortex in Women

    NARCIS (Netherlands)

    Huynh, Hieu K.; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-01-01

    Introduction. The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the

  18. Observer's Mood Manipulates Level of Visual Processing: Evidence from Face and Nonface Stimuli

    Directory of Open Access Journals (Sweden)

    Setareh Mokhtari

    2011-05-01

    For investigating the effect of observer's mood on the level of processing of visual stimuli, a happy or sad mood was induced in two groups of participants by asking them to dwell on one of their sad or happy memories while listening to a congruent piece of music. This was followed by a computer-based task that required counting some features (arcs or lines) of emotional schematic faces (with either sad or happy expressions) for group 1, and counting the same features of meaningless combined shapes for group 2. Reaction time analysis indicated a significant difference in RTs after listening to the sad music compared with the happy music for group 1; participants in sad moods were significantly slower when they worked on local levels of schematic faces with sad expressions. Happy moods did not show any specific effect on the reaction times of participants who were working on local details of emotionally expressive faces. Sad or happy moods had no significant effect on reaction times when working on parts of meaningless shapes. It seems that sad mood, as a contextual factor, elevates the ability of a sad expression to grab attention and block fast access to the local parts of holistic meaningful shapes.

  19. Amygdala activity related to enhanced memory for pleasant and aversive stimuli.

    Science.gov (United States)

    Hamann, S B; Ely, T D; Grafton, S T; Kilts, C D

    1999-03-01

    Pleasant or aversive events are better remembered than neutral events. Emotional enhancement of episodic memory has been linked to the amygdala in animal and neuropsychological studies. Using positron emission tomography, we show that bilateral amygdala activity during memory encoding is correlated with enhanced episodic recognition memory for both pleasant and aversive visual stimuli relative to neutral stimuli, and that this relationship is specific to emotional stimuli. Furthermore, data suggest that the amygdala enhances episodic memory in part through modulation of hippocampal activity. The human amygdala seems to modulate the strength of conscious memory for events according to emotional importance, regardless of whether the emotion is pleasant or aversive.

  20. Enhanced early visual processing in response to snake and trypophobic stimuli

    NARCIS (Netherlands)

    J.W. van Strien (Jan); Van der Peijl, M.K. (Manja K.)

    2018-01-01

    Background: Trypophobia refers to aversion to clusters of holes. We investigated whether trypophobic stimuli evoke augmented early posterior negativity (EPN). Methods: Twenty-four participants filled out a trypophobia questionnaire and watched the random rapid serial presentation of 450

  1. Interpretative bias in spider phobia: Perception and information processing of ambiguous schematic stimuli.

    Science.gov (United States)

    Haberkamp, Anke; Schmidt, Filipp

    2015-09-01

    This study investigates the interpretative bias in spider phobia with respect to rapid visuomotor processing. We compared perception, evaluation, and visuomotor processing of ambiguous schematic stimuli between spider-fearful and control participants. Stimuli were produced by gradually morphing schematic flowers into spiders. Participants rated these stimuli with respect to their perceptual appearance and to their feelings of valence, disgust, and arousal. Also, they responded to the same stimuli within a response priming paradigm that measures rapid motor activation. Spider-fearful individuals showed an interpretative bias (i.e., ambiguous stimuli were perceived as more similar to spiders) and rated spider-like stimuli as more unpleasant, disgusting, and arousing. However, we observed no differences between spider-fearful and control participants in priming effects for ambiguous stimuli. For non-ambiguous stimuli, we observed a similar enhancement for phobic pictures as has been reported previously for natural images. We discuss our findings with respect to the visual representation of morphed stimuli and to perceptual learning processes. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder: A Meta-Analysis.

    Science.gov (United States)

    Kaiser, Deborah; Jacob, Gitta A; Domes, Gregor; Arntz, Arnoud

    2016-01-01

    In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component in disorder pathogenesis and maintenance. Eleven emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs), and 95 clinical controls, and four visual dot-probe task (VDPT) studies with 151 BPD patients or subjects with BPD features and 62 NPs were included. We conducted two separate meta-analyses for AB in BPD. One meta-analysis focused on the EST for generally negative and BPD-specific/personally relevant negative words. The other meta-analysis concentrated on the VDPT for negative and positive facial stimuli. There is evidence for an AB towards generally negative emotional words compared to NPs (standardized mean difference, SMD = 0.311) and to other psychiatric disorders (SMD = 0.374) in the EST studies. Regarding BPD-specific/personally relevant negative words, BPD patients reveal an even stronger AB than NPs (SMD = 0.454). The VDPT studies indicate a tendency towards an AB to positive facial stimuli but not negative stimuli in BPD patients compared to NPs. The findings reflect an AB in BPD to generally negative and BPD-specific/personally relevant negative words rather than towards facial stimuli, and/or a biased allocation of covert attentional resources to negative emotional stimuli rather than a bias in the focus of visual attention. Further research regarding the role of childhood traumatization and comorbid anxiety disorders may improve the understanding of these underlying processes. © 2016 The Author(s) Published by S. Karger AG, Basel.
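
    The standardized mean differences (SMDs) reported above are Cohen's-d-style effect sizes: a group difference divided by a pooled standard deviation. A minimal sketch with invented group summaries (not the meta-analysis data):

```python
def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d with a pooled SD):
    d = (m1 - m2) / sqrt(((n1-1)*sd1^2 + (n2-1)*sd2^2) / (n1+n2-2))."""
    pooled = (((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
              / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled

# Hypothetical EST interference scores (ms): BPD group vs. nonpatients.
d = smd(m1=55.0, sd1=40.0, n1=30, m2=40.0, sd2=38.0, n2=30)
```

    Values around 0.3 to 0.45, as reported above, are conventionally read as small-to-moderate effects; meta-analyses then pool such per-study SMDs weighted by their precision.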

  3. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. On the other hand, the delay of 120 ms corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which workers must extract relevant speech from all the other competing noises.

  4. Visual fatigue while watching 3D stimuli from different positions

    Directory of Open Access Journals (Sweden)

    J. Antonio Aznar-Casanova

    2017-07-01

    Conclusion: These results support a mixed model, combining a model based on the visual angle (related to viewing distance) and another based on oculomotor imbalance (related to visual direction). This mixed model could help to predict the distribution of seats in a cinema, ranging from those that produce greater visual comfort to those that produce more visual discomfort. It could also be a first step towards pre-diagnosis of binocular vision disorders.

  5. Hierarchical imaging of the human knee

    Science.gov (United States)

    Schulz, Georg; Götz, Christian; Deyhle, Hans; Müller-Gerbl, Magdalena; Zanette, Irene; Zdora, Marie-Christine; Khimchenko, Anna; Thalmann, Peter; Rack, Alexander; Müller, Bert

    2016-10-01

    Among the clinically relevant imaging techniques, computed tomography (CT) reaches the best spatial resolution. Sub-millimeter voxel sizes are regularly obtained. For investigations on the true micrometer level, lab-based μCT has become the gold standard. The aim of the present study is the hierarchical investigation of a human knee post mortem using hard X-ray μCT. After visualization of the entire knee using a clinical CT with a spatial resolution in the sub-millimeter range, a hierarchical imaging study was performed using a laboratory μCT system, nanotom m. Due to the size of the whole knee, the pixel length could not be reduced below 65 μm. These first two data sets were directly compared after a rigid registration using a cross-correlation algorithm. The μCT data set allowed an investigation of the trabecular structures of the bones. A further reduction of the pixel length down to 25 μm could be achieved by removing the skin and soft tissues and measuring the tibia and the femur separately. True micrometer resolution could be achieved after extracting cylinders of several millimeters diameter from the two bones. The high-resolution scans revealed the mineralized cartilage zone, including the tide mark line, as well as individual calcified chondrocytes. The visualization of soft tissues, including cartilage, was carried out by X-ray grating interferometry (XGI) at the ESRF and the Diamond Light Source. Whereas the high-energy measurements at the ESRF allowed the simultaneous visualization of soft and hard tissues, the low-energy results from the Diamond Light Source made individual chondrocytes within the cartilage visible.
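
    Rigid registration by cross-correlation, as used to align the clinical CT and μCT data sets, can be illustrated in one dimension: slide one intensity profile over the other and keep the integer shift with the highest correlation. The profiles below are toy data, not scan values:

```python
def best_shift(a, b, max_shift):
    """Return the integer offset of profile `b` relative to `a` that
    maximizes their cross-correlation -- a 1-D toy analogue of rigid
    registration of two image volumes."""
    def xcorr(shift):
        return sum(a[i] * b[i - shift]
                   for i in range(len(a)) if 0 <= i - shift < len(b))
    return max(range(-max_shift, max_shift + 1), key=xcorr)

# A profile and a copy of it shifted right by 2 samples.
a = [0, 0, 1, 3, 7, 3, 1, 0, 0]
b = [1, 3, 7, 3, 1, 0, 0, 0, 0]
```

    Real volume registration searches over 3-D translations (and rotations) and typically normalizes the correlation, but the objective is the same peak-finding.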

  6. Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa.

    Science.gov (United States)

    Boehm, I; King, J A; Bernardoni, F; Geisler, D; Seidel, M; Ritschel, F; Goschke, T; Haynes, J-D; Roessner, V; Ehrlich, S

    2018-04-01

    Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN.

  7. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities of discrete sensory events.

  8. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response … for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real … system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations.

  9. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  10. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model for reducing a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
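
    The two-level decoder described in this record reduces to two stacked soft-attention steps: attend over the frame features within each temporal segment, then attend over the resulting segment summaries to form a global video context. A minimal sketch in plain Python follows (dot-product scoring, the segment/frame structure, and all names are illustrative assumptions, not the authors' implementation):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors, query):
    # Dot-product attention: score each vector against the query,
    # then return the attention-weighted sum of the vectors.
    scores = [sum(v * q for v, q in zip(vec, query)) for vec in vectors]
    weights = softmax(scores)
    dim = len(vectors[0])
    return [sum(w * vec[d] for w, vec in zip(weights, vectors))
            for d in range(dim)]

def hierarchical_attention(segments, query):
    # Level 1: summarise each segment (a list of frame feature
    # vectors) into one context vector. Level 2: attend over the
    # segment summaries to build a single global video context.
    segment_contexts = [attend(frames, query) for frames in segments]
    return attend(segment_contexts, query)
```

Calling `hierarchical_attention(segments, query)` with per-segment lists of frame feature vectors returns a single context vector; in the full model, such a context would condition an LSTM decoder that emits the description word by word.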

  11. Visual plasticity : Blindsight bridges anatomy and function in the visual system

    NARCIS (Netherlands)

    Tamietto, M.; Morrone, M.C.

    2016-01-01

    Some people who are blind due to damage to their primary visual cortex, V1, can discriminate stimuli presented within their blind visual field. This residual function has been recently linked to a pathway that bypasses V1, and connects the thalamic lateral geniculate nucleus directly with the

  12. Contextual effects in visual working memory reveal hierarchically structured memory representations.

    Science.gov (United States)

    Brady, Timothy F; Alvarez, George A

    2015-01-01

    Influential slot and resource models of visual working memory make the assumption that items are stored in memory as independent units, and that there are no interactions between them. Consequently, these models predict that the number of items to be remembered (the set size) is the primary determinant of working memory performance, and therefore these models quantify memory capacity in terms of the number and quality of individual items that can be stored. Here we demonstrate that there is substantial variance in display difficulty within a single set size, suggesting that limits based on the number of individual items alone cannot explain working memory storage. We asked hundreds of participants to remember the same sets of displays, and discovered that participants were highly consistent in terms of which items and displays were hardest or easiest to remember. Although a simple grouping or chunking strategy could not explain this individual-display variability, a model with multiple, interacting levels of representation could explain some of the display-by-display differences. Specifically, a model that includes a hierarchical representation of items plus the mean and variance of sets of the colors on the display successfully accounts for some of the variability across displays. We conclude that working memory representations are composed only in part of individual, independent object representations, and that a major factor in how many items are remembered on a particular display is interitem representations such as perceptual grouping, ensemble, and texture representations.
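
    One way to make the hierarchical account above concrete is to model recall of a single item as a blend of the stored item value and the display's ensemble mean. The sketch below is a toy illustration; the blend weight is a hypothetical parameter, not a value fitted in the study:

```python
def hierarchical_recall(colors, item_index, item_weight=0.7):
    """Recall one item's colour as a weighted blend of the stored
    item-level value and the set-level ensemble mean of the display.
    item_weight is an illustrative parameter, not a fitted value."""
    ensemble_mean = sum(colors) / len(colors)
    return (item_weight * colors[item_index]
            + (1 - item_weight) * ensemble_mean)
```

Because recall is pulled toward the ensemble mean, two displays with the same set size can differ in difficulty depending on how tightly their colors cluster, which is one way inter-item structure could produce the display-by-display variability reported above.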

  13. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were submitted to a 2-, 3- or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  14. Convex Clustering: An Attractive Alternative to Hierarchical Clustering

    Science.gov (United States)

    Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth

    2015-01-01

    The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
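
    The convex clustering objective referred to above is typically written as minimizing 0.5*Σi||ui − xi||² + γ Σi<j wij||ui − uj|| over centroids U; as γ grows, centroids fuse and trace out the solution path. The sketch below minimizes a smoothed version of this objective (unit weights, plain gradient descent) purely for illustration; it is not the paper's proximal distance algorithm:

```python
import math

def convex_cluster(points, gamma, steps=500, lr=0.05, eps=1e-6):
    # Minimise 0.5*sum_i ||u_i - x_i||^2 + gamma*sum_{i<j} ||u_i - u_j||
    # by gradient descent on a smoothed objective: each norm ||.|| is
    # replaced by sqrt(||.||^2 + eps) so the gradient exists everywhere.
    n, d = len(points), len(points[0])
    U = [list(p) for p in points]  # centroids start at the data points
    for _ in range(steps):
        # Fidelity term gradient: u_i - x_i
        grads = [[ui - xi for ui, xi in zip(u, x)]
                 for u, x in zip(U, points)]
        # Fusion penalty gradient: pulls each pair of centroids together
        for i in range(n):
            for j in range(i + 1, n):
                diff = [a - b for a, b in zip(U[i], U[j])]
                norm = math.sqrt(sum(c * c for c in diff) + eps)
                for k in range(d):
                    g = gamma * diff[k] / norm
                    grads[i][k] += g
                    grads[j][k] -= g
        for i in range(n):
            for k in range(d):
                U[i][k] -= lr * grads[i][k]
    return U
```

Re-running this with a grid of increasing γ values recovers a crude solution path: centroids of nearby points fuse first, mimicking the dendrogram-like structure that makes convex clustering an attractive alternative to hierarchical clustering.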

  15. Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Michael L Waterston

    Full Text Available BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. METHODOLOGY/PRINCIPAL FINDINGS: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. CONCLUSIONS/SIGNIFICANCE: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.

  16. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  17. Attentional capture by social stimuli in young infants

    Directory of Open Access Journals (Sweden)

    Maxie eGluckman

    2013-08-01

    Full Text Available We investigated the possibility that a range of social stimuli capture the attention of 6-month-old infants when in competition with other non-face objects. Infants viewed a series of six-item arrays in which one target item was a face, body part, or animal as their eye movements were recorded. Stimulus arrays were also processed for relative salience of each item in terms of color, luminance, and amount of contour. Targets were rarely the most visually salient items in the arrays, yet infants’ first looks toward all three target types were above chance, and dwell times for targets exceeded other stimulus types. Girls looked longer at faces than did boys, but there were no sex differences for other stimuli. These results are interpreted in a context of learning to discriminate between different classes of animate stimuli, perhaps in line with affordances for social interaction, and origins of sex differences in social attention.

  18. A case of epilepsy induced by eating or by visual stimuli of food made of minced meat.

    Science.gov (United States)

    Mimura, Naoya; Inoue, Takeshi; Shimotake, Akihiro; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke

    2017-08-31

    We report a 34-year-old woman with eating epilepsy induced not only by eating but also by seeing foods made of minced meat. In her early 20s, she started having simple partial seizures (SPS), experienced as flashbacks and epigastric discomfort, induced by particular foods. When she was 33 years old, she developed SPS, followed by secondarily generalized tonic-clonic seizure (sGTCS), provoked by eating a hot dog and, 6 months later, by merely watching a video of dumplings. We performed video electroencephalogram (EEG) monitoring while she was watching the video of soup dumplings, which most likely caused sGTCS. Ictal EEG showed rhythmic theta activity in the left frontal to mid-temporal area, followed by a generalized seizure pattern. In this patient, seizures were provoked not only by eating particular foods but also by seeing them. This suggests a form of epilepsy involving visual stimuli.

  19. Visual discomfort and depth-of-field

    NARCIS (Netherlands)

    O'Hare, L.; Zhang, T.; Nefs, H.T.; Hibbard, P.B.

    2013-01-01

    Visual discomfort has been reported for certain visual stimuli and under particular viewing conditions, such as stereoscopic viewing. In stereoscopic viewing, visual discomfort can be caused by a conflict between accommodation and convergence cues that may specify different distances in depth.

  20. The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.

    Science.gov (United States)

    van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R

    2018-05-04

    Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  1. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    Science.gov (United States)

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing the object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, or categorical object differences, in visual discrimination tasks, whereas the control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli across eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (rather than 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  2. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Taya, Minoru

    2003-01-01

    Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their service life with high reliability. If an intelligent material is used as an active material in an actuator, it plays a key role in producing dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials in actuators: (1) piezoelectric laminate with FGM microstructure, and (2) ferromagnetic shape memory alloy (FSMA). The advantage of using the FGM piezo laminate is enhanced fatigue life while maintaining large bending displacement, while that of the FSMA is fast actuation combined with large force and stroke capability. Hierarchical modeling of the above active materials is a key design step in optimizing their microstructures to enhance performance. I will briefly discuss hierarchical modeling of these two active materials. For the FGM piezo laminate, we use both a micromechanical model and laminate theory, while for the FSMA, modeling that interfaces the nano-structure, microstructure, and macro-behavior is discussed. (author)

  3. Reversal Negativity and Bistable Stimuli: Attention, Awareness, or Something Else?

    Science.gov (United States)

    Intaite, Monika; Koivisto, Mika; Ruksenas, Osvaldas; Revonsuo, Antti

    2010-01-01

    Ambiguous (or bistable) figures are visual stimuli that have two mutually exclusive perceptual interpretations that spontaneously alternate with each other. Perceptual reversals, as compared with non-reversals, typically elicit a negative difference called reversal negativity (RN), peaking around 250 ms from stimulus onset. The cognitive…

  4. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    Science.gov (United States)

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  5. Emotional conditioning to masked stimuli and modulation of visuospatial attention.

    Science.gov (United States)

    Beaver, John D; Mogg, Karin; Bradley, Brendan P

    2005-03-01

    Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise, or paired with an innocuous tone, in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during both acquisition and expression. Copyright 2005 APA, all rights reserved.

  6. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    Science.gov (United States)

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  7. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain’s ability to assimilate abstract object movements with human motor gestures. In both conditions

  8. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another, more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies.

  9. Exposure to Virtual Social Stimuli Modulates Subjective Pain Reports

    Directory of Open Access Journals (Sweden)

    Jacob M Vigil

    2014-01-01

    Full Text Available BACKGROUND: Contextual factors, including the gender of researchers, influence experimental and patient pain reports. It is currently not known how social stimuli influence pain percepts, nor which types of sensory modalities of communication, such as auditory, visual or olfactory cues associated with person perception and gender processing, produce these effects.

  10. On Utmost Multiplicity of Hierarchical Stellar Systems

    Directory of Open Access Journals (Sweden)

    Gebrehiwot Y. M.

    2016-12-01

    Full Text Available According to theoretical considerations, the multiplicity of hierarchical stellar systems can reach, depending on masses and orbital parameters, several hundred, while observational data confirm the existence of at most septuple (seven-component) systems. In this study, we cross-match the stellar systems of very high multiplicity (six and more components) in modern catalogues of visual double and multiple stars to find among them candidates for hierarchical systems. After cross-matching the catalogues of closer binaries (eclipsing, spectroscopic, etc.), some of their components were found to be binary/multiple themselves, which increases the system's degree of multiplicity. Optical pairs, known from the literature or filtered by the authors, were flagged and excluded from the statistics. We compiled a list of hierarchical systems with potentially very high multiplicity that contains ten objects. Their multiplicity does not exceed 12, and we discuss a number of ways to explain the lack of extremely high multiplicity systems.

  11. Effect of a combination of flip and zooming stimuli on the performance of a visual brain-computer interface for spelling.

    Science.gov (United States)

    Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej

    2018-02-13

Brain-computer interface (BCI) systems allow their users to communicate with the external world by recognizing intention directly from brain activity, without the assistance of the peripheral motor nervous system. The P300 speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character), which is based on apparent motion, suffered from fewer refractory effects; however, it did not improve performance significantly. In addition, a presentation paradigm using a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and yield better BCI performance. To extend this method of stimulus presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimulus presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm obtained significantly higher classification accuracies and bit rates than the conventional flip paradigm (p < 0.01).

  12. Hebbian learning of hand-centred representations in a hierarchical neural network model of the primate visual system.

    Directory of Open Access Journals (Sweden)

    Jannis Born

Full Text Available A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localised hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centred neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centred visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in hand-centred space when tested on novel hand-object configurations that were not explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space.
With the help of principal component analysis, we provide the first theoretical framework that explains the behavior

  13. Peripheral visual response time and visual display layout

    Science.gov (United States)

    Haines, R. F.

    1974-01-01

Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, during passive 70 deg head-up body tilt, under exposure to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under the different conditions.

  14. Visual Meta-Programming Notation

    National Research Council Canada - National Science Library

    Auguston, Mikhail

    2001-01-01

    ...), encapsulation means for hierarchical rules design, two-dimensional data-flow diagrams for rules, visual control constructs for conditionals and iteration, default mapping results to reduce real...

  15. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with and without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed in PD compared to NC, and that responses to audiovisual stimuli were significantly faster than those to unimodal stimuli in both NC and PD. However, audiovisual integration was absent in PD, whereas it did occur in NC. Further analysis showed no significant audiovisual integration in PD with or without cognitive impairment, or in PD with or without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results show that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggest that abnormal audiovisual integration might be a potential early manifestation of PD.
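The race-model analysis referred to above is commonly formalized as Miller's race model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t): redundant-target responses faster than this bound indicate genuine multisensory integration rather than mere statistical facilitation. A minimal sketch with hypothetical reaction-time data (the paper's exact procedure is not given here):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=None):
    """Compare the audiovisual RT distribution against Miller's bound:
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Returns test times and, for each, how far the audiovisual CDF exceeds
    the bound (positive values indicate integration)."""
    if quantiles is None:
        quantiles = np.arange(0.05, 1.0, 0.05)
    ts = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    cdf = lambda rts, t: np.mean(rts <= t)
    viol = np.array([cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t))
                     for t in ts])
    return ts, viol

# Hypothetical reaction times (ms): audiovisual responses are faster
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 50, 200)   # auditory-only trials
rt_v = rng.normal(430, 50, 200)   # visual-only trials
rt_av = rng.normal(360, 45, 200)  # audiovisual trials
ts, viol = race_model_violation(rt_a, rt_v, rt_av)
print(viol.max())  # a positive maximum violates the race model
```

A group whose violations never exceed zero, as reported here for the PD patients, shows no evidence of integration under this test.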

  16. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Science.gov (United States)

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provide insight into the attentional mechanisms associated with such spatial scaling.

  17. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  18. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    Science.gov (United States)

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study, and subjects were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by a user's gazed target can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
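The averaging process described above works because epochs cut time-locked to the gazed stimulus's onsets reinforce its response, while responses to the other, independently timed sequences average out. A toy sketch with simulated data (sampling rate, response shape, and onset timing are all assumptions, not the paper's parameters):

```python
import numpy as np

def average_epochs(signal, onsets, fs, epoch_len_s):
    """Average signal segments time-locked to the given onset times (s).
    Activity locked to these onsets accumulates across epochs; activity
    driven by independently timed sequences averages toward zero."""
    n = int(epoch_len_s * fs)
    epochs = [signal[int(on * fs): int(on * fs) + n]
              for on in onsets if int(on * fs) + n <= len(signal)]
    return np.mean(epochs, axis=0)

# Toy demo: a 3-s response locked to 'gazed' onsets, embedded in noise
fs = 10.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 300, 1 / fs)          # 300 s recording
rng = np.random.default_rng(1)
signal = rng.normal(0, 1, t.size)      # background noise
template = 2.0 * np.exp(-0.5 * ((np.arange(0, 3, 1 / fs) - 1.5) ** 2))
onsets = np.arange(5, 290, 20.0) + rng.uniform(0, 5, 15)  # jittered onsets
for on in onsets:
    i = int(on * fs)
    signal[i:i + template.size] += template   # add response at each onset
avg = average_epochs(signal, onsets, fs, 3.0)
# avg now resembles the response template far more than any single epoch
```

Averaging more epochs suppresses the noise further (roughly as 1/sqrt(N)), which is consistent with accuracy rising once 10 or more epochs are averaged.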

  19. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  20. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  1. Altered processing of visual emotional stimuli in posttraumatic stress disorder: an event-related potential study.

    Science.gov (United States)

    Saar-Ashkenazy, Rotem; Shalev, Hadar; Kanthak, Magdalena K; Guez, Jonathan; Friedman, Alon; Cohen, Jonathan E

    2015-08-30

Patients with posttraumatic stress disorder (PTSD) display abnormal emotional processing and bias towards emotional content. Most neurophysiological studies in PTSD have found higher amplitudes of event-related potentials (ERPs) in response to trauma-related visual content. Here we aimed to characterize brain electrical activity in PTSD subjects in response to non-trauma-related emotion-laden pictures (positive, neutral and negative). A combined behavioral-ERP study was conducted in 14 severe PTSD patients and 14 controls. Response time in PTSD patients was slower compared with that in controls, irrespective of emotional valence. In both PTSD and controls, response time to negative pictures was slower compared with that to neutral or positive pictures. Upon ranking, both control and PTSD subjects similarly discriminated between pictures with different emotional valences. ERP analysis revealed three distinctive components (at ~300, ~600 and ~1000 ms post-stimulus onset) for emotional valence in control subjects. In contrast, PTSD patients displayed a similar brain response across all emotional categories, resembling the response of controls to negative stimuli. We interpret these findings as a brain-circuit response tendency towards negative overgeneralization in PTSD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Timing the impact of literacy on visual processing

    Science.gov (United States)

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  3. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns; the decoding accuracy reflects the discriminability of the brain patterns between the two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, and facilitated the neural representation of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were both enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
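As a rough illustration of the two quantities above, the reproducibility index can be stood in for by the mean pairwise correlation of single-trial voxel patterns within a category, and discriminability by leave-one-out nearest-centroid decoding. The data and both definitions below are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation among single-trial patterns
    (trials x voxels) of one category: higher values mean more similar
    within-class brain patterns. A simple stand-in for the paper's index."""
    c = np.corrcoef(patterns)
    return c[np.triu_indices_from(c, k=1)].mean()

def decode_loo(pa, pb):
    """Leave-one-out nearest-centroid (correlation) decoding accuracy
    between two categories of trial patterns."""
    pats = [(p, 0) for p in pa] + [(p, 1) for p in pb]
    correct = 0
    for i, (p, label) in enumerate(pats):
        cents = []
        for cls in (0, 1):
            rest = np.array([q for j, (q, l) in enumerate(pats)
                             if j != i and l == cls])
            cents.append(rest.mean(axis=0))
        pred = int(np.corrcoef(p, cents[1])[0, 1] > np.corrcoef(p, cents[0])[0, 1])
        correct += (pred == label)
    return correct / len(pats)

# Hypothetical fMRI patterns: congruent audiovisual trials modelled as
# less noisy copies of a category prototype than unimodal trials
rng = np.random.default_rng(2)
proto_old = rng.normal(0, 1, 100)
proto_young = rng.normal(0, 1, 100)
ri_congruent = reproducibility_index(proto_old + rng.normal(0, 0.5, (10, 100)))
ri_unimodal = reproducibility_index(proto_old + rng.normal(0, 2.0, (10, 100)))
acc = decode_loo(proto_old + rng.normal(0, 0.8, (10, 100)),
                 proto_young + rng.normal(0, 0.8, (10, 100)))
print(ri_congruent, ri_unimodal, acc)
```

Under this toy model, the lower-noise ("congruent") trials yield both a higher reproducibility index and better decoding, mirroring the pattern the study reports.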

  4. Action video game players' visual search advantage extends to biologically relevant stimuli.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-07-01

    Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Non-hierarchical Influence of Visual Form, Touch, and Position Cues on Embodiment, Agency, and Presence in Virtual Reality.

    Science.gov (United States)

    Pritchard, Stephen C; Zopf, Regine; Polito, Vince; Kaplan, David M; Williams, Mark A

    2016-01-01

The concept of self-representation is commonly decomposed into three component constructs (sense of embodiment, sense of agency, and sense of presence), and each is typically investigated separately across different experimental contexts. For example, embodiment has been explored in bodily illusions; agency has been investigated in hypnosis research; and presence has been primarily studied in the context of Virtual Reality (VR) technology. Given that each component involves the integration of multiple cues within and across sensory modalities, they may rely on similar underlying mechanisms. However, the degree to which this may be true remains unclear when they are studied independently. As a first step toward addressing this issue, we manipulated a range of cues relevant to these components of self-representation within a single experimental context. Using consumer-grade Oculus Rift VR technology and a new implementation of the Virtual Hand Illusion, we systematically manipulated visual form plausibility, visual-tactile synchrony, and visual-proprioceptive spatial offset to explore their influence on self-representation. Our results show that these cues differentially influence embodiment, agency, and presence. We provide evidence that each type of cue can independently and non-hierarchically influence self-representation, yet none of these cues strictly constrains or gates the influence of the others. We discuss theoretical implications for understanding self-representation as well as practical implications for VR experiment design, including the suitability of consumer-grade VR technology in research settings.

  6. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape-identification task differentially modulates activity in both frontoparietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, frontoparietal regions (i.e., posterior intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after intensive training, we applied transcranial magnetic stimulation over both the visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape-identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS stimulation and a sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role for the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  8. Radial frequency stimuli and sine-wave gratings seem to be processed by distinct contrast brain mechanisms

    Directory of Open Access Journals (Sweden)

    M.L.B. Simas

    2005-03-01

Full Text Available An assumption commonly made in the study of visual perception is that the lower the contrast threshold for a given stimulus, the more sensitive and selective will be the mechanism that processes it. On the basis of this consideration, we investigated contrast thresholds for two classes of stimuli: sine-wave gratings and radial frequency stimuli (i.e., j0 targets, or stimuli modulated by spherical Bessel functions). Employing a suprathreshold summation method, we measured the selectivity of spatial and radial frequency filters using either sine-wave gratings or j0 target contrast profiles at either 1 or 4 cycles per degree of visual angle (cpd) as the test frequencies. Thus, in a forced-choice trial, observers chose between a background spatial (or radial) frequency alone and the given background stimulus plus the test frequency (a 1 or 4 cpd sine-wave grating or radial frequency). Contrary to our expectations, the results showed elevated thresholds (i.e., inhibition) for sine-wave gratings and decreased thresholds (i.e., summation) for radial frequencies when background and test frequencies were identical. This was true for both 1- and 4-cpd test frequencies. This finding suggests that sine-wave gratings and radial frequency stimuli are processed by different quasi-linear systems, one working at low luminance and contrast levels (sine-wave gratings) and the other at high luminance and contrast levels (radial frequency stimuli). We think that this interpretation is consistent with distinct foveal-only and foveal-parafoveal mechanisms involving striate and/or other higher visual areas (i.e., V2 and V4).

  9. Understanding Consumers' In-store Visual Perception

    DEFF Research Database (Denmark)

    Clement, Jesper; Kristensen, Tore; Grønhaug, Kjell

    2013-01-01

It is widely accepted that the human brain has limited capacity for perceptual stimuli, so consumers' visual attention, when searching for a particular product or brand in a grocery store, should be limited by the boundaries of their own perceptual capacity. In this exploratory study, we examine the relationship between abundant in-store stimuli and limited human perceptual capacity. Specifically, we test the influence of package design features on visual attention. Data was collected through two eye-tracking experiments, one in a grocery store using wireless eye-tracking equipment, and another in a lab setting. Findings show that consumers have fragmented visual attention during grocery shopping, and that their visual attention is simultaneously influenced and disrupted by the shelf display. Physical design features such as shape and contrast dominate the initial phase of searching...

  10. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster, by 57 ms, than reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  11. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    Science.gov (United States)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

Objective. The classical ERP-based speller, or P300 speller, is one of the most commonly used paradigms in the field of brain-computer interfaces (BCI). Several alterations to the visual stimulus presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, attention to the useful information about the spatial location of target symbols contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower-row, upper-row, right-column and left-column classifiers. This new feature extraction procedure and classification method were evaluated on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI Competition II (dataset IIb) and BCI Competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single-trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work motivates the search for information in peripheral-stimulation responses to improve the performance of emerging visual ERP-based spellers.
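The combination idea can be sketched as follows: each flashed row or column receives a standard target-versus-non-target score plus an auxiliary "adjacent-to-target" score, and the symbol whose own row/column and neighbours jointly carry the most evidence is selected. The scoring and combination rule below are an illustrative guess with simulated scores, not the paper's SWLDA pipeline:

```python
import numpy as np

def select_symbol(std_scores, adj_scores, n=6):
    """Pick the grid cell with the most combined evidence in an n x n speller.
    std_scores[i]: target evidence for row i (i < n) or column i - n.
    adj_scores[i]: adjacent-to-target evidence for the same flashes."""
    best, best_ev = None, -np.inf
    for r in range(n):
        for c in range(n):
            ev = std_scores[r] + std_scores[n + c]
            for rr in (r - 1, r + 1):          # neighbouring rows
                if 0 <= rr < n:
                    ev += adj_scores[rr]
            for cc in (c - 1, c + 1):          # neighbouring columns
                if 0 <= cc < n:
                    ev += adj_scores[n + cc]
            if ev > best_ev:
                best, best_ev = (r, c), ev
    return best

# Hypothetical flash scores for a target at row 2, column 3
rng = np.random.default_rng(3)
std = rng.normal(0, 0.2, 12)
std[2] += 1.0                  # target row elicits a strong response
std[6 + 3] += 1.0              # target column elicits a strong response
adj = rng.normal(0, 0.2, 12)
adj[[1, 3]] += 0.5             # rows adjacent to row 2
adj[[6 + 2, 6 + 4]] += 0.5     # columns adjacent to column 3
print(select_symbol(std, adj))
```

In a real speller the per-flash scores would come from classifiers applied to ERP features; even this toy version shows how adjacency evidence disambiguates symbols whose row or column scores are noisy.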

  12. Secondary hyperalgesia to heat stimuli after burn injury in man

    DEFF Research Database (Denmark)

    Pedersen, J L; Kehlet, H

    1998-01-01

    The aim of the study was to examine the presence of hyperalgesia to heat stimuli within the zone of secondary hyperalgesia to punctate mechanical stimuli. A burn was produced on the medial part of the non-dominant crus in 15 healthy volunteers with a 50 x 25 mm thermode (47 degrees C, 7 min......), and assessments were made 70 min and 40 min before, and 0, 1, and 2 h after the burn injury. Hyperalgesia to mechanical and heat stimuli was examined by von Frey hairs and contact thermodes (3.75 and 12.5 cm2), and pain responses were rated with a visual analog scale (0-100). The area of secondary hyperalgesia...... to punctate stimuli was assessed with a rigid von Frey hair (462 mN). The heat pain responses to 45 degrees C in 5 s (3.75 cm2) were tested in the area just outside the burn, where the subjects developed secondary hyperalgesia, and on the lateral crus where no subject developed secondary hyperalgesia (control...

  13. Stress improves selective attention towards emotionally neutral left ear stimuli.

    Science.gov (United States)

    Hoskin, Robert; Hunter, M D; Woodruff, P W R

    2014-09-01

    Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    Science.gov (United States)

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly before it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
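    The inter-level constraint amounts to top-down inference over a label tree: an image may only reach a species leaf through a path on which every ancestor also wins at its level. A minimal sketch of that walk, with a made-up two-level tree and hypothetical per-node classifier confidences (not the paper's categories or learned classifiers):

```python
# Coarse-to-fine label tree: parent -> children; leaves are species labels.
tree = {
    "root": ["conifer", "flowering"],
    "conifer": ["pine", "spruce"],
    "flowering": ["rose", "tulip"],
}

def classify(scores, node="root"):
    """Walk the tree greedily: at each non-leaf node pick the highest-scoring
    child, stopping when a leaf (a species label) is reached. This enforces
    the inter-level constraint by construction."""
    children = tree.get(node)
    if not children:  # leaf node: final species label
        return node
    best = max(children, key=lambda c: scores.get(c, float("-inf")))
    return classify(scores, best)

# Hypothetical classifier confidences for one image.
scores = {"conifer": 0.9, "flowering": 0.4, "pine": 0.2, "spruce": 0.7}
print(classify(scores))  # -> spruce
```

    Note how "spruce" is reachable only because "conifer" also won at the coarse level; a high "rose" score alone could never be selected here.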

  15. Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.

    Science.gov (United States)

    Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J

    2017-01-01

    We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of the solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.

  16. Visual Stimuli Induce Waves of Electrical Activity in Turtle Cortex

    Science.gov (United States)

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-07-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low-frequency oscillations, as well as differences in neuronal timing, are present and persistent during visual processing.

  17. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there is a difference, or a relation, in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, the visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores, or on the memory and sequencing spans. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Working memory biasing of visual perception without awareness.

    Science.gov (United States)

    Pan, Yi; Lin, Bingyuan; Zhao, Yajun; Soto, David

    2014-10-01

    Previous research has demonstrated that the contents of visual working memory can bias visual processing in favor of matching stimuli in the scene. However, the extent to which such top-down, memory-driven biasing of visual perception is contingent on conscious awareness remains unknown. Here we showed that conscious awareness of critical visual cues is dispensable for working memory to bias perceptual selection mechanisms. Using the procedure of continuous flash suppression, we demonstrated that "unseen" visual stimuli during interocular suppression can gain preferential access to awareness if they match the contents of visual working memory. Strikingly, the very same effect occurred even when the visual cue to be held in memory was rendered nonconscious by masking. Control experiments ruled out the alternative accounts of repetition priming and different detection criteria. We conclude that working memory biases of visual perception can operate in the absence of conscious awareness.

  19. When goals conflict with values: counterproductive attentional and oculomotor capture by reward-related stimuli.

    Science.gov (United States)

    Le Pelley, Mike E; Pearson, Daniel; Griffiths, Oren; Beesley, Tom

    2015-02-01

    Attention provides the gateway to cognition, by selecting certain stimuli for further analysis. Recent research demonstrates that whether a stimulus captures attention is not determined solely by its physical properties, but is malleable, being influenced by our previous experience of rewards obtained by attending to that stimulus. Here we show that this influence of reward learning on attention extends to task-irrelevant stimuli. In a visual search task, certain stimuli signaled the magnitude of available reward, but reward delivery was not contingent on responding to those stimuli. Indeed, any attentional capture by these critical distractor stimuli led to a reduction in the reward obtained. Nevertheless, distractors signaling large reward produced greater attentional and oculomotor capture than those signaling small reward. This counterproductive capture by task-irrelevant stimuli is important because it demonstrates how external reward structures can produce patterns of behavior that conflict with task demands, and similar processes may underlie problematic behavior directed toward real-world rewards.

  20. Task context impacts visual object processing differentially across the cortex

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J.; Baker, Chris I.

    2014-01-01

    Perception reflects an integration of “bottom-up” (sensory-driven) and “top-down” (internally generated) signals. Although models of visual processing often emphasize the central role of feed-forward hierarchical processing, less is known about the impact of top-down signals on complex visual representations. Here, we investigated whether and how the observer’s goals modulate object processing across the cortex. We examined responses elicited by a diverse set of objects under six distinct tasks, focusing on either physical (e.g., color) or conceptual properties (e.g., man-made). Critically, the same stimuli were presented in all tasks, allowing us to investigate how task impacts the neural representations of identical visual input. We found that task has an extensive and differential impact on object processing across the cortex. First, we found task-dependent representations in the ventral temporal and prefrontal cortex. In particular, although object identity could be decoded from the multivoxel response within task, there was a significant reduction in decoding across tasks. In contrast, the early visual cortex evidenced equivalent decoding within and across tasks, indicating task-independent representations. Second, task information was pervasive and present from the earliest stages of object processing. However, although the responses of the ventral temporal, prefrontal, and parietal cortex enabled decoding of both the type of task (physical/conceptual) and the specific task (e.g., color), the early visual cortex was not sensitive to type of task and could only be used to decode individual physical tasks. Thus, object processing is highly influenced by the behavioral goal of the observer, highlighting how top-down signals constrain and inform the formation of visual representations. PMID:24567402

  1. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon.

    Science.gov (United States)

    Woo, Kevin L; Rieucau, Guillaume; Burke, Darren

    2017-02-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots, by recording a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes, and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.

  2. A working memory bias for alcohol-related stimuli depends on drinking score.

    Science.gov (United States)

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  3. Sensory modality specificity of neural activity related to memory in visual cortex.

    Science.gov (United States)

    Gibson, J R; Maunsell, J H

    1997-09-01

    Previous studies have shown that when monkeys perform a delayed match-to-sample (DMS) task, some neurons in inferotemporal visual cortex are activated selectively during the delay period when the animal must remember particular visual stimuli. This selective delay activity may be involved in short-term memory. It does not depend on visual stimulation: both auditory and tactile stimuli can trigger selective delay activity in inferotemporal cortex when animals expect to respond to visual stimuli in a DMS task. We have examined the overall modality specificity of delay period activity using a variety of auditory/visual cross-modal and unimodal DMS tasks. The cross-modal DMS tasks involved making specific long-term memory associations between visual and auditory stimuli, whereas the unimodal DMS tasks were standard identity matching tasks. Delay activity existed in auditory/visual cross-modal DMS tasks whether the animal anticipated responding to visual or auditory stimuli. No evidence of selective delay period activation was seen in a purely auditory DMS task. Delay-selective cells were relatively common in one animal, where they constituted up to 53% of the neurons tested with a given task. This was only the case for up to 9% of cells in a second animal. In the first animal, a specific long-term memory representation for learned cross-modal associations was observed in delay activity, indicating that this type of representation need not be purely visual. Furthermore, in this same animal, delay activity in one cross-modal task, an auditory-to-visual task, predicted correct and incorrect responses. These results suggest that neurons in inferotemporal cortex contribute to abstract memory representations that can be activated by input from other sensory modalities, but these representations are specific to visual behaviors.

  4. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long eChen

    2016-01-01

    Background: Some studies have shown that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of such BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Methods: The key point that determined the value of the colored dummy face stimuli in BCI systems was whether dummy face stimuli could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21-26 years, mean 24.5±1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the grey dummy face pattern and the colored ball pattern. Online results showed

  5. Do Tonic Itch and Pain Stimuli Draw Attention towards Their Location?

    Directory of Open Access Journals (Sweden)

    Antoinette I. M. van Laarhoven

    2017-01-01

    Background. Although itch and pain are distinct experiences, both are unpleasant, may demand attention, and interfere with daily activities. Research investigating the role of attention in tonic itch and pain stimuli, particularly whether attention is drawn to the stimulus location, is scarce. Methods. In the somatosensory attention task, fifty-three healthy participants were exposed to 35-second electrical itch or pain stimuli on either the left or right wrist. Participants responded as quickly as possible to visual targets appearing at the stimulated location (ipsilateral trials) or the arm without stimulation (contralateral trials). During control blocks, participants performed the visual task without stimulation. Attention allocation at the itch and pain location is inferred when responses are faster ipsilaterally than contralaterally. Results. Results did not indicate that attention was directed towards or away from the itch and pain location. Notwithstanding, participants were slower during itch and pain than during control blocks. Conclusions. In contrast with our hypotheses, no indications were found for spatial attention allocation towards the somatosensory stimuli. This may relate to dynamic shifts in attention over the time course of the tonic sensations. Our secondary finding that itch and pain interfere with task performance is in line with attention theories of bodily perception.

  6. Dorsal hippocampus is necessary for visual categorization in rats.

    Science.gov (United States)

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for

  7. Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study

    Science.gov (United States)

    Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.

    2012-01-01

    Background Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014

  8. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and it has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from the viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that the video quality is perceived via two mechanisms: global and local quality assessment. First we model several visual features influencing the visual attention in quality assessment scenarios to derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. By employing these same quality features, the local quality model...
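    The two-mechanism idea can be sketched as a weighted combination of attention-pooled and uniformly pooled quality over image regions. The function below is a toy illustration, not the paper's actual model: the per-block quality values, saliency map, and the balance parameter alpha are all invented for the example.

```python
import numpy as np

def perceived_quality(local_q, attention, alpha=0.7):
    """Combine local (attention-weighted) and global (uniform) quality pooling.
    `local_q` and `attention` are per-region maps; `alpha` balances the two
    mechanisms. All names and values are illustrative assumptions."""
    attention = attention / attention.sum()      # normalise to a distribution
    local = float((local_q * attention).sum())   # quality of attended regions
    global_ = float(local_q.mean())              # uniform-attention quality
    return alpha * local + (1 - alpha) * global_

q = np.array([[0.9, 0.8], [0.3, 0.7]])  # per-block quality scores
a = np.array([[0.1, 0.1], [0.7, 0.1]])  # saliency: attention on the bad block
print(perceived_quality(q, a))
```

    Because most attention falls on the degraded block, the combined score drops below the plain spatial mean, which is the qualitative behaviour such late-attention-selection models aim to capture.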

  9. Auditory short-term memory behaves like visual short-term memory.

    Directory of Open Access Journals (Sweden)

    Kristina M Visscher

    2007-03-01

    Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.

  10. Auditory short-term memory behaves like visual short-term memory.

    Science.gov (United States)

    Visscher, Kristina M; Kaplan, Elina; Kahana, Michael J; Sekuler, Robert

    2007-03-01

    Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
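    The two factors named in the abstract can be sketched as a toy recognition rule: evidence for "old" grows with the probe's summed similarity to the studied items and with how similar the studied items are to one another. The exact functional form, the exponential similarity kernel, and the parameters below are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

def recognition_strength(probe, memory, tau=1.0, beta=0.5):
    """Toy summed-similarity recognition rule in the spirit of the model
    described above: evidence = summed probe-to-item similarity
    + beta * mean inter-item homogeneity (form and parameters assumed)."""
    def sim(x, y):
        return np.exp(-tau * np.linalg.norm(np.asarray(x) - np.asarray(y)))
    summed = sum(sim(probe, item) for item in memory)
    pairs = [(i, j) for i in range(len(memory)) for j in range(i + 1, len(memory))]
    homogeneity = (sum(sim(memory[i], memory[j]) for i, j in pairs) / len(pairs)
                   if pairs else 0.0)
    return summed + beta * homogeneity

# Items live in an abstract feature space (e.g. ripple frequency and drift).
memory = [[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]]
old_probe = [0.05, 0.0]  # close to studied items -> strong evidence
new_probe = [3.0, 3.0]   # far from everything -> weak evidence
print(recognition_strength(old_probe, memory) >
      recognition_strength(new_probe, memory))  # -> True
```

    Because the same rule applies to any feature space, it is modality-agnostic, which matches the abstract's point that one model fits both auditory and visual data.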

  11. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Directory of Open Access Journals (Sweden)

    Matthew A Gannon

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.

  12. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  13. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Directory of Open Access Journals (Sweden)

    Miguel Fernandez-Del-Olmo

Full Text Available Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training.

  14. Visual Categorization of Natural Movies by Rats

    Science.gov (United States)

    Vinken, Kasper; Vermaercke, Ben

    2014-01-01

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598

  15. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

Full Text Available The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson’s disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggest that abnormal audiovisual integration might be a potential early manifestation of PD.
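Race-model analyses of response times are commonly carried out by testing Miller's race-model inequality, which bounds the bimodal response-time distribution by the sum of the two unimodal distributions. A minimal sketch (the function names and the choice of time grid are illustrative assumptions, not the study's exact procedure):

```python
import numpy as np

def ecdf(rts, t):
    # Empirical cumulative distribution: proportion of RTs at or below t
    rts = np.asarray(rts)
    return float(np.mean(rts <= t))

def race_model_violation(rt_audio, rt_visual, rt_av, t_grid):
    """Return the time points at which the audiovisual RT distribution
    exceeds the race-model bound P_A(t) + P_V(t) (Miller's inequality).
    A non-empty result indicates multisensory integration beyond what
    independent unimodal races can explain."""
    violations = []
    for t in t_grid:
        bound = min(1.0, ecdf(rt_audio, t) + ecdf(rt_visual, t))
        if ecdf(rt_av, t) > bound:
            violations.append(t)
    return violations
```

If bimodal responses are merely as fast as the faster unimodal race predicts, the list of violations stays empty, which is the pattern described for the PD patients above.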

  16. Heightened eating drive and visual food stimuli attenuate central nociceptive processing.

    Science.gov (United States)

    Wright, Hazel; Li, Xiaoyun; Fallon, Nicholas B; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A; Halford, Jason C G; Stancak, Andrej

    2015-03-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation underlying the hunger-pain competition were explored with 128-channel EEG recordings and source dipole analysis of laser-evoked potentials (LEPs). We found that initial pain ratings were temporarily reduced when participants were hungry compared with fed. Source activity in parahippocampal gyrus was weaker when participants were hungry, and activations of operculo-insular cortex, anterior cingulate cortex, parahippocampal gyrus, and cerebellum were smaller in the context of appetitive food photographs than in that of inedible object photographs. Cortical processing of noxious stimuli in pain-related brain structures is reduced and pain temporarily attenuated when people are hungry or passively viewing food photographs, suggesting a possible interaction between the opposing motivational forces of the eating drive and pain. Copyright © 2015 the American Physiological Society.

  17. The identification of credit card encoders by hierarchical cluster analysis of the jitters of magnetic stripes.

    Science.gov (United States)

    Leung, S C; Fung, W K; Wong, K H

    1999-01-01

    The relative bit density variation graphs of 207 specimen credit cards processed by 12 encoding machines were examined first visually, and then classified by means of hierarchical cluster analysis. Twenty-nine credit cards being treated as 'questioned' samples were tested by way of cluster analysis against 'controls' derived from known encoders. It was found that hierarchical cluster analysis provided a high accuracy of identification with all 29 'questioned' samples classified correctly. On the other hand, although visual comparison of jitter graphs was less discriminating, it was nevertheless capable of giving a reasonably accurate result.
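The hierarchical cluster analysis step can be sketched with standard tools. The synthetic jitter profiles, the average-linkage method, and the two-cluster cut below are illustrative assumptions, not the study's actual data or parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical relative bit-density variation (jitter) profiles:
# one row per card, one column per sampled position along the stripe.
rng = np.random.default_rng(1)
encoder_a = rng.normal(0.0, 0.1, size=(3, 8))   # cards from encoder A
encoder_b = rng.normal(2.0, 0.1, size=(3, 8))   # cards from encoder B
profiles = np.vstack([encoder_a, encoder_b])

# Agglomerative (hierarchical) clustering, then cut the tree into 2 groups
Z = linkage(profiles, method='average', metric='euclidean')
labels = fcluster(Z, t=2, criterion='maxclust')
```

A 'questioned' card would then be assigned by clustering it together with the 'control' profiles and reading off which encoder group it joins.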

  18. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    Science.gov (United States)

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  19. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    Science.gov (United States)

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. 
The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not...

  20. Agnosia for mirror stimuli: a new case report with a small parietal lesion.

    Science.gov (United States)

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Lebas, Axel; Gerardin, Emmanuel; Hannequin, Didier

    2014-11-01

Only seven cases of agnosia for mirror stimuli have been reported, always with an extensive lesion. We report a new case of agnosia for mirror stimuli due to a circumscribed lesion. An extensive battery of neuropsychological tests and a new experimental procedure assessing mirror-image and orientation discrimination of visual objects were administered 10 days after the onset of clinical symptoms, and again 5 years later. The performance of our patient was compared with that of four age-matched healthy control subjects. This testing revealed an agnosia for mirror stimuli. Brain imaging showed a small right occipitoparietal hematoma encompassing the extrastriate cortex adjoining the inferior parietal lobe. This new case suggests that: (i) agnosia for mirror stimuli can persist for 5 years after onset and (ii) the posterior part of the right intraparietal sulcus could be critical in the cognitive process of mirror-stimulus discrimination. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. The Visual Shock of Francis Bacon: An essay in neuroesthetics

    Directory of Open Access Journals (Sweden)

    Semir eZeki

    2013-12-01

Full Text Available In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a visual shock. We explore what this means in terms of brain activity and what insights into the brain’s visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs and cars. We show that viewing face and body stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his visual shock because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.

  2. The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.

    Science.gov (United States)

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock." We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.

  3. The left visual-field advantage in rapid visual presentation is amplified rather than reduced by posterior-parietal rTMS

    DEFF Research Database (Denmark)

    Verleger, Rolf; Möller, Friderike; Kuniecki, Michal

    2010-01-01

In the present task, series of visual stimuli are rapidly presented left and right, containing two target stimuli, T1 and T2. In previous studies, T2 was better identified in the left than in the right visual field. This advantage of the left visual field might reflect dominance exerted by the right over the left hemisphere. If so, then repetitive transcranial magnetic stimulation (rTMS) to the right parietal cortex might release the left hemisphere from right-hemispheric control, thereby improving T2 identification in the right visual field. Alternatively or additionally, the asymmetry in T2... either as effective or as sham stimulation. In two experiments, either one of these two factors, hemisphere and effectiveness of rTMS, was varied within or between participants. Again, T2 was much better identified in the left than in the right visual field. This advantage of the left visual field...

  4. Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli

    Science.gov (United States)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947

  5. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Science.gov (United States)

    Pascucci, David; Mastropasqua, Tommaso; Turatto, Massimo

    2015-01-01

Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can also improve for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  6. Decoding complex flow-field patterns in visual working memory.

    Science.gov (United States)

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
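A minimal stand-in for the multivariate decoding approach described above is leave-one-out classification of activity patterns. The nearest-centroid rule and synthetic data below are illustrative assumptions; the study's actual classifier and voxel features are not specified here.

```python
import numpy as np

def decode_loocv(patterns, labels):
    """Leave-one-out nearest-centroid decoding of delay-period activity
    patterns (rows = trials, columns = voxels). Above-chance accuracy in
    a region implies its patterns carry content-specific memory signals."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        # Hold out trial i, fit class centroids on the remaining trials
        train = np.delete(patterns, i, axis=0)
        train_lab = np.delete(labels, i)
        cents = {c: train[train_lab == c].mean(axis=0)
                 for c in np.unique(train_lab)}
        # Assign the held-out trial to the nearest class centroid
        pred = min(cents, key=lambda c: np.linalg.norm(patterns[i] - cents[c]))
        correct += (pred == labels[i])
    return correct / len(labels)
```

Running this searchlight- or ROI-wise over visual, parietal, and somatosensory regions is the kind of analysis that yields the content-specific maps the abstract reports.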

  7. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  8. Appetitive and aversive visual learning in freely moving Drosophila

    Directory of Open Access Journals (Sweden)

    Christopher Schnaitmann

    2010-03-01

Full Text Available To compare appetitive and aversive visual memories of the fruit fly Drosophila melanogaster, we developed a new paradigm for classical conditioning. Adult flies are trained en masse to differentially associate one of two visual conditioned stimuli (blue or green light; conditioned stimulus, CS) with an appetitive or aversive chemical substance (unconditioned stimulus, US). In a test phase, flies are given a choice between the paired and the unpaired visual stimuli. Associative memory is measured based on altered visual preference in the test. If a group of flies has, for example, received a sugar reward with green light, they show a significantly higher preference for the green stimulus during the test than another group of flies having received the same reward with blue light. We demonstrate critical parameters for the formation of visual appetitive memory, such as training repetition, order of reinforcement, starvation, and individual conditioning. Furthermore, we show that formic acid can act as an aversive chemical reinforcer, yielding weak, yet significant, aversive memory. These results provide a basis for future investigations into the cellular and molecular mechanisms underlying visual memory and perception in Drosophila.

  9. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

Full Text Available Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener’s perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  10. Heightened eating drive and visual food stimuli attenuate central nociceptive processing

    OpenAIRE

    Wright, Hazel; Li, Xiaoyun; Fallon, Nicholas B.; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A.; Halford, Jason C. G.; Stancak, Andrej

    2014-01-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation under...

  11. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that it will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  12. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder : A Meta-Analysis

    NARCIS (Netherlands)

    Kaiser, D.; Jacob, G.A.; Domes, G.; Arntz, A.

    2016-01-01

    Background: In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component in disorder pathogenesis and maintenance. Sampling: 11 emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs) and 95 clinical controls and 4 visual

  13. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared-illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point-process filter, determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
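The linear-nonlinear Poisson (LNP) model of turn decisions can be sketched as follows. The exponential nonlinearity, the filter contents, and the parameter values are illustrative assumptions, not the fitted model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def turn_rate(stimulus, kernel, gain=1.0, baseline=0.1):
    """Linear stage: convolve the stimulus history with a temporal filter.
    Nonlinear stage: an exponential link maps the filtered drive to a
    non-negative instantaneous turn rate (turns per second)."""
    drive = np.convolve(stimulus, kernel, mode='same')
    return baseline * np.exp(gain * drive)

def sample_turns(rate, dt=0.1):
    """Poisson stage: turn counts in each time bin of width dt seconds,
    drawn from an inhomogeneous Poisson process with the given rate."""
    return rng.poisson(rate * dt)
```

Adaptation to stimulus variance then corresponds to rescaling `gain` (the slope of the nonlinearity) as the spread of the filtered drive changes, which is what the adaptive point-process filter in the abstract tracks over time.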

  14. Assessment of sexual orientation using the hemodynamic brain response to visual sexual stimuli

    DEFF Research Database (Denmark)

    Ponseti, Jorge; Granert, Oliver; Jansen, Olav

    2009-01-01

...in a nonclinical sample of 12 heterosexual men and 14 homosexual men. During fMRI, participants were briefly exposed to pictures of same-sex and opposite-sex genitals. Data analysis involved four steps: (i) differences in the BOLD response to female and male sexual stimuli were calculated for each subject; (ii) these contrast images were entered into a group analysis to calculate whole-brain difference maps between homosexual and heterosexual participants; (iii) a single expression value was computed for each subject expressing its correspondence to the group result; and (iv) based on these expression values, Fisher... response patterns of the brain to sexual stimuli contained sufficient information to predict individual sexual orientation with high accuracy. These results suggest that fMRI-based classification methods hold promise for the diagnosis of paraphilic disorders (e.g., pedophilia).

  15. Nanoscale Analysis of a Hierarchical Hybrid Solar Cell in 3D.

    Science.gov (United States)

    Divitini, Giorgio; Stenzel, Ole; Ghadirzadeh, Ali; Guarnera, Simone; Russo, Valeria; Casari, Carlo S; Bassi, Andrea Li; Petrozza, Annamaria; Di Fonzo, Fabio; Schmidt, Volker; Ducati, Caterina

    2014-05-01

    A quantitative method for the characterization of nanoscale 3D morphology is applied to the investigation of a hybrid solar cell based on a novel hierarchical nanostructured photoanode. A cross section of the solar cell device is prepared by focused ion beam milling in a micropillar geometry, which allows a detailed 3D reconstruction of the titania photoanode by electron tomography. It is found that the hierarchical titania nanostructure facilitates polymer infiltration, thus favoring intermixing of the two semiconducting phases, essential for charge separation. The 3D nanoparticle network is analyzed with tools from stochastic geometry to extract information related to the charge transport in the hierarchical solar cell. In particular, the experimental dataset allows direct visualization of the percolation pathways that contribute to the photocurrent.

  16. Complex cells decrease errors for the Müller-Lyer illusion in a model of the visual ventral stream

    Directory of Open Access Journals (Sweden)

    Astrid eZeman

    2014-09-01

Full Text Available To improve robustness in object recognition, many artificial visual systems imitate the way in which the human visual cortex encodes object information as a hierarchical set of features. These systems are usually evaluated in terms of their ability to accurately categorize well-defined, unambiguous objects and scenes. In the real world, however, not all objects and scenes are presented clearly, with well-defined labels and interpretations. Visual illusions demonstrate a disparity between perception and objective reality, allowing psychophysicists to methodically manipulate stimuli and study our interpretation of the environment. One prominent effect, the Müller-Lyer illusion, is demonstrated when the perceived length of a line is contracted (or expanded) by the addition of arrowheads (or arrow-tails) to its ends. HMAX, a benchmark object recognition system, consistently produces a bias when classifying Müller-Lyer images. HMAX is a hierarchical, artificial neural network that imitates the ‘simple’ and ‘complex’ cell layers found in the visual ventral stream. In this study, we perform two experiments to explore the Müller-Lyer illusion in HMAX, asking: (1) How do simple versus complex cell operations within HMAX affect illusory bias and precision? (2) How does varying the position of the figures in the input image affect classification using HMAX? In our first experiment, we assessed classification after traversing each layer of HMAX and found that in general, kernel operations performed by simple cells increase bias and uncertainty while max-pooling operations executed by complex cells decrease bias and uncertainty. In our second experiment, we increased variation in the positions of figures in the input that reduced bias and uncertainty in HMAX. Our findings suggest that the Müller-Lyer illusion is exacerbated by the vulnerability of simple cell operations to positional fluctuations, but ameliorated by the robustness of complex cell

  17. Complex cells decrease errors for the Müller-Lyer illusion in a model of the visual ventral stream.

    Science.gov (United States)

    Zeman, Astrid; Obst, Oliver; Brooks, Kevin R

    2014-01-01

    To improve robustness in object recognition, many artificial visual systems imitate the way in which the human visual cortex encodes object information as a hierarchical set of features. These systems are usually evaluated in terms of their ability to accurately categorize well-defined, unambiguous objects and scenes. In the real world, however, not all objects and scenes are presented clearly, with well-defined labels and interpretations. Visual illusions demonstrate a disparity between perception and objective reality, allowing psychophysicists to methodically manipulate stimuli and study our interpretation of the environment. One prominent effect, the Müller-Lyer illusion, is demonstrated when the perceived length of a line is contracted (or expanded) by the addition of arrowheads (or arrow-tails) to its ends. HMAX, a benchmark object recognition system, consistently produces a bias when classifying Müller-Lyer images. HMAX is a hierarchical, artificial neural network that imitates the "simple" and "complex" cell layers found in the visual ventral stream. In this study, we perform two experiments to explore the Müller-Lyer illusion in HMAX, asking: (1) How do simple vs. complex cell operations within HMAX affect illusory bias and precision? (2) How does varying the position of the figures in the input image affect classification using HMAX? In our first experiment, we assessed classification after traversing each layer of HMAX and found that in general, kernel operations performed by simple cells increase bias and uncertainty while max-pooling operations executed by complex cells decrease bias and uncertainty. In our second experiment, we increased variation in the positions of figures in the input images that reduced bias and uncertainty in HMAX. Our findings suggest that the Müller-Lyer illusion is exacerbated by the vulnerability of simple cell operations to positional fluctuations, but ameliorated by the robustness of complex cell responses to such
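    The two operations contrasted in this abstract, simple-cell template matching and complex-cell max-pooling, can be sketched in a few lines. This is a minimal illustration only, not the HMAX implementation evaluated by the authors; the edge-detecting kernel, toy image, and pooling size below are invented for the example.

```python
import numpy as np

def simple_cell_layer(image, kernel):
    """S-layer: template matching via 2-D 'valid' correlation (Gabor-like filtering)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def complex_cell_layer(response, pool=2):
    """C-layer: max-pooling over local neighbourhoods, giving position tolerance."""
    h, w = response.shape
    out = np.empty((h // pool, w // pool))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = response[i * pool:(i + 1) * pool, j * pool:(j + 1) * pool].max()
    return out

# A vertical-edge kernel applied to a toy image containing a vertical bar.
image = np.zeros((8, 8))
image[:, 4] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])
s1 = simple_cell_layer(image, kernel)   # sensitive to the bar's exact position
c1 = complex_cell_layer(s1)             # keeps the peak response, discards its
                                        # exact position within each pool
```

    Shifting the bar by a pixel changes which S-layer unit responds, while the pooled C-layer response is more tolerant of the shift, which is the robustness to positional fluctuation that the abstract credits to complex cells.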

  18. Visual categorization of natural movies by rats.

    Science.gov (United States)

    Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P

    2014-08-06

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors 0270-6474/14/3410645-14$15.00/0.

  19. Enhanced early visual processing in response to snake and trypophobic stimuli

    OpenAIRE

    Strien, Jan; Van der Peijl, M.K. (Manja K.)

    2018-01-01

    Background: Trypophobia refers to aversion to clusters of holes. We investigated whether trypophobic stimuli evoke augmented early posterior negativity (EPN). Methods: Twenty-four participants filled out a trypophobia questionnaire and watched the random rapid serial presentation of 450 trypophobic pictures, 450 pictures of poisonous animals, 450 pictures of snakes, and 450 pictures of small birds (1800 pictures in total, at a rate of 3 pictures/s). The EPN was scored as the mean ...

  20. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net ++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented and analysed on computers using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are straightforward to implement on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulation of the proposed robotic system using timed Petri nets offers the opportunity to visualize its timing: transport and transmission times obtained by spot measurement yield graphs showing the average time for each transport activity, using the parameter sets of the finished products.
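    As a rough illustration of the kind of discrete event model the abstract describes, the sketch below implements token-based transition firing for a two-step robot cycle. The place and transition names are invented for this example, and the paper itself uses extended/timed Petri nets built in Visual Object Net ++ rather than this minimal untimed form.

```python
class PetriNet:
    """Minimal Petri net: a transition fires when every input place holds a token."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        for p in inputs:               # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:              # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

# A pick-and-place cycle for one robot subsystem (names are hypothetical).
net = PetriNet({"part_waiting": 1, "robot_idle": 1, "part_done": 0})
net.add_transition("pick", ["part_waiting", "robot_idle"], ["robot_busy"])
net.add_transition("place", ["robot_busy"], ["part_done", "robot_idle"])
net.fire("pick")
net.fire("place")
```

    In a hierarchical deployment of the kind the abstract sketches, each subsystem would run its own local net like this one, with a higher-level net coordinating the markings across subsystems.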

  1. The “Visual Shock” of Francis Bacon: an essay in neuroesthetics

    Science.gov (United States)

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

    In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812

  2. Common coding of auditory and visual spatial information in working memory.

    Science.gov (United States)

    Lehnert, Günther; Zimmer, Hubert D

    2008-09-16

    We compared spatial short-term memory for visual and auditory stimuli in an event-related slow potentials study. Subjects encoded object locations of either four or six sequentially presented auditory or visual stimuli and maintained them during a retention period of 6 s. Slow potentials recorded during encoding were modulated by the modality of the stimuli. Stimulus related activity was stronger for auditory items at frontal and for visual items at posterior sites. At frontal electrodes, negative potentials incrementally increased with the sequential presentation of visual items, whereas a strong transient component occurred during encoding of each auditory item without the cumulative increment. During maintenance, frontal slow potentials were affected by modality and memory load according to task difficulty. In contrast, at posterior recording sites, slow potential activity was only modulated by memory load independent of modality. We interpret the frontal effects as correlates of different encoding strategies and the posterior effects as a correlate of common coding of visual and auditory object locations.

  3. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    Science.gov (United States)

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.

  4. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Directory of Open Access Journals (Sweden)

    David Pascucci

    Full Text Available Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can also improve for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  5. Dissociating object-based from egocentric transformations in mental body rotation: effect of stimuli size.

    Science.gov (United States)

    Habacha, Hamdi; Moreau, David; Jarraya, Mohamed; Lejeune-Poutrain, Laure; Molinaro, Corinne

    2018-01-01

    The effect of stimuli size on the mental rotation of abstract objects has been extensively investigated, yet its effect on the mental rotation of bodily stimuli remains largely unexplored. Depending on the experimental design, mentally rotating bodily stimuli can elicit object-based transformations, relying mainly on visual processes, or egocentric transformations, which typically involve embodied motor processes. The present study included two mental body rotation tasks requiring either a same-different or a laterality judgment, designed to elicit object-based or egocentric transformations, respectively. Our findings revealed shorter response times for large-sized stimuli than for small-sized stimuli only for greater angular disparities, suggesting that the more unfamiliar the orientations of the bodily stimuli, the more stimuli size affected mental processing. Importantly, when comparing size transformation times, results revealed different patterns of size transformation times as a function of angular disparity between object-based and egocentric transformations. This indicates that mental size transformation and mental rotation proceed differently depending on the mental rotation strategy used. These findings are discussed with respect to the different spatial manipulations involved during object-based and egocentric transformations.

  6. Testing a Poisson counter model for visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks.

    Science.gov (United States)

    Kyllingsbæk, Søren; Markussen, Bo; Bundesen, Claus

    2012-06-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is continued until the stimulus disappears, and the overt response is based on the categorization made the greatest number of times. The model was evaluated by Monte Carlo tests of goodness of fit against observed probability distributions of responses in two extensive experiments and also by quantifications of the information loss of the model compared with the observed data by use of information theoretic measures. The model provided a close fit to individual data on identification of digits and an apparently perfect fit to data on identification of Landolt rings.
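    The model's decision rule is easy to simulate. In the sketch below, the rate matrix entries and exposure durations are invented for illustration, but the mechanics follow the description above: while stimulus i is visible for duration t, tentative "stimulus belongs to category j" categorizations accrue with Poisson counts at rate v(i, j), and the overt response is the category counted most often, with ties resolved by guessing.

```python
import numpy as np

rng = np.random.default_rng(0)

def respond(v_i, t, rng):
    """One trial: Poisson categorization counts over exposure t, modal response."""
    counts = rng.poisson(v_i * t)             # one count per candidate category
    winners = np.flatnonzero(counts == counts.max())
    return rng.choice(winners)                # ties (e.g. all zeros) -> guess

# Hypothetical rates for one stimulus: the correct category (index 0) accrues
# tentative categorizations fastest; a confusable neighbour accrues some too.
v_i = np.array([8.0, 3.0, 0.5])               # categorizations per second
for t in (0.02, 0.2):                         # brief vs. longer exposure
    acc = np.mean([respond(v_i, t, rng) == 0 for _ in range(5000)])
    print(f"exposure {t:.2f} s: accuracy {acc:.2f}")
```

    As in the pure accuracy tasks the abstract describes, longer exposures let more categorizations accumulate, so the modal count is more likely to identify the correct category.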

  7. Reward-associated stimuli capture the eyes in spite of strategic attentional set

    NARCIS (Netherlands)

    Hickey, C.M.; van Zoest, W.

    2013-01-01

    Theories of reinforcement learning have proposed that the association of reward to visual stimuli may cause these objects to become fundamentally salient and thus attention-drawing. A number of recent studies have investigated the oculomotor correlates of this reward-priming effect, but there is

  8. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval

    Science.gov (United States)

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-01-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of…

  9. [Recognition of visual objects under forward masking. Effects of categorical similarity of test and masking stimuli].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images--animals and nonliving objects--under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of target and masking stimuli. Recognition accuracy was lowest, and response time slowest, when the target and masking stimuli belonged to the same category, which was combined with high dispersion of response times. The revealed effects were clearer in the animal recognition task than in the recognition of nonliving objects. We suppose that these effects are connected with interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  10. Visual Processing Speeds in Children

    Directory of Open Access Journals (Sweden)

    Steve Croker

    2011-01-01

    Full Text Available The aim of this study was to investigate visual processing speeds in children. A rapid serial visual presentation (RSVP) task with schematic faces as stimuli was given to ninety-nine 6–10-year-old children as well as a short form of the WISC-III. Participants were asked to determine whether a happy face stimulus was embedded in a stream of distracter stimuli. Presentation time was gradually reduced from 500 ms per stimulus to 100 ms per stimulus, in 50 ms steps. The data revealed that (i) RSVP speed increases with age, (ii) children aged 8 years and over can discriminate stimuli presented every 100 ms—the speed typically used with RSVP procedures in adult and adolescent populations, and (iii) RSVP speed is significantly correlated with digit span and object assembly. In consequence, the RSVP paradigm presented here is appropriate for use in further investigations of processes of temporal attention within this cohort.

  11. Statistical regularities in art: Relations with visual coding and perception.

    Science.gov (United States)

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Preschool-Age Children and Adults Flexibly Shift Their Preferences for Auditory versus Visual Modalities but Do Not Exhibit Auditory Dominance

    Science.gov (United States)

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    The goal of this study was to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study was motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for…

  13. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.

  14. Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.

    Directory of Open Access Journals (Sweden)

    Philippe Chassy

    Full Text Available Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on the level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli.

  15. Oxytocin and vasopressin enhance responsiveness to infant stimuli in adult marmosets.

    Science.gov (United States)

    Taylor, Jack H; French, Jeffrey A

    2015-09-01

    The neuropeptides oxytocin (OT) and arginine-vasopressin (AVP) have been implicated in modulating sex-specific responses to offspring in a variety of uniparental and biparental rodent species. Despite the large body of research in rodents, the effects of these hormones in biparental primates are less understood. Marmoset monkeys (Callithrix jacchus) belong to a clade of primates with a high incidence of biparental care and also synthesize a structurally distinct variant of OT (proline instead of leucine at the 8th amino acid position; Pro(8)-OT). We examined the roles of the OT and AVP systems in the control of responses to infant stimuli in marmoset monkeys. We administered neuropeptide receptor agonists and antagonists to male and female marmosets, and then exposed them to visual and auditory infant-related and control stimuli. Intranasal Pro(8)-OT decreased latencies to respond to infant stimuli in males, and intranasal AVP decreased latencies to respond to infant stimuli in females. Our study is the first to demonstrate that Pro(8)-OT and AVP alter responsiveness to infant stimuli in a biparental New World monkey. Across species, the effects of OT and AVP on parental behavior appear to vary by species-typical caregiving responsibilities in males and females. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. The mere exposure effect for visual image.

    Science.gov (United States)

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    Mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and a pair of dots corresponding to the neighboring vertices of an invisible polygon was sequentially flashed (in red), creating an invisible polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from allocations of numerical characters on its vertices, and then rated their preference for invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated the preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. Absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between exposure and rating phases plays an important role in the mere exposure effect.

  17. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals.

    Science.gov (United States)

    Ikeda, Akitsu; Miyamoto, Jun J; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, the exposure to food stimuli sensitizes the brain's reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. The importance of chewing has been increasingly implicated as one of the methods for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as well as an actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed the identical tasks to Experiments 1 and 2, but the participants did not perform sham feeding with gum-chewing/actual feeding between tasks and they took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. 
In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze

  19. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals

    Science.gov (United States)

    Ikeda, Akitsu; Miyamoto, Jun J.; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, the exposure to food stimuli sensitizes the brain’s reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. The importance of chewing has been increasingly implicated as one of the methods for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as well as an actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed the identical tasks to Experiments 1 and 2, but the participants did not perform sham feeding with gum-chewing/actual feeding between tasks and they took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias

  20. Intraindividual variability in vigilance performance: does degrading visual stimuli mimic age-related "neural noise"?

    Science.gov (United States)

    MacDonald, Stuart W S; Hultsch, David F; Bunce, David

    2006-07-01

    Intraindividual performance variability, or inconsistency, has been shown to predict neurological status, physiological functioning, and age differences and declines in cognition. However, potential moderating factors of inconsistency are not well understood. The present investigation examined whether inconsistency in vigilance response latencies varied as a function of time-on-task and task demands by degrading visual stimuli in three separate conditions (10%, 20%, and 30%). Participants were 24 younger women aged 21 to 30 years (M = 24.04, SD = 2.51) and 23 older women aged 61 to 83 years (M = 68.70, SD = 6.38). A measure of within-person inconsistency, the intraindividual standard deviation (ISD), was computed for each individual across reaction time (RT) trials (3 blocks of 45 event trials) for each condition of the vigilance task. Greater inconsistency was observed with increasing stimulus degradation and age, even after controlling for group differences in mean RTs and physical condition. Further, older adults were more inconsistent than younger adults for similar degradation conditions, with ISD scores for younger adults in the 30% condition approximating estimates observed for older adults in the 10% condition. Finally, a measure of perceptual sensitivity shared increasing negative associations with ISDs, with this association further modulated as a function of age but to a lesser degree by degradation condition. Results support current hypotheses suggesting that inconsistency serves as a marker of neurological integrity and are discussed in terms of potential underlying mechanisms.
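    For concreteness, the study's inconsistency measure, the intraindividual standard deviation (ISD), is simply each participant's SD across their own RT trials, as distinct from the between-person SD of mean RTs. The sketch below computes it on simulated data; the RT distributions and the 3 × 45 trial structure are illustrative, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def isd(rts):
    """Intraindividual SD: one person's trial-to-trial RT variability (sample SD)."""
    return np.std(rts, ddof=1)

# 135 trials (3 blocks x 45 event trials) per participant. The simulated older
# participant has the same mean speed but noisier trial-to-trial latencies,
# mimicking inconsistency that is not visible in mean RT alone.
young = rng.normal(loc=450, scale=40, size=135)   # ms
older = rng.normal(loc=450, scale=90, size=135)   # ms
print(f"ISD young: {isd(young):.1f} ms, ISD older: {isd(older):.1f} ms")
```

    Because the two simulated participants have similar mean RTs, any group difference here is carried entirely by within-person variability, which is why the study controls for mean RT when comparing ISDs.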

  1. The fluency of social hierarchy: the ease with which hierarchical relationships are seen, remembered, learned, and liked.

    Science.gov (United States)

    Zitek, Emily M; Tiedens, Larissa Z

    2012-01-01

    We tested the hypothesis that social hierarchies are fluent social stimuli; that is, they are processed more easily and therefore liked better than less hierarchical stimuli. In Study 1, pairs of people in a hierarchy based on facial dominance were identified faster than pairs of people equal in their facial dominance. In Study 2, a diagram representing hierarchy was memorized more quickly than a diagram representing equality or a comparison diagram. This faster processing led the hierarchy diagram to be liked more than the equality diagram. In Study 3, participants were best able to learn a set of relationships that represented hierarchy (asymmetry of power), compared to relationships in which there was asymmetry of friendliness or in which there was symmetry, and this processing ease led them to like the hierarchy the most. In Study 4, participants found it easier to make decisions about a company that was more hierarchical and thus thought the hierarchical organization had more positive qualities. In Study 5, familiarity as a basis for the fluency of hierarchy was demonstrated by showing greater fluency for male than female hierarchies. This study also showed that when social relationships are difficult to learn, people's preference for hierarchy increases. Taken together, these results suggest one reason people might like hierarchies: they are easy to process. This fluency for social hierarchies might contribute to the construction and maintenance of hierarchies.

  2. Hierarchical processing in the prefrontal cortex in a variety of cognitive domains

    Directory of Open Access Journals (Sweden)

    Hyeon-Ae Jeon

    2014-11-01

    This review scrutinizes findings on human hierarchical processing within the prefrontal cortex (PFC) in diverse cognitive domains. Converging evidence from previous studies has shown that the PFC, specifically Brodmann area (BA) 44, may function as the essential region for hierarchical processing across domains. In language fMRI studies, BA 44 was significantly activated during the hierarchical processing of center-embedded sentences, and this pattern of activation was also observed for artificial grammar. The same pattern was observed in the visuo-spatial domain, where BA 44 was actively involved in processing the hierarchy of visual symbols. Musical syntax, the rule-based arrangement of musical sets, has also been construed as hierarchical processing, as in the language domain: activation in BA 44 was observed in a chord-sequence paradigm, and a P600 ERP was also elicited during the processing of musical hierarchy. In line with the longstanding idea that the human number faculty developed as a by-product of the language faculty, BA 44 was closely involved in hierarchical processing in mental arithmetic. The review extends its discussion of hierarchical processing to hierarchical behavior, that is, human action, which has been described as hierarchically composed; several lesion and TMS studies support the involvement of BA 44 in hierarchical processing in the action domain. Lastly, the hierarchical organization of cognitive control is discussed within the PFC, forming a cascade of top-down hierarchical processes operating along a posterior-to-anterior axis of the lateral PFC, including BA 44 within the network. It is proposed that the PFC is actively involved in different forms of hierarchical processing and that BA 44 specifically may play an integral role in the process. Taking levels of proficiency and subcortical areas into consideration may provide further insight into the functional role of BA 44 for hierarchical

  3. Effects of Binaural Sensory Aids on the Development of Visual Perceptual Abilities in Visually Handicapped Infants. Final Report, April 15, 1982-November 15, 1982.

    Science.gov (United States)

    Hart, Verna; Ferrell, Kay

    Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…

  4. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    Science.gov (United States)

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  5. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (the conditioned stimuli, CS) that were paired with positive, negative, or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant, and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifield. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded sounds, relative to neutral ones, increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism for modulating the effect of sounds on visual perception.

  6. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  7. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  8. Is one enough? The case for non-additive influences of visual features on crossmodal Stroop interference

    Directory of Open Access Journals (Sweden)

    Lawrence Gregory Appelbaum

    2013-10-01

    When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such redundant sensory signals are pitted against other competing stimuli, as in a Stroop task, this redundancy may lead to stronger processing that biases behavior toward reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if the stimuli did not contain redundant sensory features. In the present paper we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or a combination of both. Based on previous work showing enhanced crossmodal integration and visual-search gains for redundantly coded stimuli, we had expected that, relative to single features, redundant visual features would both induce greater visual-distracter incongruency effects for attended auditory targets and be less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets and were dominated by behavioral facilitation for the crossmodal interactions (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post hoc analyses revealed modest, trending evidence for possible increases in behavioral interference from redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by a redundancy effect for behavioral facilitation or for attended visual targets.

  9. Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.

    Science.gov (United States)

    Alexander, Gerianne M; Charles, Nora

    2009-06-01

    An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.

  10. Multiaccommodative stimuli in VR systems: problems & solutions.

    Science.gov (United States)

    Marran, L; Schor, C

    1997-09-01

    Virtual reality environments can introduce multiple and sometimes conflicting accommodative stimuli. For instance, with the high-powered lenses commonly used in head-mounted displays, small discrepancies in screen-lens placement, caused by manufacturing error or user misadjustment of focus, can change the focal depth of the image by a couple of diopters. This can introduce a binocular accommodative stimulus or, if the displacement of the two screens is unequal, an unequal (anisometropic) accommodative stimulus for the two eyes. Systems that allow simultaneous viewing of virtual and real images can also introduce a conflict in accommodative stimuli: when real and virtual images are at different focal planes, both cannot be in focus at the same time, though they may appear to occupy similar locations in space. In this paper four unique designs are described that minimize the range of accommodative stimuli and maximize the visual system's ability to cope efficiently with the focus conflicts that remain: pinhole optics, monocular lens addition combined with aniso-accommodation, a chromatic bifocal, and a bifocal lens system. The advantages and disadvantages of each design are described, and a recommendation for design choice is given after consideration of the end use of the virtual reality system (e.g., low or high end; entertainment, technical, or medical use). The appropriate design modifications should allow greater user comfort and better performance.
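The "couple of diopters" figure can be checked with a thin-lens, small-error approximation (our illustration, not part of the paper): for a screen near the focal plane of a lens of power P diopters, a placement error of Δx meters shifts the accommodative stimulus by roughly P²·Δx.

```python
def accommodative_shift(lens_power_d, placement_error_m):
    """Approximate change (in diopters) of the accommodative stimulus when
    the display is displaced by placement_error_m from the focal plane of
    a thin lens of power lens_power_d.
    Small-error approximation: shift ~= P**2 * dx."""
    return lens_power_d ** 2 * placement_error_m

# A 20 D head-mounted-display lens with a 5 mm screen-placement error:
print(accommodative_shift(20.0, 0.005))  # → 2.0 diopters
```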

  11. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    Science.gov (United States)

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  12. Anxiety and autonomic response to social-affective stimuli in individuals with Williams syndrome.

    Science.gov (United States)

    Ng, Rowena; Bellugi, Ursula; Järvinen, Anna

    2016-12-01

    Williams syndrome (WS) is a genetic condition characterized by an unusual "hypersocial" personality juxtaposed with high anxiety. Recent evidence suggests that autonomic reactivity to affective face stimuli is disorganised in WS, which may contribute to emotion dysregulation and/or social disinhibition. Electrodermal activity (EDA) and mean interbeat interval (IBI) of 25 participants with WS (19-57 years old) and 16 typically developing (TD; 17-43 years old) adults were measured during passive presentation of affective face and voice stimuli. The Beck Anxiety Inventory was administered to examine associations between autonomic reactivity to social-affective stimuli and anxiety symptomatology. The WS group was characterized by higher overall anxiety symptomatology and poorer anger recognition in social visual and aural stimuli relative to the TD group. No between-group differences emerged in autonomic response patterns. Notably, for participants with WS, increased anxiety was uniquely associated with diminished arousal to angry faces and voices. In contrast, for the TD group, no associations emerged between anxiety and physiological responsivity to social-emotional stimuli. The anxiety associated with WS appears to be intimately related to reduced autonomic arousal to angry social stimuli, which may also be linked to the characteristic social disinhibition. Copyright © 2016. Published by Elsevier Ltd.

  13. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious issue in this area has been the nature of the perception deficit; another is the causal role of this deficit in dyslexia. Most studies have been carried out with adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia is impaired. The between-group comparison shows that the achievement of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusion is that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not a consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. The effect of spatio-temporal distance between visual stimuli on information processing in children with Specific Language Impairment.

    Science.gov (United States)

    Dispaldro, Marco; Corradi, Nicola

    2015-01-01

    The purpose of this study is to evaluate whether children with Specific Language Impairment (SLI) have a deficit in processing a sequence of two visual stimuli (S1 and S2) presented at different inter-stimulus intervals and in different spatial locations. In particular, the core of this study is to investigate whether S1 identification is disrupted due to a retroactive interference of S2. To this aim, two experiments were planned in which children with SLI and children with typical development (TD), matched by age and non-verbal IQ, were compared (Experiment 1: SLI n=19; TD n=19; Experiment 2: SLI n=16; TD n=16). Results show group differences in the ability to identify a single stimulus surrounded by flankers (Baseline level). Moreover, children with SLI show a stronger negative interference of S2, both for temporal and spatial modulation. These results are discussed in the light of an attentional processing limitation in children with SLI. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Mirrored and rotated stimuli are not the same: A neuropsychological and lesion mapping study.

    Science.gov (United States)

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Champmartin, Cécile; Pouliquen, Dorothée; Cruypeninck, Yohann; Hannequin, Didier; Gérardin, Emmanuel

    2016-05-01

    Agnosia for mirrored stimuli is a rare clinical deficit. Only eight patients have been reported in the literature so far, and little is known about the neural substrates of this agnosia. Using a previously developed experimental test designed to assess this agnosia, namely the Mirror and Orientation Agnosia Test (MOAT), as well as voxel-based lesion-symptom mapping (VLSM), we tested the hypothesis that focal brain-injured patients with right parietal damage would be impaired in discriminating between the canonical view of a visual object and its mirrored and rotated images. Thirty-four consecutively recruited patients with a stroke involving the right or left parietal lobe were included: twenty patients (59%) had a deficit on at least one of the six conditions of the MOAT, fourteen patients (41%) had a deficit on the mirror condition, twelve patients (35%) had a deficit on at least one of the four rotated conditions, and one had a truly selective agnosia for mirrored stimuli. A lesion analysis showed that discrimination of mirrored stimuli was correlated with the mesial part of the posterior superior temporal gyrus and the lateral part of the inferior parietal lobule, while discrimination of rotated stimuli was correlated with the lateral part of the posterior superior temporal gyrus and the mesial part of the inferior parietal lobule, with only a small overlap between the two. These data suggest that the right visual 'dorsal' pathway is essential for accurate perception of mirrored and rotated stimuli, with a selective cognitive process and anatomical network underlying our ability to discriminate between mirrored images, different from the process of discriminating between rotated images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Multi-Level Visual Alphabets

    NARCIS (Netherlands)

    Israël, Menno; van der Schaar, Jetske; van den Broek, Egon; den Uyl, Marten J.; van der Putten, Peter; Djemal, K.; Deriche, M.

    2010-01-01

    A central debate in visual perception theory is the argument for indirect versus direct perception; i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based

  17. Hierarchical Representation Learning for Kinship Verification.

    Science.gov (United States)

    Kohli, Naman; Vatsa, Mayank; Singh, Richa; Noore, Afzel; Majumdar, Angshul

    2017-01-01

    Kinship verification has a number of applications, such as organizing large collections of images and recognizing resemblances among humans. In this paper, first, a human study is conducted to understand the capabilities of the human mind and to identify the discriminatory areas of a face that convey kinship cues. The visual stimuli presented to the participants determine their ability to recognize kin relationships using the whole face as well as specific facial regions. The effect of participant gender and age and of the kin-relation pair of the stimulus is analyzed using quantitative measures such as accuracy, the discriminability index d', and perceptual information entropy. Utilizing the information obtained from the human study, a hierarchical kinship verification via representation learning (KVRL) framework is used to learn the representation of different face regions in an unsupervised manner. We propose a novel approach for feature representation termed filtered contractive deep belief networks (fcDBN). The proposed feature representation encodes relational information present in images using filters and a contractive regularization penalty. A compact representation of facial images of kin is extracted as output from the learned model, and a multi-layer neural network is used to verify kin accurately. A new WVU kinship database is created, which consists of multiple images per subject to facilitate kinship verification. The results show that the proposed deep learning framework (KVRL-fcDBN) yields state-of-the-art kinship verification accuracy on the WVU kinship database and on four existing benchmark data sets. Furthermore, kinship information is used as a soft biometric modality to boost the performance of face verification via product-of-likelihood-ratio and support vector machine based approaches. Using the proposed KVRL-fcDBN framework, an improvement of over 20% is observed in the performance of face verification.
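The discriminability index d' mentioned above is standard signal detection theory; a minimal sketch (the hit and false-alarm rates below are invented for illustration, not taken from the paper):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability: d' = z(H) - z(FA), where z is
    the inverse CDF (quantile function) of the standard normal."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical kin/non-kin judgments: 80% hits, 30% false alarms.
print(round(d_prime(0.80, 0.30), 2))  # → 1.37
```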

  18. Visual experience and blindsight: A methodological review

    DEFF Research Database (Denmark)

    Overgaard, Morten

    2011-01-01

    Blindsight is classically defined as residual visual capacity, e.g., to detect and identify visual stimuli, in the total absence of perceptual awareness following lesions to V1. However, whereas most experiments have investigated what blindsight patients can and cannot do, the literature contains...

  19. Visual search of illusory contours: Shape and orientation effects

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije

    2008-01-01

    Illusory contours are a specific class of visual stimuli: configurations perceived as integral wholes despite being composed of fragmented, incomplete elements. Owing to these features, illusory contours have gained much attention in the last decade as prototypical stimuli in investigations of the binding problem. A related question concerns the level at which illusory contours are visually processed. Neurophysiological studies show that processing of illusory contours occurs relatively early, at the level of V2; on the other hand, most experimental studies claim that illusory contours are perceived through the engagement of visual attention, which binds their elements into a whole percept. This research comprises two experiments in which visual search for illusory contours is based on shape and orientation. The main experimental procedure involved the task proposed by Bravo and Nakayama, except that instead of detection, subjects performed identification of one of two possible targets. In the first experiment, subjects detected the presence of an illusory square or an illusory triangle, while in the second experiment subjects detected two different orientations of an illusory triangle. The results are interpreted in terms of visual search and feature integration theory. Beyond the type of visual search task, search type proved to depend on specific features of the illusory shapes, which further complicates theoretical interpretation of the level at which they are perceived.

  20. Hemispheric specialization in dogs for processing different acoustic stimuli.

    Directory of Open Access Journals (Sweden)

    Marcello Siniscalchi

    Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and of their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere: the right hemisphere is used for processing vocalizations that elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, as is the specialisation of the right hemisphere for intense emotions.

  1. Interneuronal Mechanism for Tinbergen’s Hierarchical Model of Behavioral Choice

    Science.gov (United States)

    Pirger, Zsolt; Crossley, Michael; László, Zita; Naskar, Souvik; Kemenes, György; O’Shea, Michael; Benjamin, Paul R.; Kemenes, Ildikó

    2014-01-01

    Recent studies of behavioral choice support the notion that the decision to carry out one behavior rather than another depends on the reconfiguration of shared interneuronal networks [1]. We investigated another decision-making strategy, derived from the classical ethological literature [2, 3], which proposes that behavioral choice depends on competition between autonomous networks. According to this model, behavioral choice depends on inhibitory interactions between incompatible hierarchically organized behaviors. We provide evidence for this by investigating the interneuronal mechanisms mediating behavioral choice between two autonomous circuits that underlie whole-body withdrawal [4, 5] and feeding [6] in the pond snail Lymnaea. Whole-body withdrawal is a defensive reflex that is initiated by tactile contact with predators. As predicted by the hierarchical model, tactile stimuli that evoke whole-body withdrawal responses also inhibit ongoing feeding in the presence of feeding stimuli. By recording neurons from the feeding and withdrawal networks, we found no direct synaptic connections between the interneuronal and motoneuronal elements that generate the two behaviors. Instead, we discovered that behavioral choice depends on the interaction between two unique types of interneurons with asymmetrical synaptic connectivity that allows withdrawal to override feeding. One type of interneuron, the Pleuro-Buccal (PlB), is an extrinsic modulatory neuron of the feeding network that completely inhibits feeding when excited by touch-induced monosynaptic input from the second type of interneuron, Pedal-Dorsal12 (PeD12). PeD12 plays a critical role in behavioral choice by providing a synaptic pathway joining the two behavioral networks that underlies the competitive dominance of whole-body withdrawal over feeding. PMID:25155505

  2. Collinearity Impairs Local Element Visual Search

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  3. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    International Nuclear Information System (INIS)

    Nasaruddin, N H; Yusoff, A N; Kaur, S

    2014-01-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  4. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    Science.gov (United States)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  5. Hierarchical layered and semantic-based image segmentation using ergodicity map

    Science.gov (United States)

    Yadegar, Jacob; Liu, Xiaoqing

    2010-04-01

    Image segmentation plays a foundational role in image understanding and computer vision. Although great strides have been made and progress achieved on automatic/semi-automatic image segmentation algorithms, designing a generic, robust, and efficient image segmentation algorithm is still challenging. Human vision is still far superior compared to computer vision, especially in interpreting semantic meanings/objects in images. We present a hierarchical/layered semantic image segmentation algorithm that can automatically and efficiently segment images into hierarchical layered/multi-scaled semantic regions/objects with contextual topological relationships. The proposed algorithm bridges the gap between high-level semantics and low-level visual features/cues (such as color, intensity, edge, etc.) through utilizing a layered/hierarchical ergodicity map, where ergodicity is computed based on a space filling fractal concept and used as a region dissimilarity measurement. The algorithm applies a highly scalable, efficient, and adaptive Peano-Cesaro triangulation/tiling technique to decompose the given image into a set of similar/homogenous regions based on low-level visual cues in a top-down manner. The layered/hierarchical ergodicity map is built through a bottom-up region dissimilarity analysis. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level of detail. The generated binary decomposition tree also provides efficient neighbor retrieval mechanisms for contextual topological object/region relationship generation. Experiments have been conducted within the maritime image environment where the segmented layered semantic objects include the basic level objects (i.e. sky/land/water) and deeper level objects in the sky/land/water surfaces. Experimental results demonstrate the proposed algorithm has the capability to robustly and efficiently segment images into layered semantic objects.
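    The merge step at the heart of such bottom-up schemes can be illustrated with a deliberately simplified stand-in. This sketch substitutes a plain mean-intensity difference for the paper's fractal ergodicity measure and a union-find forest for the Peano-Cesaro decomposition; it is not the authors' algorithm:

    ```python
    # Illustrative sketch of bottom-up region merging: 4-connected pixel
    # regions are greedily fused whenever their mean intensities differ by
    # less than a threshold. The dissimilarity measure here is a plain mean
    # difference, NOT the paper's ergodicity map.
    import numpy as np

    def hierarchical_merge(image, threshold):
        """Merge similar neighbouring regions; return a label map."""
        h, w = image.shape
        labels = np.arange(h * w).reshape(h, w)   # each pixel starts as a region
        parent = {i: i for i in range(h * w)}     # union-find forest

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]     # path compression
                i = parent[i]
            return i

        # Track sums/counts so region means can be updated incrementally.
        total = {i: float(v) for i, v in enumerate(image.ravel())}
        count = {i: 1 for i in range(h * w)}

        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                    ny, nx = y + dy, x + dx
                    if ny >= h or nx >= w:
                        continue
                    a, b = find(labels[y, x]), find(labels[ny, nx])
                    if a == b:
                        continue
                    if abs(total[a] / count[a] - total[b] / count[b]) < threshold:
                        parent[b] = a             # merge the two regions
                        total[a] += total[b]
                        count[a] += count[b]

        return np.vectorize(find)(labels)

    img = np.array([[0, 0, 9, 9],
                    [0, 0, 9, 9],
                    [0, 0, 9, 9]], dtype=float)
    out = hierarchical_merge(img, threshold=2.0)
    print(len(np.unique(out)))  # → 2  (the two flat patches survive as regions)
    ```

    A real hierarchy would be obtained by rerunning the merge with a sequence of increasing thresholds, yielding coarser layers from finer ones.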

  6. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    Science.gov (United States)

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  7. Partial recovery of visual-spatial remapping of touch after restoring vision in a congenitally blind man.

    Science.gov (United States)

    Ley, Pia; Bottari, Davide; Shenoy, Bhamy H; Kekunnaya, Ramesh; Röder, Brigitte

    2013-05-01

    In an initial processing step, sensory events are encoded in modality specific representations in the brain but seem to be automatically remapped into a supra-modal, presumably visual-external frame of reference. To test whether there is a sensitive phase in the first years of life during which visual input is crucial for the acquisition of this remapping process, we tested a single case of a congenitally blind man whose sight was restored after the age of two years. HS performed a tactile temporal order judgment task (TOJ) which required judging the temporal order of two tactile stimuli, one presented to each index finger. In addition, a visual-tactile cross-modal congruency task was run, in which spatially congruent and spatially incongruent visual distractor stimuli were presented together with tactile stimuli. The tactile stimuli had to be localized. Both tasks were performed with an uncrossed and a crossed hand posture. Similar to congenitally blind individuals HS did not show a crossing effect in the tactile TOJ task suggesting an anatomical rather than visual-external coding of touch. In the visual-tactile task, however, external remapping of touch was observed though incomplete compared to sighted controls. These data support the hypothesis of a sensitive phase for the acquisition of an automatic use of visual-spatial representations for coding tactile input. Nonetheless, these representations seem to be acquired to some extent after the end of congenital blindness but seem to be recruited only in the context of visual stimuli and are used with a reduced efficiency. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Attentional capture by social stimuli in young infants

    OpenAIRE

    Gluckman, Maxie; Johnson, Scott P.

    2013-01-01

    We investigated the possibility that a range of social stimuli capture the attention of 6-month-old infants when in competition with other non-face objects. Infants viewed a series of six-item arrays in which one target item was a face, body part, or animal as their eye movements were recorded. Stimulus arrays were also processed for relative salience of each item in terms of color, luminance, and amount of contour. Targets were rarely the most visually salient items in the arrays, yet inf...

  9. VRML metabolic network visualizer.

    Science.gov (United States)

    Rojdestvenski, Igor

    2003-03-01

    A successful data collection visualization should satisfy a set of many requirements: unification of diverse data formats, support for serendipity research, support of hierarchical structures, algorithmizability, vast information density, Internet-readiness, and others. Recently, virtual reality has made significant progress in engineering, architectural design, entertainment and communication. We experiment with the possibility of using immersive abstract three-dimensional visualizations of metabolic networks. We present the trial Metabolic Network Visualizer software, which produces a graphical representation of a metabolic network as a VRML world from a formal description written in a simple SGML-type scripting language.
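    As a rough illustration of generating a VRML world programmatically from a network description (the circular layout, sphere/line geometry, and input format here are our own assumptions, not the tool's SGML scripting language):

    ```python
    # Sketch: emit a VRML97 world from a metabolite adjacency list, with one
    # sphere per metabolite and an IndexedLineSet for the reaction edges.
    import math

    def network_to_vrml(nodes, edges, radius=5.0):
        """Place nodes on a circle and return VRML97 text for the network."""
        pos = {}
        for i, name in enumerate(nodes):
            angle = 2 * math.pi * i / len(nodes)
            pos[name] = (radius * math.cos(angle), radius * math.sin(angle), 0.0)

        parts = ["#VRML V2.0 utf8"]
        for name, (x, y, z) in pos.items():
            parts.append(
                f"Transform {{ translation {x:.2f} {y:.2f} {z:.2f} "
                f"children [ Shape {{ geometry Sphere {{ radius 0.3 }} }} ] }}"
            )
        coords = " ".join(f"{c:.2f}" for n in nodes for c in pos[n])
        index = " ".join(f"{nodes.index(a)} {nodes.index(b)} -1" for a, b in edges)
        parts.append(
            "Shape { geometry IndexedLineSet { "
            f"coord Coordinate {{ point [ {coords} ] }} "
            f"coordIndex [ {index} ] }} }}"
        )
        return "\n".join(parts)

    world = network_to_vrml(["glucose", "G6P", "F6P"],
                            [("glucose", "G6P"), ("G6P", "F6P")])
    print(world.splitlines()[0])  # → #VRML V2.0 utf8
    ```

    The resulting text can be saved with a `.wrl` extension and opened in any VRML97-capable viewer.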

  10. Brain correlates of automatic visual change detection.

    Science.gov (United States)

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the
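    The Bayesian Causal Inference comparison the abstract refers to can be sketched in miniature: compute the likelihood of the two cues under a single shared source versus two independent sources, then form the posterior probability of a common cause. The closed-form Gaussian likelihoods follow the standard causal-inference model; all parameter values below are illustrative, not fitted to the study's data:

    ```python
    # Toy causal-inference computation for one visual (xv) and one auditory
    # (xa) location sample, assuming Gaussian noise and a zero-mean spatial
    # prior. Sigma values are illustrative assumptions.
    import math

    def common_cause_posterior(xv, xa, sigma_v=1.0, sigma_a=3.0,
                               sigma_p=10.0, p_common=0.5):
        """Posterior probability that xv and xa share a single cause."""
        vv, va, vp = sigma_v**2, sigma_a**2, sigma_p**2

        # C = 1: both cues arise from one source s ~ N(0, vp); s integrated out.
        denom = vv * va + vv * vp + va * vp
        like_c1 = math.exp(-((xv - xa)**2 * vp + xv**2 * va + xa**2 * vv)
                           / (2 * denom)) / (2 * math.pi * math.sqrt(denom))

        # C = 2: two independent sources, each drawn from the N(0, vp) prior.
        def marginal(x, v):
            return math.exp(-x**2 / (2 * (v + vp))) / math.sqrt(2 * math.pi * (v + vp))
        like_c2 = marginal(xv, vv) * marginal(xa, va)

        return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Nearby cues favour a common cause; widely separated cues favour two.
    print(common_cause_posterior(0.0, 0.5) > common_cause_posterior(0.0, 15.0))  # → True
    ```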

  12. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only

  13. A Bilateral Advantage for Storage in Visual Working Memory

    Science.gov (United States)

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  14. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind

  15. The Role of Inhibition in Avoiding Distraction by Salient Stimuli.

    Science.gov (United States)

    Gaspelin, Nicholas; Luck, Steven J

    2018-01-01

    Researchers have long debated whether salient stimuli can involuntarily 'capture' visual attention. We review here evidence for a recently discovered inhibitory mechanism that may help to resolve this debate. This evidence suggests that salient stimuli naturally attempt to capture attention, but capture can be avoided if the salient stimulus is suppressed before it captures attention. Importantly, the suppression process can be more or less effective as a result of changing task demands or lapses in cognitive control. Converging evidence for the existence of this suppression mechanism comes from multiple sources, including psychophysics, eye-tracking, and event-related potentials (ERPs). We conclude that the evidence for suppression is strong, but future research will need to explore the nature and limits of this mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    Science.gov (United States)

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. A visual representation system for the scheduling and management of projects

    NARCIS (Netherlands)

    Pollalis, S.N.

    1992-01-01

    This work proposes a new system for the visual representation of projects that displays the quantities of work, resources and cost. This new system, called the Visual Scheduling and Management System (VSMS), has a built-in hierarchical system to provide

  18. False memories to emotional stimuli are not equally affected in right- and left-brain-damaged stroke patients.

    Science.gov (United States)

    Buratto, Luciano Grüdtner; Zimmermann, Nicolle; Ferré, Perrine; Joanette, Yves; Fonseca, Rochele Paz; Stein, Lilian Milnitsky

    2014-10-01

    Previous research has attributed to the right hemisphere (RH) a key role in eliciting false memories to visual emotional stimuli. These results have been explained in terms of two right-hemisphere properties: (i) that emotional stimuli are preferentially processed in the RH and (ii) that visual stimuli are represented more coarsely in the RH. According to this account, false emotional memories are preferentially produced in the RH because emotional stimuli are both more strongly and more diffusely activated during encoding, leaving a memory trace that can be erroneously reactivated by similar but unstudied emotional items at test. If this right-hemisphere hypothesis is correct, then RH damage should result in a reduction in false memories to emotional stimuli relative to left-hemisphere lesions. To investigate this possibility, groups of right-brain-damaged (RBD, N=15), left-brain-damaged (LBD, N=15) and healthy (HC, N=30) participants took part in a recognition memory experiment with emotional (negative and positive) and non-emotional pictures. False memories were operationalized as incorrect responses to unstudied pictures that were similar to studied ones. Both RBD and LBD participants showed similar reductions in false memories for negative pictures relative to controls. For positive pictures, however, false memories were reduced only in RBD patients. The results provide only partial support for the right-hemisphere hypothesis and suggest that inter-hemispheric cooperation models may be necessary to fully account for false emotional memories. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Using Technology to Support Visual Learning Strategies

    Science.gov (United States)

    O'Bannon, Blanche; Puckett, Kathleen; Rakes, Glenda

    2006-01-01

    Visual learning is a strategy for visually representing the structure of information and for representing the ways in which concepts are related. Based on the work of Ausubel, these hierarchical maps facilitate student learning of unfamiliar information in the K-12 classroom. This paper presents the research base for this Type II computer tool, as…

  20. Visually Evoked Spiking Evolves While Spontaneous Ongoing Dynamics Persist

    Science.gov (United States)

    Huys, Raoul; Jirsa, Viktor K.; Darokhan, Ziauddin; Valentiniene, Sonata; Roland, Per E.

    2016-01-01

    Neurons in the primary visual cortex spontaneously spike even when there are no visual stimuli. It is unknown whether the spiking evoked by visual stimuli is just a modification of the spontaneous ongoing cortical spiking dynamics or whether the spontaneous spiking state disappears and is replaced by evoked spiking. This study of laminar recordings of spontaneous spiking and visually evoked spiking of neurons in the ferret primary visual cortex shows that the spiking dynamics does not change: the spontaneous spiking as well as evoked spiking is controlled by a stable and persisting fixed point attractor. Its existence guarantees that evoked spiking return to the spontaneous state. However, the spontaneous ongoing spiking state and the visual evoked spiking states are qualitatively different and are separated by a threshold (separatrix). The functional advantage of this organization is that it avoids the need for a system reorganization following visual stimulation, and impedes the transition of spontaneous spiking to evoked spiking and the propagation of spontaneous spiking from layer 4 to layers 2–3. PMID:26778982

  1. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    Science.gov (United States)

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated

  2. Hierarchical video summarization based on context clustering

    Science.gov (United States)

    Tseng, Belle L.; Smith, John R.

    2003-11-01

    A personalized video summary is dynamically generated in our video personalization and summarization system based on user preference and usage environment. The three-tier personalization system adopts the server-middleware-client architecture in order to maintain, select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. In this paper, the metadata includes visual semantic annotations and automatic speech transcriptions. Our personalization and summarization engine in the middleware selects the optimal set of desired video segments by matching shot annotations and sentence transcripts with user preferences. Besides finding the desired contents, the objective is to present a coherent summary. There are diverse methods for creating summaries, and we focus on the challenges of generating a hierarchical video summary based on context information. In our summarization algorithm, three inputs are used to generate the hierarchical video summary output. These inputs are (1) MPEG-7 metadata descriptions of the contents in the server, (2) user preference and usage environment declarations from the user client, and (3) context information including MPEG-7 controlled term list and classification scheme. In a video sequence, descriptions and relevance scores are assigned to each shot. Based on these shot descriptions, context clustering is performed to collect consecutively similar shots to correspond to hierarchical scene representations. The context clustering is based on the available context information, and may be derived from domain knowledge or rules engines. Finally, the selection of structured video segments to generate the hierarchical summary efficiently balances between scene representation and shot selection.
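    The consecutive-shot clustering step can be sketched as a single greedy pass over the shot sequence; the keyword-overlap similarity used here is an illustrative stand-in for the system's MPEG-7 description matching:

    ```python
    # Sketch: group consecutive shots into scenes. A shot joins the current
    # scene if its annotation keywords overlap the previous shot's enough;
    # the Jaccard measure and threshold are illustrative assumptions.
    def cluster_shots(shot_annotations, min_overlap=0.3):
        """Greedily cluster consecutive shots; return lists of shot indices."""
        scenes, current = [], [0]
        for i in range(1, len(shot_annotations)):
            a, b = set(shot_annotations[i - 1]), set(shot_annotations[i])
            overlap = len(a & b) / max(len(a | b), 1)   # Jaccard similarity
            if overlap >= min_overlap:
                current.append(i)        # similar enough: same scene
            else:
                scenes.append(current)   # dissimilar: start a new scene
                current = [i]
        scenes.append(current)
        return scenes

    shots = [["beach", "sea"], ["beach", "sand"], ["office", "desk"], ["office"]]
    print(cluster_shots(shots))  # → [[0, 1], [2, 3]]
    ```

    A hierarchy of summaries then falls out of reapplying the same pass at coarser thresholds, so that scenes merge into larger narrative units.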

  3. Analyzing the User Behavior toward Electronic Commerce Stimuli.

    Science.gov (United States)

    Lorenzo-Romero, Carlota; Alarcón-Del-Amo, María-Del-Carmen; Gómez-Borja, Miguel-Ángel

    2016-01-01

    Based on the Stimulus-Organism-Response paradigm this research analyzes the main differences between the effects of two types of web technologies: Verbal web technology (i.e., navigational structure as utilitarian stimulus) versus non-verbal web technology (music and presentation of products as hedonic stimuli). Specific webmosphere stimuli have not been examined yet as separate variables and their impact on internal and behavioral responses seems unknown. Therefore, the objective of this research consists in analyzing the impact of these web technologies -which constitute the web atmosphere or webmosphere of a website- on shopping human behavior (i.e., users' internal states -affective, cognitive, and satisfaction- and behavioral responses - approach responses, and real shopping outcomes-) within the retail online store created by computer, taking into account some mediator variables (i.e., involvement, atmospheric responsiveness, and perceived risk). A 2 ("free" versus "hierarchical" navigational structure) × 2 ("on" versus "off" music) × 2 ("moving" versus "static" images) between-subjects computer experimental design is used to test empirically this research. In addition, an integrated methodology was developed allowing the simulation, tracking and recording of virtual user behavior within an online shopping environment. As main conclusion, this study suggests that the positive responses of online consumers might increase when they are allowed to freely navigate the online stores and their experience is enriched by animated GIFs and music background. The effect caused by mediator variables modifies relatively the final shopping human behavior.
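    The 2 × 2 × 2 between-subjects design amounts to eight experimental cells, which can be enumerated mechanically (factor and level names follow the abstract; the dictionary layout is ours):

    ```python
    # Enumerate the cells of the 2x2x2 between-subjects factorial design.
    from itertools import product

    factors = {
        "navigation": ["free", "hierarchical"],
        "music": ["on", "off"],
        "images": ["moving", "static"],
    }

    conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(conditions))  # → 8
    ```

    Each participant is assigned to exactly one of these eight cells, which is what makes the design between-subjects rather than within-subjects.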

  4. Analyzing the user behavior towards Electronic Commerce stimuli

    Directory of Open Access Journals (Sweden)

    Carlota Lorenzo-Romero

    2016-11-01

Full Text Available Based on the Stimulus-Organism-Response paradigm this research analyzes the main differences between the effects of two types of web technologies: Verbal web technology (i.e., navigational structure as utilitarian stimulus) versus nonverbal web technology (music and presentation of products as hedonic stimuli). Specific webmosphere stimuli have not been examined yet as separate variables and their impact on internal and behavioral responses seems unknown. Therefore, the objective of this research consists in analyzing the impact of these web technologies –which constitute the web atmosphere or webmosphere of a website– on shopping human behavior (i.e., users’ internal states -affective, cognitive, and satisfaction- and behavioral responses -approach responses, and real shopping outcomes-) within the retail online store created by computer, taking into account some mediator variables (i.e., involvement, atmospheric responsiveness, and perceived risk). A 2 (free versus hierarchical navigational structure) × 2 (on versus off music) × 2 (moving versus static images) between-subjects computer experimental design is used to test empirically this research. In addition, an integrated methodology was developed allowing the simulation, tracking and recording of virtual user behavior within an online shopping environment. As main conclusion, this study suggests that the positive responses of online consumers might increase when they are allowed to freely navigate the online stores and their experience is enriched by animate gifts and music background. The effect caused by mediator variables modifies relatively the final shopping human behavior.

  5. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance

    Science.gov (United States)

    Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836

  6. TypingSuite: Integrated Software for Presenting Stimuli, and Collecting and Analyzing Typing Data

    Science.gov (United States)

    Mazerolle, Erin L.; Marchand, Yannick

    2015-01-01

    Research into typing patterns has broad applications in both psycholinguistics and biometrics (i.e., improving security of computer access via each user's unique typing patterns). We present a new software package, TypingSuite, which can be used for presenting visual and auditory stimuli, collecting typing data, and summarizing and analyzing the…

  7. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging.

    Science.gov (United States)

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-08-01

Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    Science.gov (United States)

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon (e.g.) its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβposterior Kenyon cells.

  9. On hierarchical models for visual recognition and learning of objects, scenes, and activities

    CERN Document Server

    Spehr, Jens

    2015-01-01

    In many computer vision applications, objects have to be learned and recognized in images or image sequences. This book presents new probabilistic hierarchical models that allow an efficient representation of multiple objects of different categories, scales, rotations, and views. The idea is to exploit similarities between objects and object parts in order to share calculations and avoid redundant information. Furthermore inference approaches for fast and robust detection are presented. These new approaches combine the idea of compositional and similarity hierarchies and overcome limitations of previous methods. Besides classical object recognition the book shows the use for detection of human poses in a project for gait analysis. The use of activity detection is presented for the design of environments for ageing, to identify activities and behavior patterns in smart homes. In a presented project for parking spot detection using an intelligent vehicle, the proposed approaches are used to hierarchically model...

  10. Face processing is gated by visual spatial attention

    Directory of Open Access Journals (Sweden)

    Roy E Crist

    2008-03-01

Full Text Available Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tends to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus – substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual-visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic, but rather, like other objects, strongly depends on endogenous factors such as the allocation of spatial attention. Moreover, these findings underscore the extensive influence that top-down attention exercises over the processing of

  11. Visual hallucinatory syndromes and the anatomy of the visual brain.

    Science.gov (United States)

    Santhouse, A M; Howard, R J; ffytche, D H

    2000-10-01

    We have set out to identify phenomenological correlates of cerebral functional architecture within Charles Bonnet syndrome (CBS) hallucinations by looking for associations between specific hallucination categories. Thirty-four CBS patients were examined with a structured interview/questionnaire to establish the presence of 28 different pathological visual experiences. Associations between categories of pathological experience were investigated by an exploratory factor analysis. Twelve of the pathological experiences partitioned into three segregated syndromic clusters. The first cluster consisted of hallucinations of extended landscape scenes and small figures in costumes with hats; the second, hallucinations of grotesque, disembodied and distorted faces with prominent eyes and teeth; and the third, visual perseveration and delayed palinopsia. The three visual psycho-syndromes mirror the segregation of hierarchical visual pathways into streams and suggest a novel theoretical framework for future research into the pathophysiology of neuropsychiatric syndromes.

  12. Audio visual interaction in the context of multi-media applications

    NARCIS (Netherlands)

    Kohlrausch, A.G.; Par, van de S.L.J.D.E.; Blauert, J.

    2005-01-01

    In our natural environment, we simultaneously receive information through various sensory modalities. The properties of these stimuli are coupled by physical laws, so that, e. g., auditory and visual stimuli caused by the same event have a specific temporal, spatial and contextual relation when

  13. First- and second-order contrast sensitivity functions reveal disrupted visual processing following mild traumatic brain injury.

    Science.gov (United States)

    Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza

    2016-05-01

Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st-order and 2nd-order contrast sensitivity functions (CSFs) - the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies - no study has characterized the effect of TBI on the full CSF for both 1st- and 2nd-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st- and 2nd-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st- and 2nd-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st-order stimuli and 2nd-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Implicit and Explicit Associations with Erotic Stimuli in Women with and Without Sexual Problems.

    Science.gov (United States)

    van Lankveld, Jacques J D M; Bandell, Myrthe; Bastin-Hurek, Eva; van Beurden, Myra; Araz, Suzan

    2018-02-20

    Conceptual models of sexual functioning have suggested a major role for implicit cognitive processing in sexual functioning. The present study aimed to investigate implicit and explicit cognition in sexual functioning in women. Gynecological patients with (N = 38) and without self-reported sexual problems (N = 41) were compared. Participants performed two Single-Target Implicit Association Tests (ST-IAT), measuring the implicit association of visual erotic stimuli with attributes representing, respectively, valence and motivation. Participants also rated the erotic pictures that were shown in the ST-IATs on the dimensions of valence, attractiveness, and sexual excitement, to assess their explicit associations with these erotic stimuli. Participants completed the Female Sexual Functioning Index and the Female Sexual Distress Scale for continuous measures of sexual functioning, and the Hospital Anxiety and Depression Scale to assess depressive symptoms. Compared to nonsymptomatic women, women with sexual problems were found to show more negative implicit associations of erotic stimuli with wanting (implicit sexual motivation). Across both groups, stronger implicit associations of erotic stimuli with wanting predicted higher level of sexual functioning. More positive explicit ratings of erotic stimuli predicted lower level of sexual distress across both groups.

  15. Linking crowding, visual span, and reading.

    Science.gov (United States)

    He, Yingchen; Legge, Gordon E

    2017-09-01

    The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.

  16. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    Science.gov (United States)

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  17. Integration of motion energy from overlapping random background noise increases perceived speed of coherently moving stimuli.

    Science.gov (United States)

    Chuang, Jason; Ausloos, Emily C; Schwebach, Courtney A; Huang, Xin

    2016-12-01

    The perception of visual motion can be profoundly influenced by visual context. To gain insight into how the visual system represents motion speed, we investigated how a background stimulus that did not move in a net direction influenced the perceived speed of a center stimulus. Visual stimuli were two overlapping random-dot patterns. The center stimulus moved coherently in a fixed direction, whereas the background stimulus moved randomly. We found that human subjects perceived the speed of the center stimulus to be significantly faster than its veridical speed when the background contained motion noise. Interestingly, the perceived speed was tuned to the noise level of the background. When the speed of the center stimulus was low, the highest perceived speed was reached when the background had a low level of motion noise. As the center speed increased, the peak perceived speed was reached at a progressively higher background noise level. The effect of speed overestimation required the center stimulus to overlap with the background. Increasing the background size within a certain range enhanced the effect, suggesting spatial integration. The speed overestimation was significantly reduced or abolished when the center stimulus and the background stimulus had different colors, or when they were placed at different depths. When the center- and background-stimuli were perceptually separable, speed overestimation was correlated with perceptual similarity between the center- and background-stimuli. These results suggest that integration of motion energy from random motion noise has a significant impact on speed perception. Our findings put new constraints on models regarding the neural basis of speed perception. Copyright © 2016 the American Physiological Society.

  18. Evidence for unlimited capacity processing of simple features in visual cortex.

    Science.gov (United States)

    White, Alex L; Runeson, Erik; Palmer, John; Ernst, Zachary R; Boynton, Geoffrey M

    2017-06-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level-dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity.

  19. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  20. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    Science.gov (United States)

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  1. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

Full Text Available Previous electrophysiological studies of automatic language processing revealed early (100-200 ms) reflections of access to lexical characteristics of the speech signal using the so-called mismatch negativity (MMN), a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention on spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in the perifoveal area outside the visual focus of attention, as the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  2. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Effects of auditory and visual modalities in recall of words.

    Science.gov (United States)

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  4. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  5. Sustained Activity in Hierarchical Modular Neural Networks: Self-Organized Criticality and Oscillations

    Science.gov (United States)

    Wang, Sheng-Jun; Hilgetag, Claus C.; Zhou, Changsong

    2010-01-01

    Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. In particular, they are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality (SOC). We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We found that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and SOC, which are not present in the respective random networks. The mechanism underlying the sustained activity is that each dense module cannot sustain activity on its own, but displays SOC in the presence of weak perturbations. Therefore, the hierarchical modular networks provide the coupling among subsystems with SOC. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient information
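
    The hierarchical modular architecture described in this record can be sketched in a few lines. The following is an editor's illustration, not the authors' model: a toy undirected graph with dense within-module connectivity and sparse between-module links; the function name and all parameter values are assumptions.

```python
import numpy as np

def hierarchical_modular_adjacency(n_modules=4, module_size=32,
                                   p_intra=0.3, p_inter=0.01, seed=0):
    """Undirected random graph with dense modules and sparse
    inter-module links (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    n = n_modules * module_size
    membership = np.repeat(np.arange(n_modules), module_size)
    # connection probability depends on whether two nodes share a module
    same = membership[:, None] == membership[None, :]
    p = np.where(same, p_intra, p_inter)
    upper = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return upper + upper.T  # symmetric, zero diagonal

A = hierarchical_modular_adjacency()
```

    A second hierarchical level could be built the same way by treating each module as a node of a coarser modular graph.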

  6. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations

    Directory of Open Access Journals (Sweden)

    Sheng-Jun Wang

    2011-06-01

    Full Text Available Cerebral cortical brain networks possess a number of conspicuous features of structure and dynamics. First, these networks have an intricate, non-random organization. They are structured in a hierarchical modular fashion, from large-scale regions of the whole brain, via cortical areas and area subcompartments organized as structural and functional maps to cortical columns, and finally circuits made up of individual neurons. Second, the networks display self-organized sustained activity, which is persistent in the absence of external stimuli. At the systems level, such activity is characterized by complex rhythmical oscillations over a broadband background, while at the cellular level, neuronal discharges have been observed to display avalanches, indicating that cortical networks are at the state of self-organized criticality. We explored the relationship between hierarchical neural network organization and sustained dynamics using large-scale network modeling. Previously, it was shown that sparse random networks with balanced excitation and inhibition can sustain neural activity without external stimulation. We find that a hierarchical modular architecture can generate sustained activity better than random networks. Moreover, the system can simultaneously support rhythmical oscillations and self-organized criticality, which are not present in the respective random networks. The underlying mechanism is that each dense module cannot sustain activity on its own, but displays self-organized criticality in the presence of weak perturbations. The hierarchical modular networks provide the coupling among subsystems with self-organized criticality. These results imply that the hierarchical modular architecture of cortical networks plays an important role in shaping the ongoing spontaneous activity of the brain, potentially allowing the system to take advantage of both the sensitivity of critical states and the predictability and timing of oscillations for efficient

  7. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel, and that there is a ‘race’ for selection and representation in visual short term memory (VSTM). In the basic TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of the letters (e.g., the red letters; partial report). Fitting the model … modelled using a log-logistic function than an exponential function. A more challenging difference is that in the partial report task, there is more target-distractor confusion for auditory than visual stimuli. This failure of object-formation (prior to attentional object-selection) is not yet effectively …
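
    TVA's core assumption, a parallel exponential race into a limited-capacity VSTM, can be illustrated with a minimal simulation. This sketch is the editor's, not the authors' implementation: each item finishes processing at an exponentially distributed time, and items finishing before the exposure ends enter VSTM up to a capacity K; the function name, rates, and capacity are all illustrative.

```python
import numpy as np

def mean_report(rates, exposure, capacity=4, n_trials=5000, seed=0):
    """TVA-style race sketch: items finish at Exp(rate)-distributed
    times; those done before the exposure ends are encoded into VSTM,
    up to `capacity` items. Returns the mean number reported."""
    rng = np.random.default_rng(seed)
    rates = np.asarray(rates, dtype=float)
    finish = rng.exponential(1.0 / rates, size=(n_trials, rates.size))
    n_done = (finish < exposure).sum(axis=1)      # items that finish in time
    return np.minimum(n_done, capacity).mean()    # capped by VSTM capacity
```

    Raising the processing rates increases mean report, but the VSTM capacity K imposes a hard ceiling, which is the signature whole-report pattern the model is fitted to.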

  8. Picture book exposure elicits positive visual preferences in toddlers.

    Science.gov (United States)

    Houston-Price, Carmel; Burton, Eliza; Hickinson, Rachel; Inett, Jade; Moore, Emma; Salmon, Katherine; Shiba, Paula

    2009-09-01

    Although the relationship between "mere exposure" and attitude enhancement is well established in the adult domain, there has been little similar work with children. This article examines whether toddlers' visual attention toward pictures of foods can be enhanced by repeated visual exposure to pictures of foods in a parent-administered picture book. We describe three studies that explored the number and nature of exposures required to elicit positive visual preferences for stimuli and the extent to which induced preferences generalize to other similar items. Results show that positive preferences for stimuli are easily and reliably induced in children and, importantly, that this effect of exposure is not restricted to the exposed stimulus per se but also applies to new representations of the exposed item.

  9. Hierarchical ordering with partial pairwise hierarchical relationships on the macaque brain data sets.

    Directory of Open Access Journals (Sweden)

    Woosang Lim

    Full Text Available Hierarchical organizations of information processing in brain networks are known to exist and have been widely studied. To find proper hierarchical structures in the macaque brain, traditional methods need the entire set of pairwise hierarchical relationships between cortical areas. In this paper, we present a new method that discovers hierarchical structures of macaque brain networks by using partial information about pairwise hierarchical relationships. Our method uses graph-based manifold learning to exploit inherent relationships, and computes pseudo distances of hierarchical levels for every pair of cortical areas. Then, we compute hierarchy levels of all cortical areas by minimizing the sum of squared hierarchical distance errors given the hierarchical information of a few cortical areas. We evaluate our method on the macaque brain data sets whose true hierarchical levels are known from the FV91 model. The experimental results show that hierarchy levels computed by our method are similar to the FV91 model, and the errors are much smaller than those of hierarchical clustering approaches.
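
    The step of minimizing the sum of squared hierarchical distance errors, given only partial pairwise information plus a few known levels, can be sketched as an ordinary least-squares problem. This is an editor's illustration, not the paper's code; the function name, indices, and anchor scheme are hypothetical.

```python
import numpy as np

def fit_hierarchy_levels(n_areas, pair_diffs, anchors):
    """Least-squares hierarchy levels from partial pairwise level
    differences plus a few known anchor levels.
    pair_diffs: iterable of (i, j, d) with d ~ level[i] - level[j]
    anchors:    dict {area: known_level} pinning the solution."""
    rows, rhs = [], []
    for i, j, d in pair_diffs:                 # one row per known pair
        r = np.zeros(n_areas)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(d)
    for i, h in anchors.items():               # one row per anchored area
        r = np.zeros(n_areas)
        r[i] = 1.0
        rows.append(r)
        rhs.append(h)
    levels, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return levels
```

    As long as the pairs connect all areas to at least one anchor, the system is well posed even when most pairwise relationships are missing.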

  10. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  11. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to
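
    The HBM's central idea, an activity difference between two competing populations evolving as Brownian motion with drift, with the dominant percept given by the sign of that difference, can be sketched as a short simulation. This is an editor's toy, not the authors' fitted model; all parameter values are illustrative.

```python
import numpy as np

def percept_switches(drift=0.0, sigma=1.0, dt=0.01, t_max=200.0, seed=1):
    """Differential activity of two competing populations as drifting
    Brownian motion; zero crossings of the difference are percept
    changes. Returns the number of switches in one run."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    incr = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
    x = np.cumsum(incr)          # activity difference over time
    percept = x > 0              # which percept currently dominates
    return int(np.count_nonzero(percept[1:] != percept[:-1]))
```

    A nonzero drift biases one percept, lengthening its dominance durations, which is the kind of parameter the model uses to capture individual and group differences.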

  12. Sparsey™: Spatiotemporal Event Recognition via Deep Hierarchical Sparse Distributed Codes

    Directory of Open Access Journals (Sweden)

    Gerard J Rinkus

    2014-12-01

    Full Text Available The visual cortex’s hierarchical, multi-level organization is captured in many biologically inspired computational vision models, the general idea being that progressively larger scale (spatially/temporally) and more complex visual features are represented in progressively higher areas. However, most earlier models use localist representations (codes) in each representational field (which we equate with the cortical macrocolumn, or mac) at each level. In localism, each represented feature/concept/event (hereinafter item) is coded by a single unit. The model we describe, Sparsey, is hierarchical as well but crucially, it uses sparse distributed coding (SDC) in every mac in all levels. In SDC, each represented item is coded by a small subset of the mac’s units. The SDCs of different items can overlap and the size of overlap between items can be used to represent their similarity. The difference between localism and SDC is crucial because SDC allows the two essential operations of associative memory, storing a new item and retrieving the best-matching stored item, to be done in fixed time for the life of the model. Since the model’s core algorithm, which does both storage and retrieval (inference), makes a single pass over all macs on each time step, the overall model’s storage/retrieval operation is also fixed-time, a criterion we consider essential for scalability to huge (Big Data) problems. A 2010 paper described a non-hierarchical version of this model in the context of purely spatial pattern processing. Here, we elaborate a fully hierarchical model (arbitrary numbers of levels and macs per level), describing novel model principles like progressive critical periods, dynamic modulation of principal cells’ activation functions based on a mac-level familiarity measure, representation of multiple simultaneously active hypotheses, a novel method of time warp invariant recognition, and we report results showing learning/recognition of
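
    The contrast the record draws between localist codes and SDCs, and the fixed-time best-match retrieval, can be sketched simply: each item's code is a small subset of a mac's units, and the overlap between two codes measures their similarity. This is an editor's illustration of the coding idea only, not Sparsey's algorithm; unit counts and names are assumptions.

```python
import random

def make_code(n_units=100, k=5, rng=None):
    """A sparse distributed code: a small subset of a mac's units."""
    rng = rng or random.Random(0)
    return frozenset(rng.sample(range(n_units), k))

def best_match(stored, probe):
    """Retrieval sketch: return the stored item whose code overlaps
    the probe code the most (overlap size ~ similarity)."""
    return max(stored, key=lambda item: len(stored[item] & probe))
```

    Because every stored item is scored in the same single pass regardless of how many items have been stored, retrieval time does not grow with memory content, which is the fixed-time property the record emphasizes.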

  13. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Full Text Available Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before bistable decision? We argue that this is not the case, as the interference had a slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability especially if modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  14. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    Science.gov (United States)

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10-year-olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

  15. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has both visual and auditory relevant components. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality and both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  16. Facilitation of listening comprehension by visual information under noisy listening condition

    Science.gov (United States)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured under low auditory clarity, with pink noise at levels of -10 dB and -15 dB. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less; that the image was not helpful for comprehension when the delay was 8 frames (= 264 ms) or more; and that in some cases at the largest delay (32 frames), the video image interfered with comprehension.

  17. Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.

    Science.gov (United States)

    Badgaiyan, Rajendra D

    2012-12-01

    Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. It can be accomplished by manipulating the neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between the area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, the V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients however continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.

  18. Attention modulates the responses of simple cells in monkey primary visual cortex.

    Science.gov (United States)

    McAdams, Carrie J; Reid, R Clay

    2005-11-23

    Spatial attention has long been postulated to act as a spotlight that increases the salience of visual stimuli at the attended location. We examined the effects of attention on the receptive fields of simple cells in primary visual cortex (V1) by training macaque monkeys to perform a task with two modes. In the attended mode, the stimuli relevant to the animal's task overlay the receptive field of the neuron being recorded. In the unattended mode, the animal was cued to attend to stimuli outside the receptive field of that neuron. The relevant stimulus, a colored pixel, was briefly presented within a white-noise stimulus, a flickering grid of black and white pixels. The receptive fields of the neurons were mapped by correlating spikes with the white-noise stimulus in both attended and unattended modes. We found that attention could cause significant modulation of the visually evoked response despite an absence of significant effects on the overall firing rates. On further examination of the relationship between the strength of the visual stimulation and the firing rate, we found that attention appears to cause multiplicative scaling of the visually evoked responses of simple cells, demonstrating that attention reaches back to the initial stages of visual cortical processing.
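
    Multiplicative scaling of visually evoked responses means the attended tuning curve is the unattended curve times a constant gain, so the neuron's stimulus preference is unchanged while every response is boosted by the same factor. A minimal editor's sketch with an illustrative Gaussian orientation tuning curve (the gain value and parameters are assumptions, not the paper's estimates):

```python
import numpy as np

def gaussian_tuning(theta, preferred=0.0, width=20.0, peak=30.0):
    """Illustrative orientation tuning curve (response in spikes/s)."""
    return peak * np.exp(-0.5 * ((theta - preferred) / width) ** 2)

theta = np.linspace(-90, 90, 181)
unattended = gaussian_tuning(theta)
attended = 1.3 * unattended  # attention as a single multiplicative gain
```

    A purely additive attention effect would instead shift the whole curve up and broaden the range of orientations evoking a criterion response; the multiplicative account preserves tuning shape on a log scale.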

  19. Olfactory or auditory stimulation and their hedonic values differentially modulate visual working memory

    Directory of Open Access Journals (Sweden)

    ANA M DONOSO

    2008-12-01

    Full Text Available Working memory (WM) designates the retention of objects or events in conscious awareness when these are not present in the environment. Many studies have focused on the interference properties of distracter stimuli in working memory, but these studies have mainly examined the influence of the intensity of these stimuli. Little is known about the memory modulation by the hedonic content of distracter stimuli, as they also may affect WM performance or attentional tasks. In this paper, we have studied the performance of a visual WM task where subjects recollect from five to eight visually presented objects while they are simultaneously exposed to an additional, albeit weak, auditory or olfactory distracter stimulus. We found that WM performance decreases as the number of items to remember increases, but this performance was unaltered by any of the distracter stimuli. However, when performance was correlated with the subjects' perceived hedonic values, distracter stimuli classified as negative exhibited higher error rates than positive, neutral or control stimuli. We demonstrate that the hedonic content of otherwise neutral stimuli can strongly modulate memory processes.

  20. Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

    NARCIS (Netherlands)

    Gayet, S.; van Maanen, L.; Heilbron, M.; Paffen, C.L.E.; Van Der Stigchel, S.

    2016-01-01

    The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate

  1. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    Science.gov (United States)

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  2. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    Science.gov (United States)

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  3. The sensory components of high-capacity iconic memory and visual working memory.

    Science.gov (United States)

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.
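
    The reported exponential decline toward chance can be written as a single formula, accuracy(t) = chance + (p0 - chance) * exp(-t / tau). The sketch below is the editor's, with illustrative parameter values only (the paper's chance level and time constant are not given here):

```python
import numpy as np

def stm_accuracy(t, p0=0.9, chance=0.5, tau=0.7):
    """Accuracy decays exponentially from an initial iconic-memory
    level p0 toward chance; p0, chance and tau are illustrative."""
    return chance + (p0 - chance) * np.exp(-t / tau)
```

    With any tau well under a second, the curve is indistinguishable from chance by roughly 2 s, matching the qualitative pattern the record describes.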

  4. The sensory components of high-capacity iconic memory and visual working memory

    Directory of Open Access Journals (Sweden)

    Claire eBradley

    2012-09-01

    Full Text Available Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more high-level alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of 3 different visual features (colour, orientation and motion) across a range of durations from 0 to 6 seconds. We found that the amount of information stored in iconic memory is smaller for motion than for colour or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ~2 seconds. Further experiments showed that performance for the 10 items at 1 second was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory and an effortful ‘lower-capacity’ visual working memory.

  5. Color vision in attention-deficit/hyperactivity disorder: a pilot visual evoked potential study.

    Science.gov (United States)

    Kim, Soyeon; Banaschewski, Tobias; Tannock, Rosemary

    2015-01-01

    Individuals with attention-deficit/hyperactivity disorder (ADHD) are reported to manifest visual problems (including ophthalmological and color perception, particularly for blue-yellow stimuli), but findings are inconsistent. Accordingly, this study investigated visual function and color perception in adolescents with ADHD using color Visual Evoked Potentials (cVEP), which provide an objective measure of color perception. Thirty-one adolescents (aged 13-18), 16 with a confirmed diagnosis of ADHD, and 15 healthy peers, matched for age, gender, and IQ participated in the study. All underwent an ophthalmological exam, as well as electrophysiological testing with color Visual Evoked Potentials (cVEP), which measured the latency and amplitude of the neural P1 response to chromatic (blue-yellow, red-green) and achromatic stimuli. No intergroup differences were found in the ophthalmological exam. However, significantly larger P1 amplitude was found for blue and yellow stimuli, but not red/green or achromatic stimuli, in the ADHD group (particularly in the medicated group) compared to controls. Larger amplitude in the P1 component for blue-yellow in the ADHD group compared to controls may account for the lack of difference in color perception tasks. We speculate that the larger amplitude for blue-yellow stimuli in early sensory processing (P1) might reflect a compensatory strategy for underlying problems including compromised retinal input of s-cones due to hypo-dopaminergic tone. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier España. All rights reserved.

  6. A local adaptive algorithm for emerging scale-free hierarchical networks

    International Nuclear Information System (INIS)

    Gomez Portillo, I J; Gleiser, P M

    2010-01-01

    In this work we study a growing network model with chaotic dynamical units that evolves using a local adaptive rewiring algorithm. Using numerical simulations we show that the model allows for the emergence of hierarchical networks. First, we show that the networks that emerge with the algorithm present a wide degree distribution that can be fitted by a power law function, and thus are scale-free networks. Using the LaNet-vi visualization tool we present a graphical representation that reveals a central core formed only by hubs, and also show the presence of a preferential attachment mechanism. In order to present a quantitative analysis of the hierarchical structure we analyze the clustering coefficient. In particular, we show that as the network grows the clustering becomes independent of system size, and also presents a power law decay as a function of the degree. Finally, we compare our results with a similar version of the model that has continuous non-linear phase oscillators as dynamical units. The results show that local interactions play a fundamental role in the emergence of hierarchical networks.
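
    The local clustering coefficient analyzed in this record is the fraction of a node's neighbour pairs that are themselves connected. A dependency-free editor's sketch (the function name and adjacency-list format are the editor's choices):

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient of `node` in an undirected graph
    given as an adjacency matrix (list of 0/1 rows)."""
    nbrs = [j for j, a in enumerate(adj[node]) if a and j != node]
    k = len(nbrs)
    if k < 2:
        return 0.0  # fewer than two neighbours: no pairs to close
    # count links among the node's neighbours (each pair once)
    links = sum(adj[u][v] for i, u in enumerate(nbrs) for v in nbrs[i + 1:])
    return 2.0 * links / (k * (k - 1))
```

    Averaging this quantity within degree bins gives the clustering-versus-degree curve whose power-law decay the record reports as a signature of hierarchical structure.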

  7. Distinct electrophysiological indices of maintenance in auditory and visual short-term memory.

    Science.gov (United States)

    Lefebvre, Christine; Vachon, François; Grimault, Stephan; Thibault, Jennifer; Guimond, Synthia; Peretz, Isabelle; Zatorre, Robert J; Jolicœur, Pierre

    2013-11-01

    We compared the electrophysiological correlates of the maintenance of non-musical tone sequences in auditory short-term memory (ASTM) with those of the short-term maintenance of sequences of coloured disks held in visual short-term memory (VSTM). The visual stimuli yielded a sustained posterior contralateral negativity (SPCN), suggesting that the maintenance of sequences of coloured stimuli engaged structures similar to those involved in the maintenance of simultaneous visual displays. On the other hand, maintenance of acoustic sequences produced a sustained negativity at fronto-central sites. This component is named the Sustained Anterior Negativity (SAN). The amplitude of the SAN increased with increasing load in ASTM and predicted individual differences in performance. There was no SAN in a control condition with the same auditory stimuli but no memory task, nor was there one associated with visual memory. These results suggest that the SAN is an index of brain activity related to the maintenance of representations in ASTM that is distinct from the maintenance of representations in VSTM. © 2013 Elsevier Ltd. All rights reserved.

  8. Emotional intelligence is a second-stratum factor of intelligence: evidence from hierarchical and bifactor models.

    Science.gov (United States)

    MacCann, Carolyn; Joseph, Dana L; Newman, Daniel A; Roberts, Richard D

    2014-04-01

    This article examines the status of emotional intelligence (EI) within the structure of human cognitive abilities. To evaluate whether EI is a 2nd-stratum factor of intelligence, data were fit to a series of structural models involving 3 indicators each for fluid intelligence, crystallized intelligence, quantitative reasoning, visual processing, and broad retrieval ability, as well as 2 indicators each for emotion perception, emotion understanding, and emotion management. Unidimensional, multidimensional, hierarchical, and bifactor solutions were estimated in a sample of 688 college and community college students. Results suggest adequate fit for 2 models: (a) an oblique 8-factor model (with 5 traditional cognitive ability factors and 3 EI factors) and (b) a hierarchical solution (with cognitive g at the highest level and EI representing a 2nd-stratum factor that loads onto g at λ = .80). The acceptable relative fit of the hierarchical model confirms the notion that EI is a group factor of cognitive ability, marking the expression of intelligence in the emotion domain. The discussion proposes a possible expansion of Cattell-Horn-Carroll theory to include EI as a 2nd-stratum factor of similar standing to factors such as fluid intelligence and visual processing.

  9. Sustained Splits of Attention within versus across Visual Hemifields Produce Distinct Spatial Gain Profiles.

    Science.gov (United States)

    Walter, Sabrina; Keitel, Christian; Müller, Matthias M

    2016-01-01

    Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the "across-hemifield" conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.

  10. Putative inhibitory training of a stimulus makes it a facilitator: a within-subject comparison of visual and auditory stimuli in autoshaping.

    Science.gov (United States)

    Nakajima, S

    2000-03-14

    Pigeons were trained on an A+, AB-, ABC+, AD-, ADE+ task in which stimulus A and the stimulus compounds ABC and ADE each signalled food (positive trials), and the compounds AB and AD each signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during training. After the birds learned to respond exclusively on the positive trials, the effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B consistently facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning are discussed.

  11. Extinction of Conditioned Responses to Methamphetamine-Associated Stimuli in Healthy Humans.

    Science.gov (United States)

    Cavallo, Joel S; Ruiz, Nicholas A; de Wit, Harriet

    2016-07-01

    Contextual stimuli present during drug experiences become associated with the drug through Pavlovian conditioning and are thought to sustain drug-seeking behavior. Thus, extinction of conditioned responses is an important target for treatment. To date, acquisition and extinction to drug-paired cues have been studied in animal models or drug-dependent individuals, but rarely in non-drug users. We have recently developed a procedure to study acquisition of conditioned responses after single doses of methamphetamine (MA) in healthy volunteers. Here, we examined extinction of these responses and their persistence after conditioning. Healthy adults (18-35 years; N = 20) received two pairings of audio-visual stimuli with MA (20 mg oral) or placebo. Responses to stimuli were assessed before and after conditioning, using three tasks: behavioral preference, attentional bias, and subjective "liking." Subjects exhibited behavioral preference for the drug-paired stimuli at the first post-conditioning test, but this declined rapidly on subsequent extinction tests. They also exhibited a bias to initially look towards the drug-paired stimuli at the first post-test session, but not thereafter. Subjects who experienced more positive subjective drug effects during conditioning exhibited a smaller decline in preference during the extinction phase. Further, longer inter-session intervals during the extinction phase were associated with less extinction of the behavioral preference measure. Conditioned responses after two pairings with MA extinguish quickly, and are influenced by both subjective drug effects and the extinction interval. Characterizing and refining this conditioning procedure will aid in understanding the acquisition and extinction processes of drug-related conditioned responses in humans.

  12. Intersensory Function in Newborns: Effect of Sound on Visual Preferences.

    Science.gov (United States)

    Lawson, Katharine Rieke; Turkewitz, Gerald

    1980-01-01

    Newborn infants' fixation of a graduated series of visual stimuli significantly differed in the absence and presence of white-noise bursts. Relative to the no-sound condition, sound resulted in the infants' tendency to look more at the low-intensity visual stimulus and less at the high-intensity visual stimulus. (Author/DB)

  13. Long-term memory of color stimuli in the jungle crow (Corvus macrorhynchos).

    Science.gov (United States)

    Bogale, Bezawork Afework; Sugawara, Satoshi; Sakano, Katsuhisa; Tsuda, Sonoko; Sugita, Shoei

    2012-03-01

    Wild-caught jungle crows (n = 20) were trained to discriminate between color stimuli in a two-alternative discrimination task. Next, crows were tested for long-term memory after 1-, 2-, 3-, 6-, and 10-month retention intervals. This preliminary study showed that jungle crows learn the task and reach a discrimination criterion (80% or more correct choices in two consecutive sessions of ten trials) within a few trials, some even in a single session. Most, if not all, crows successfully remembered, after all retention intervals, the visual stimulus that had been constantly reinforced during training. These results suggest that jungle crows have a high retention capacity for learned information, making no or very few errors even after a 10-month retention interval. This study is the first to show long-term memory for color stimuli in corvids following brief training, indicating that memory rather than rehearsal underlay performance. Memory of visual color information is vital for the exploitation of biological resources in crows. We suspect that jungle crows could remember the learned color discrimination even after a much longer retention interval.
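    The acquisition criterion quoted above (80% or more correct choices in two consecutive ten-trial sessions) is straightforward to express in code. A minimal sketch with hypothetical function and argument names, not the authors' scoring procedure:

```python
def reached_criterion(session_scores, threshold=0.8, run=2):
    """Return True once `run` consecutive sessions meet `threshold`.
    session_scores: fraction of correct choices per ten-trial session, in order."""
    streak = 0
    for score in session_scores:
        streak = streak + 1 if score >= threshold else 0
        if streak >= run:
            return True
    return False

print(reached_criterion([0.6, 0.8, 0.7, 0.9, 0.9]))  # True: last two sessions >= 0.8
print(reached_criterion([0.9, 0.7, 0.9, 0.7]))       # False: never two in a row
```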

  14. Which visual functions depend on intermediate visual regions? Insights from a case of developmental visual form agnosia.

    Science.gov (United States)

    Gilaie-Dotan, Sharon

    2016-03-01

    A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions: which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion as revealed through extensive fMRI and ERP investigations. While, expectedly, some of LG's visual functions are significantly impaired, some are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the eight-year testing period described here, LG trained on a perceptual learning paradigm that successfully improved some but not all of his visual functions. Drawing on LG's visual performance and additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others and can develop normally in spite of abnormal mid-level visual areas, and are therefore probably less dependent on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  16. Preprocessing of emotional visual information in the human piriform cortex.

    Science.gov (United States)

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study fills the gap relating to the question whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  17. Sex differences in visual attention to erotic and non-erotic stimuli.

    Science.gov (United States)

    Lykins, Amy D; Meana, Marta; Strauss, Gregory P

    2008-04-01

    It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by men and women. Men looked at opposite-sex figures significantly longer than did women, and women looked at same-sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite-sex figures as compared to same-sex figures, whereas women appeared to disperse their attention evenly between opposite- and same-sex figures. These differences, however, were not limited to erotic images but were evident in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.

  18. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    Science.gov (United States)

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
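    The frequency-tagging logic described above (each stimulus stream modulated at its own frequency so the streams can be separated in the frequency domain) can be illustrated with a toy signal. This is a hedged sketch, not the authors' EEG analysis pipeline: a synthetic trace mixes two tagged sinusoids, and a single-bin DFT recovers each component's amplitude.

```python
import math

def amplitude_at(signal, freq, fs):
    """Amplitude of `signal` at `freq` Hz via a single DFT bin (fs = sampling rate)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

fs = 500.0                          # 500 Hz sampling rate (illustrative)
t = [i / fs for i in range(1000)]   # 2 s of data -> 0.5 Hz frequency resolution
sig = [1.0 * math.sin(2 * math.pi * 2.5 * ti)    # "task" stream tagged at 2.5 Hz
       + 0.4 * math.sin(2 * math.pi * 8.5 * ti)  # "visual distractor" at 8.5 Hz
       for ti in t]

print(round(amplitude_at(sig, 2.5, fs), 3))  # 1.0
print(round(amplitude_at(sig, 8.5, fs), 3))  # 0.4
```

    Because both tag frequencies fall on exact DFT bins over the 2-second window, each readout is uncontaminated by the other stream; in an experiment like the one above, load-induced distractor filtering would appear as a change in the amplitude measured at the distractor's tag frequency.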

  19. Medial temporal lobe damage impairs representation of simple stimuli

    Directory of Open Access Journals (Sweden)

    David E Warren

    2010-05-01

    Medial temporal lobe damage in humans is typically thought to produce a circumscribed impairment in the acquisition of new enduring memories, but recent reports have documented deficits even in short-term maintenance. We examined possible maintenance deficits in a population of medial temporal lobe amnesics, with the goal of characterizing their impairments as either representational drift or outright loss of representation over time. Patients and healthy comparisons performed a visual search task in which the similarity of various lures to a target was varied parametrically. Stimuli were simple shapes varying along one of several visual dimensions. The task was performed in two conditions, one presenting a sample target simultaneously with the search array and the other imposing a delay between sample and array. Eye-movement data collected during search revealed that the duration of fixations to items varied with lure-target similarity for all participants, i.e., fixations were longer for items more similar to the target. In the simultaneous condition, patients and comparisons exhibited an equivalent effect of similarity on fixation durations. However, imposing a delay modulated the effect differently for the two groups: in comparisons, fixation duration to similar items was exaggerated; in patients, the original effect was diminished. These findings indicate that medial temporal lobe lesions subtly impair short-term maintenance of even simple stimuli, with performance reflecting not the complete loss of the maintained representation but rather a degradation or progressive drift of the representation over time.

  20. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory