WorldWideScience

Sample records for external visual stimuli

  1. External-stimuli responsive systems for cancer theranostic

    Directory of Open Access Journals (Sweden)

    Jianhui Yao

    2016-10-01

    The upsurge of novel nanomaterials and nanotechnologies has inspired researchers striving to design safer and more efficient drug delivery systems for cancer therapy. Stimuli-responsive nanomaterials offer an alternative for designing controllable drug delivery systems on account of their spatiotemporally controllable properties. Additionally, external stimuli (light, magnetic field and ultrasound) can be developed into theranostic applications for personalized medicine because of their unique characteristics. In this review, we give a brief overview of the significant progress and challenges of certain external-stimuli responsive systems that have been extensively investigated for drug delivery and theranostics within the last few years.

  2. Effect of Size Change and Brightness Change of Visual Stimuli on Loudness Perception and Pitch Perception of Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Syouya Tanabe

    2011-10-01

    People obtain a great deal of information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, many studies have examined phonological perception and sound localization. This study examined the effect of visual stimuli on the perception of loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to make four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged changes in the loudness or pitch of the auditory stimuli while also judging the direction of the size change or the kind of figure presented, so that they could not ignore the visual stimuli while judging the auditory stimuli. As a result, perception of loudness and pitch was facilitated significantly around the difference limen when the image was getting bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.

  3. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, ie, the so-called “filled-duration illusion.” In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2∼5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  4. Instructed fear stimuli bias visual attention

    NARCIS (Netherlands)

    Deltomme, Berre; Mertens, G.; Tibboel, Helen; Braem, Senne

    We investigated whether stimuli merely instructed to be fear-relevant can bias visual attention, even when the fear relation was never experienced before. Participants performed a dot-probe task with pictures of naturally fear-relevant (snake or spider) or -irrelevant (bird or butterfly) stimuli.

  5. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies have examined the differences in ERP components and system efficiency caused by shifts in visual and auditory onset. Here, we aimed to study the effect of temporal congruity of auditory and visual stimulus onset on a bimodal brain-computer interface (BCI) speller. We designed five combined visual and auditory paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned according to visual and auditory stimulus onset, respectively. ERPs at the Fz, Cz, and PO7 channels were studied through statistical analysis of the different conditions for both visual-aligned and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to assess the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on visual stimulus onset. Auditory stimulus onset mainly influenced early component accuracies, whereas visual stimulus onset determined later component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine how to optimize bimodal BCI applications.

  6. Perceived duration of visual and tactile stimuli depends on perceived speed

    Directory of Open Access Journals (Sweden)

    Alice eTomassini

    2011-09-01

    It is known that the perceived duration of visual stimuli is strongly influenced by speed: faster moving stimuli appear to last longer. To test whether this is a general property of sensory systems, we asked participants to reproduce the duration of visual and tactile gratings, and of visuo-tactile gratings, moving at a variable speed (3.5–15 cm/s) for three different durations (400, 600 and 800 ms). For both modalities, the apparent duration of the stimulus increased strongly with stimulus speed, more so for tactile than for visual stimuli. In addition, visual stimuli were perceived to last approximately 200 ms longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, as the Bayesian account predicts, but the bimodal precision of the reproduction did not show the theoretical improvement. A cross-modal speed-matching task revealed that visual stimuli were perceived to move faster than tactile stimuli. To test whether the large difference in the perceived duration of visual and tactile stimuli resulted from the difference in their perceived speed, we repeated the time reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate, the difference in apparent duration. These results show that for both vision and touch, perceived duration depends on speed, pointing to common strategies of time perception.
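
    The "Bayesian account" invoked above is the standard maximum-likelihood cue-combination rule. As a sketch (the record does not give the authors' exact model), with D̂_v and D̂_t the unimodal duration estimates and σ_v², σ_t² their variances, the predictions are:

```latex
\hat{D}_{vt} = w_v \hat{D}_v + w_t \hat{D}_t, \qquad
w_v = \frac{\sigma_t^{2}}{\sigma_v^{2} + \sigma_t^{2}}, \quad
w_t = \frac{\sigma_v^{2}}{\sigma_v^{2} + \sigma_t^{2}}, \qquad
\sigma_{vt}^{2} = \frac{\sigma_v^{2}\sigma_t^{2}}{\sigma_v^{2} + \sigma_t^{2}} \le \min(\sigma_v^{2}, \sigma_t^{2})
```

    The last inequality is the theoretical improvement in bimodal precision that the reproduction data did not show.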

  7. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. We also investigated whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can do so without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by samples in the speakers' and participants' shared native language. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  8. Endogenous sequential cortical activity evoked by visual stimuli.

    Science.gov (United States)

    Carrillo-Reid, Luis; Miller, Jae-Eun Kang; Hamm, Jordan P; Jackson, Jesse; Yuste, Rafael

    2015-06-10

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. Copyright © 2015 Carrillo-Reid et al.

  9. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    Science.gov (United States)

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were efficacious in terms of increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the visual unimodal modality both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in the cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  10. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  11. Haptic and Audio-visual Stimuli: Enhancing Experiences and Interaction

    NARCIS (Netherlands)

    Nijholt, Antinus; Dijk, Esko O.; Lemmens, Paul M.C.; Luitjens, S.B.

    2010-01-01

    The intention of the symposium on Haptic and Audio-visual stimuli at the EuroHaptics 2010 conference is to deepen the understanding of the effect of combined Haptic and Audio-visual stimuli. The knowledge gained will be used to enhance experiences and interactions in daily life. To this end, a

  12. Visual and auditory stimuli associated with swallowing. An fMRI study

    International Nuclear Information System (INIS)

    Kawai, Takeshi; Watanabe, Yutaka; Tonogi, Morio; Yamane, Gen-yuki; Abe, Shinichi; Yamada, Yoshiaki; Callan, Akiko

    2009-01-01

    We focused on brain areas activated by audiovisual stimuli related to swallowing motions. In this study, three kinds of stimuli related to human swallowing movement (auditory stimuli alone, visual stimuli alone, or audiovisual stimuli) were presented to the subjects, and activated brain areas were measured using functional MRI (fMRI) and analyzed. When auditory stimuli alone were presented, the supplementary motor area was activated. When visual stimuli alone were presented, the premotor and primary motor areas of the left and right hemispheres and prefrontal area of the left hemisphere were activated. When audiovisual stimuli were presented, the prefrontal and premotor areas of the left and right hemispheres were activated. Activation of Broca's area, which would have been characteristic of mirror neuron system activation on presentation of motion images, was not observed; however, activation of brain areas related to swallowing motion programming and performance was verified for auditory, visual and audiovisual stimuli related to swallowing motion. These results suggest that audiovisual stimuli related to swallowing motion could be applied to the treatment of patients with dysphagia. (author)

  13. Positive mood broadens visual attention to positive stimuli.

    Science.gov (United States)

    Wadlinger, Heather A; Isaacowitz, Derek M

    2006-03-01

    In an attempt to investigate the impact of positive emotions on visual attention within the context of Fredrickson's (1998) broaden-and-build model, eye tracking was used in two studies to measure visual attentional preferences of college students (n=58, n=26) to emotional pictures. Half of each sample experienced induced positive mood immediately before viewing slides of three similarly-valenced images, in varying central-peripheral arrays. Attentional breadth was determined by measuring the percentage viewing time to peripheral images as well as by the number of visual saccades participants made per slide. Consistent with Fredrickson's theory, the first study showed that individuals induced into positive mood fixated more on peripheral stimuli than did control participants; however, this only held true for highly-valenced positive stimuli. Participants under induced positive mood also made more frequent saccades for slides of neutral and positive valence. A second study showed that these effects were not simply due to differences in emotional arousal between stimuli. Selective attentional broadening to positive stimuli may act both to facilitate later building of resources as well as to maintain current positive affective states.
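
    The two attentional-breadth measures described above (percentage of viewing time on peripheral images and saccades per slide) reduce to simple bookkeeping over fixation records. The sketch below assumes a hypothetical data format in which each fixation has already been assigned to a central or peripheral region of interest; it is illustrative, not the authors' analysis code.

```python
# Minimal sketch (not the authors' code) of the two attentional-breadth
# measures: % viewing time on peripheral images and saccades per slide.
from dataclasses import dataclass

@dataclass
class Fixation:
    region: str        # "central" or "peripheral" (hypothetical labels)
    duration_ms: float

def attentional_breadth(fixations, saccade_count):
    """Return % viewing time on peripheral images and saccades for one slide."""
    total = sum(f.duration_ms for f in fixations)
    peripheral = sum(f.duration_ms for f in fixations if f.region == "peripheral")
    pct_peripheral = 100.0 * peripheral / total if total > 0 else 0.0
    return pct_peripheral, saccade_count

# Example: one slide with three fixations and four saccades (made-up numbers)
slide = [Fixation("central", 800), Fixation("peripheral", 400), Fixation("peripheral", 300)]
pct, n_sacc = attentional_breadth(slide, saccade_count=4)
print(f"{pct:.1f}% peripheral viewing time, {n_sacc} saccades")
```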

  14. Multisensory stimuli improve relative localisation judgments compared to unisensory auditory or visual stimuli

    OpenAIRE

    Bizley, Jennifer; Wood, Katherine; Freeman, Laura

    2018-01-01

    Observers performed a relative localisation task in which they reported whether the second of two sequentially presented signals occurred to the left or right of the first. Stimuli were detectability-matched auditory, visual, or auditory-visual signals and the goal was to compare changes in performance with eccentricity across modalities. Visual performance was superior to auditory at the midline, but inferior in the periphery, while auditory-visual performance exceeded both at all locations....

  15. Afferent activity to necklace glomeruli is dependent on external stimuli

    Directory of Open Access Journals (Sweden)

    Munger Steven D

    2009-03-01

    Background: The main olfactory epithelium (MOE) is a complex organ containing several functionally distinct subpopulations of sensory neurons. One such subpopulation is distinguished by its expression of the guanylyl cyclase GC-D. The axons of GC-D-expressing (GC-D+) neurons innervate 9–15 "necklace" glomeruli encircling the caudal main olfactory bulb (MOB). Chemosensory stimuli for GC-D+ neurons include two natriuretic peptides, uroguanylin and guanylin, and CO2. However, the biologically relevant source of these chemostimuli is unclear: uroguanylin is both excreted in urine, a rich source of olfactory stimuli for rodents, and expressed in human nasal epithelium; CO2 is present in both inspired and expired air. Findings: To determine whether the principal source of chemostimuli for GC-D+ neurons is external or internal to the nose, we assessed the consequences of removing external chemostimuli on afferent activity to the necklace glomeruli. To do so, we performed unilateral naris occlusions in Gucy2d-Mapt-lacZ +/- mice [which express a β-galactosidase (β-gal) reporter specifically in GC-D+ neurons] followed by immunohistochemistry for β-gal and a glomerular marker of afferent activity, tyrosine hydroxylase (TH). We observed a dramatic decrease in TH immunostaining, consistent with reduced or absent afferent activity, in both necklace and non-necklace glomeruli ipsilateral to the occluded naris. Conclusion: Like other MOB glomeruli, necklace glomeruli exhibit a large decrease in afferent activity upon removal of external stimuli. Thus, we conclude that activity in GC-D+ neurons, which specifically innervate necklace glomeruli, is not dependent on internal stimuli. Instead, GC-D+ neurons, like other OSNs in the MOE, primarily sense the external world.

  16. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo eRomei

    2011-09-01

    There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.

  17. Effects of hypnagogic imagery on the event-related potential to external tone stimuli.

    Science.gov (United States)

    Michida, Nanae; Hayashi, Mitsuo; Hori, Tadao

    2005-07-01

    The purpose of this study was to examine the influence of hypnagogic imagery on the information processing of external tone stimuli during the sleep onset period with the use of event-related potentials. Event-related potentials to tone stimuli were compared between conditions with and without the experience of hypnagogic imagery. To control the arousal level at which the tone was presented, electroencephalogram (EEG) stages were used as the criterion. Stimuli were presented at EEG stage 4, which is characterized by the appearance of a vertex sharp wave. Data were collected in the sleep laboratory at Hiroshima University. Eleven healthy university and graduate school students participated in the study. Experiments were performed at night. Reaction times to tone stimuli were measured, and only trials with reaction times shorter than 5000 milliseconds were analyzed. Electroencephalograms were recorded from Fz, Cz, Pz, Oz, T5 and T6. There were no differences in reaction times or EEG spectra between the conditions with and without hypnagogic imagery. These results indicated that arousal levels did not differ between the two conditions. On the other hand, the N550 amplitude of the event-related potentials in the imagery condition was lower than in the no-imagery condition. The decrease in the N550 amplitude in the imagery condition shows that experiences of hypnagogic imagery exert some influence on the processing of external tone stimuli. It is possible that the processing of hypnagogic imagery interferes with the processing of external stimuli, lowering the sensitivity to external stimuli.

  18. Auditory-visual aversive stimuli modulate the conscious experience of fear.

    Science.gov (United States)

    Taffou, Marine; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2013-01-01

    In a natural environment, affective information is perceived via multiple senses, mostly audition and vision. However, the impact of multisensory information on affect remains relatively undiscovered. In this study, we investigated whether the auditory-visual presentation of aversive stimuli influences the experience of fear. We used the advantages of virtual reality to manipulate multisensory presentation and to display potentially fearful dog stimuli embedded in a natural context. We manipulated the affective reactions evoked by the dog stimuli by recruiting two groups of participants: dog-fearful and non-fearful participants. The sensitivity to dog fear was assessed psychometrically by a questionnaire and also at behavioral and subjective levels using a Behavioral Avoidance Test (BAT). Participants navigated in virtual environments, in which they encountered virtual dog stimuli presented through the auditory channel, the visual channel or both. They were asked to report their fear using Subjective Units of Distress. We compared the fear for unimodal (visual or auditory) and bimodal (auditory-visual) dog stimuli. Dog-fearful participants as well as non-fearful participants reported more fear in response to bimodal audiovisual compared to unimodal presentation of dog stimuli. These results suggest that fear is more intense when the affective information is processed via multiple sensory pathways, which might be due to a cross-modal potentiation. Our findings have implications for the field of virtual reality-based therapy of phobias. Therapies could be refined and improved by implicating and manipulating the multisensory presentation of the feared situations.

  19. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
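
    Perception sensitivity (d') in studies of this kind is computed from hit and false-alarm rates using the standard signal detection formula d' = z(H) - z(FA). A minimal sketch with made-up counts and a log-linear correction for extreme rates:

```python
# Sketch of the standard signal-detection computation of perceptual
# sensitivity (d'); the trial counts below are illustrative only.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical condition: spatiotemporally congruent audiovisual trials
print(round(d_prime(hits=78, misses=22, false_alarms=15, correct_rejections=85), 2))
```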

  20. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    Science.gov (United States)

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  1. Heightened attentional capture by visual food stimuli in anorexia nervosa.

    Science.gov (United States)

    Neimeijer, Renate A M; Roefs, Anne; de Jong, Peter J

    2017-08-01

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent generation of craving, which might enable AN patients to maintain their strict diet. Participants were 66 restrictive AN spectrum patients and 55 healthy controls. A single-target rapid serial visual presentation task was used with food and disorder-neutral cues as critical distracter stimuli and disorder-neutral pictures as target stimuli. AN spectrum patients showed diminished task performance when visual food cues were presented in close temporal proximity of the to-be-identified target. In contrast to our hypothesis, results indicate that food cues automatically capture AN spectrum patients' attention. One explanation could be that the enhanced attentional capture of food cues in AN is driven by the relatively high threat value of food items in AN. Implications and suggestions for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Brain response to visual sexual stimuli in homosexual pedophiles.

    Science.gov (United States)

    Schiffer, Boris; Krueger, Tillmann; Paul, Thomas; de Greiff, Armin; Forsting, Michael; Leygraf, Norbert; Schedlowski, Manfred; Gizewski, Elke

    2008-01-01

    The neurobiological mechanisms of deviant sexual preferences such as pedophilia are largely unknown. The objective of this study was to analyze whether brain activation patterns of homosexual pedophiles differed from those of a nonpedophile homosexual control group during visual sexual stimulation. A consecutive sample of 11 pedophile forensic inpatients exclusively attracted to boys and 12 age-matched homosexual control participants from a comparable socioeconomic stratum underwent functional magnetic resonance imaging during a visual sexual stimulation procedure that used sexually stimulating and emotionally neutral photographs. Sexual arousal was assessed according to a subjective rating scale. In contrast to sexually neutral pictures, in both groups sexually arousing pictures having both homosexual and pedophile content activated brain areas known to be involved in processing visual stimuli containing emotional content, including the occipitotemporal and prefrontal cortices. However, during presentation of the respective sexual stimuli, the thalamus, globus pallidus and striatum, which correspond to the key areas of the brain involved in sexual arousal and behaviour, showed significant activation in pedophiles, but not in control subjects. Central processing of visual sexual stimuli in homosexual pedophiles seems to be comparable to that in nonpedophile control subjects. However, compared with homosexual control subjects, activation patterns in pedophiles refer more strongly to subcortical regions, which have previously been discussed in the context of processing reward signals and also play an important role in addictive and stimulus-controlled behaviour. Thus future studies should further elucidate the specificity of these brain regions for the processing of sexual stimuli in pedophilia and should address the generally weaker activation pattern in homosexual men.

  3. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  4. External stimuli response on a novel chitosan hydrogel crosslinked ...

    Indian Academy of Sciences (India)

    The influence of external stimuli such as pH, temperature, and ionic strength of the swelling media on equilibrium swelling properties has been observed. Hydrogels showed a typical pH and temperature responsive behaviour such as low pH and high temperature has maximum swelling while high pH and low temperature ...

  5. United we sense, divided we fail: context-driven perception of ambiguous visual stimuli.

    NARCIS (Netherlands)

    Klink, P.C.; van Wezel, R.J.A.; van Ee, R.

    2012-01-01

    Ambiguous visual stimuli provide the brain with sensory information that contains conflicting evidence for multiple mutually exclusive interpretations. Two distinct aspects of the phenomenological experience associated with viewing ambiguous visual stimuli are the apparent stability of perception

  7. Generating Stimuli for Neuroscience Using PsychoPy

    OpenAIRE

    Peirce, Jonathan W.

    2009-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scrip...
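
    As a flavor of the kind of script PsychoPy supports, the sketch below draws a drifting sinusoidal grating; the stimulus parameters and timing are arbitrary examples, not taken from the paper.

```python
# Minimal PsychoPy sketch: present a drifting sinusoidal grating for 2 s.
# All parameter values here are arbitrary illustrations.
from psychopy import visual, core

win = visual.Window(size=(800, 600), units='pix')
grating = visual.GratingStim(win, tex='sin', mask='gauss',
                             size=256, sf=0.02, ori=45)   # sf in cycles per pixel
clock = core.Clock()
while clock.getTime() < 2.0:
    grating.phase = clock.getTime()   # drift at 1 cycle per second
    grating.draw()
    win.flip()                        # present, synced to the screen refresh
win.close()
core.quit()
```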

  8. Heightened attentional capture by visual food stimuli in Anorexia Nervosa

    NARCIS (Netherlands)

    Neimeijer, Renate A.M.; Roefs, Anne; de Jong, Peter J.

    The present study was designed to test the hypothesis that anorexia nervosa (AN) patients are relatively insensitive to the attentional capture of visual food stimuli. Attentional avoidance of food might help AN patients to prevent more elaborate processing of food stimuli and the subsequent

  9. Gestalt perceptual organization of visual stimuli captures attention automatically: Electrophysiological evidence

    Directory of Open Access Journals (Sweden)

    Francesco Marini

    2016-08-01

    The visual system leverages organizational regularities of perceptual elements to create meaningful representations of the world. One clear example of this function, which has been formalized in the Gestalt psychology principles, is the perceptual grouping of simple visual elements (e.g., lines and arcs) into unitary objects (e.g., forms and shapes). The present study sought to characterize automatic attentional capture and related cognitive processing of Gestalt-like visual stimuli at the psychophysiological level by using event-related potentials (ERPs). We measured ERPs during a simple visual reaction time task with bilateral presentations of physically matched elements with or without a Gestalt organization. Results showed that Gestalt (vs. non-Gestalt) stimuli are characterized by a larger N2pc together with enhanced ERP amplitudes of non-lateralized components (N1, N2, P3) starting around 150 ms post-stimulus onset. Thus, we conclude that Gestalt stimuli capture attention automatically and entail characteristic psychophysiological signatures at both early and late processing stages.

  10. How stimuli presentation format affects visual attention and choice outcomes in choice experiments

    DEFF Research Database (Denmark)

    Orquin, Jacob Lund; Mueller Loose, Simone

    This study analyses visual attention and part-worth utilities in choice experiments across three different choice stimuli presentation formats. Visual attention and choice behaviour in discrete choice experiments are found to be strongly affected by stimuli presentation format. These results...

  11. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    Science.gov (United States)

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
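
    The SSEP measure used in studies of this kind is typically read out as the amplitude of the EEG spectrum at the frequency used to "tag" each stimulus. A minimal sketch with synthetic data; the tagging frequency, duration and sampling rate below are assumptions, not values from the study:

```python
# Sketch: read out a steady-state evoked potential amplitude by taking the
# FFT of an epoch and inspecting the bin at the stimulus tagging frequency.
import numpy as np

fs, f_tag, dur = 256.0, 7.5, 8.0                  # sampling rate (Hz), tag freq (Hz), seconds
t = np.arange(0, dur, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + np.random.randn(t.size)   # fake channel

spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / t.size    # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
ssep_amp = spectrum[np.argmin(np.abs(freqs - f_tag))]
print(f"SSEP amplitude at {f_tag} Hz: {ssep_amp:.2f} (true simulated value ~2.0)")
```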

  12. Gender differences in pre-attentive change detection for visual but not auditory stimuli.

    Science.gov (United States)

    Yang, Xiuxian; Yu, Yunmiao; Chen, Lu; Sun, Hailian; Qiao, Zhengxue; Qiu, Xiaohui; Zhang, Congpei; Wang, Lin; Zhu, Xiongzhao; He, Jincai; Zhao, Lun; Yang, Yanjie

    2016-01-01

    Despite ongoing debate about gender differences in pre-attention processes, little is known about gender effects on change detection for auditory and visual stimuli. We explored gender differences in change detection while processing duration information in auditory and visual modalities. We investigated pre-attentive processing of duration information using a deviant-standard reverse oddball paradigm (50 ms/150 ms) for auditory and visual mismatch negativity (aMMN and vMMN) in males and females (n=21/group). In the auditory modality, decrement and increment aMMN were observed at 150-250 ms after the stimulus onset, and there was no significant gender effect on MMN amplitudes in temporal or fronto-central areas. In contrast, in the visual modality, only increment vMMN was observed at 180-260 ms after the onset of stimulus, and it was higher in males than in females. No gender effect was found in change detection for auditory stimuli, but change detection was facilitated for visual stimuli in males. Gender effects should be considered in clinical studies of pre-attention for visual stimuli. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
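
    The mismatch negativity itself is conventionally computed as a deviant-minus-standard difference wave, quantified as the mean amplitude in a post-stimulus window. A minimal sketch with synthetic epochs, using the 150-250 ms window reported above for the auditory MMN (the sampling rate and trial counts are assumptions):

```python
# Sketch of the usual MMN computation on synthetic epoch arrays:
# average deviants and standards, subtract, average over a time window.
import numpy as np

fs = 500.0                          # Hz (assumed)
t = np.arange(-0.1, 0.5, 1.0 / fs)  # epoch from -100 to 500 ms
rng = np.random.default_rng(0)
standard_epochs = rng.normal(0, 1, (200, t.size))   # trials x samples
deviant_epochs = rng.normal(0, 1, (40, t.size))
deviant_epochs[:, (t > 0.15) & (t < 0.25)] -= 2.0   # inject a fake negativity

difference_wave = deviant_epochs.mean(0) - standard_epochs.mean(0)
window = (t >= 0.15) & (t <= 0.25)
mmn_amplitude = difference_wave[window].mean()
print(f"MMN mean amplitude, 150-250 ms: {mmn_amplitude:.2f} (arbitrary units)")
```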

  13. Brain reactivity to visual food stimuli after moderate-intensity exercise in children.

    Science.gov (United States)

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; Larson, Michael J; Keller, Kathleen L; Fearnbach, S Nicole; Evans, Alyssa; LeCheminant, James D

    2017-09-19

    Exercise may play a role in moderating eating behaviors. The purpose of this study was to examine the effect of an acute bout of exercise on neural responses to visual food stimuli in children ages 8-11 years. We hypothesized that acute exercise would result in reduced activity in reward areas of the brain. Using a randomized cross-over design, 26 healthy weight children completed two separate laboratory conditions (exercise; sedentary). During the exercise condition, each participant completed a 30-min bout of exercise at moderate intensity (~ 67% HR maximum) on a motor-driven treadmill. During the sedentary session, participants sat continuously for 30 min. Neural responses to high- and low-calorie pictures of food were determined immediately following each condition using functional magnetic resonance imaging. There was a significant exercise condition by stimulus-type (high- vs. low-calorie pictures) interaction in the left hippocampus and right medial temporal lobe. Children respond to visual food stimuli differently following an acute bout of exercise compared to a non-exercise sedentary session in 8-11-year-old children. Specifically, an acute bout of exercise results in greater activation to high-calorie and reduced activation to low-calorie pictures of food in both the left hippocampus and right medial temporal lobe. This study shows that the response to external food cues can be altered by exercise, and understanding this mechanism will inform the development of future interventions aimed at altering energy intake in children.

  14. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    BACKGROUND: It is well-known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition when arbitrary associations between visual and auditory stimuli must be learned. Previous studies tend to consider it an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn the arbitrary associations between novel visual and auditory stimuli more effectively. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods, which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" learning one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned the arbitrary associations between novel visual and auditory stimuli more efficiently when the visual stimuli were explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  15. External noise distinguishes attention mechanisms.

    Science.gov (United States)

    Lu, Z L; Dosher, B A

    1998-05-01

    We developed and tested a powerful method for identifying and characterizing the effect of attention on performance in visual tasks as due to signal enhancement, distractor exclusion, or internal noise suppression. Based on a noisy Perceptual Template Model (PTM) of a human observer, the method adds increasing amounts of external noise (white gaussian random noise) to the visual stimulus and observes the effect on performance of a perceptual task for attended and unattended stimuli. The three mechanisms of attention yield three "signature" patterns of performance. The general framework for characterizing the mechanisms of attention is used here to investigate the attentional mechanisms in a concurrent location-cued orientation discrimination task. Test stimuli--Gabor patches tilted slightly to the right or left--always appeared on both the left and the right of fixation, and varied independently. Observers were cued on each trial to attend to the left, the right, or evenly to both stimuli, and decide the direction of tilt of both test stimuli. For eight levels of added external noise and three attention conditions (attended, unattended, and equal), subjects' contrast threshold levels were determined. At low levels of external noise, attention affected threshold contrast: threshold contrasts for non-attended stimuli were systematically higher than for equal attention stimuli, which were, in turn, higher than for attended stimuli. Specifically, when the rms contrast of the external noise is below 10%, there is a consistent 17% elevation of contrast threshold from attended to unattended condition across all three subjects. For higher levels of external noise, attention conditions did not affect threshold contrast values at all. These strong results are characteristic of a signal enhancement, or equivalently, an internal additive noise reduction mechanism of attention.
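
    The logic of the external-noise method can be made concrete with a toy observer model. The sketch below deliberately simplifies the Perceptual Template Model (it omits multiplicative noise and the transducer nonlinearity) and uses made-up numbers, but it reproduces the signature pattern reported here: attention effects on contrast thresholds at low external noise that vanish once external noise dominates.

```python
# Deliberately simplified observer (not the full PTM of Lu & Dosher): assume
# d' = g*c / sqrt(N_ext**2 + N_add**2), so the contrast threshold at a fixed
# d' level is c_th = d_target * sqrt(N_ext**2 + N_add**2) / g.  Reducing the
# internal additive noise N_add (equivalent, in the PTM, to enhancing the
# stimulus, signal and external noise alike) changes thresholds mainly where
# external noise is low.  All numbers below are illustrative.
import numpy as np

def threshold(n_ext, n_add, gain=1.0, d_target=1.0):
    return d_target * np.sqrt(n_ext**2 + n_add**2) / gain

n_ext = np.array([0.0, 0.02, 0.04, 0.08, 0.16, 0.33])   # rms external noise contrast
unattended = threshold(n_ext, n_add=0.06)
attended = threshold(n_ext, n_add=0.05)                  # attention lowers N_add
for ne, u, a in zip(n_ext, unattended, attended):
    print(f"N_ext={ne:.2f}  unattended threshold={u:.3f}  attended threshold={a:.3f}")
# Thresholds differ at low external noise but converge once external noise
# dominates, matching the attended vs. unattended pattern described above.
```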

  16. Fusion and rivalry are dependent on the perceptual meaning of visual stimuli.

    Science.gov (United States)

    Andrews, Timothy J; Lotto, R Beau

    2004-03-09

    We view the world with two eyes and yet are typically only aware of a single, coherent image. Arguably the simplest explanation for this is that the visual system unites the two monocular stimuli into a common stream that eventually leads to a single coherent sensation. However, this notion is inconsistent with the well-known phenomenon of rivalry; when physically different stimuli project to the same retinal location, the ensuing perception alternates between the two monocular views in space and time. Although fundamental for understanding the principles of binocular vision and visual awareness, the mechanisms underlying binocular rivalry remain controversial. Specifically, there is uncertainty about what determines whether monocular images undergo fusion or rivalry. By taking advantage of the perceptual phenomenon of color contrast, we show that physically identical monocular stimuli tend to rival, not fuse, when they signify different objects at the same location in visual space. Conversely, when physically different monocular stimuli are likely to represent the same object at the same location in space, fusion is more likely to result. The data suggest that what competes for visual awareness in the two eyes is not the physical similarity between images but the similarity in their perceptual/empirical meaning.

  17. Effects of inter- and intramodal selective attention to non-spatial visual stimuli: An event-related potential analysis.

    NARCIS (Netherlands)

    de Ruiter, M.B.; Kok, A.; van der Schoot, M.

    1998-01-01

    Event-related potentials (ERPs) were recorded to trains of rapidly presented auditory and visual stimuli. ERPs in conditions in which Ss attended to different features of visual stimuli were compared with ERPs to the same type of stimuli when Ss attended to different features of auditory stimuli,

  18. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  19. Brain activation by visual erotic stimuli in healthy middle aged males.

    Science.gov (United States)

    Kim, S W; Sohn, D W; Cho, Y-H; Yang, W S; Lee, K-U; Juh, R; Ahn, K-J; Chung, Y-A; Han, S-I; Lee, K H; Lee, C U; Chae, J-H

    2006-01-01

    The objective of the present study was to identify brain centers whose activity changes are related to erotic visual stimuli in healthy, heterosexual, middle-aged males. Ten heterosexual, right-handed males with normal sexual function were entered into the present study (mean age 52 years, range 46-55). All potential subjects were screened in a 1-h interview and were encouraged to fill out questionnaires including the Brief Male Sexual Function Inventory. All subjects with a history of sexual arousal disorder or erectile dysfunction were excluded. We performed functional brain magnetic resonance imaging (fMRI) in male volunteers while an alternately combined erotic and nonerotic film was played for 14 min and 9 s. The major areas of activation associated with sexual arousal to visual stimuli were the occipitotemporal area, anterior cingulate gyrus, insula, orbitofrontal cortex, and caudate nucleus. However, the hypothalamus and thalamus were not activated. We suggest that the nonactivation of the hypothalamus and thalamus in middle-aged males may be responsible for the lesser physiological arousal in response to the erotic visual stimuli.

  20. Dynamic Stimuli And Active Processing In Human Visual Perception

    Science.gov (United States)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted with newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  1. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remains unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  2. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity

    Directory of Open Access Journals (Sweden)

    Mark eLaing

    2015-10-01

    Full Text Available The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only or auditory-visual (AV) trials in the scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for auditory-visual integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.

  3. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    Science.gov (United States)

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed out as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimulus discriminability and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% above the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
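
    To make the classification step concrete, here is a minimal sketch (not the authors' pipeline) of how single-trial ERP epochs are often scored in P300 spellers: each epoch is flattened into a feature vector and classified with a shrinkage-regularized linear discriminant. The array shapes and labels are invented.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Invented epochs: 600 trials x 8 channels x 100 time samples,
      # label 1 = target (attended symbol flashed), 0 = non-target.
      rng = np.random.default_rng(0)
      epochs = rng.normal(size=(600, 8, 100))
      labels = rng.integers(0, 2, size=600)

      X = epochs.reshape(len(epochs), -1)            # flatten channels x time
      clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
      clf.fit(X[:480], labels[:480])                 # train on the first 480 epochs
      print("held-out single-epoch accuracy:", clf.score(X[480:], labels[480:]))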

  4. Amplitude-modulated stimuli reveal auditory-visual interactions in brain activity and brain connectivity.

    Science.gov (United States)

    Laing, Mark; Rees, Adrian; Vuong, Quoc C

    2015-01-01

    The temporal congruence between auditory and visual signals coming from the same source can be a powerful means by which the brain integrates information from different senses. To investigate how the brain uses temporal information to integrate auditory and visual information from continuous yet unfamiliar stimuli, we used amplitude-modulated tones and size-modulated shapes with which we could manipulate the temporal congruence between the sensory signals. These signals were independently modulated at a slow or a fast rate. Participants were presented with auditory-only, visual-only, or auditory-visual (AV) trials in the fMRI scanner. On AV trials, the auditory and visual signal could have the same (AV congruent) or different modulation rates (AV incongruent). Using psychophysiological interaction analyses, we found that auditory regions showed increased functional connectivity predominantly with frontal regions for AV incongruent relative to AV congruent stimuli. We further found that superior temporal regions, shown previously to integrate auditory and visual signals, showed increased connectivity with frontal and parietal regions for the same contrast. Our findings provide evidence that both activity in a network of brain regions and their connectivity are important for AV integration, and help to bridge the gap between transient and familiar AV stimuli used in previous studies.
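
    The stimuli described here can be sketched in a few lines of code; the carrier frequency, modulation rates and shape sizes below are illustrative rather than the values used in the study, but the logic is the same: an amplitude envelope for the tone and a size time course for the shape, which share a modulation rate on AV congruent trials.

      import numpy as np

      fs, dur = 44100, 2.0                     # audio sample rate (Hz) and trial length (s), assumed
      t = np.arange(int(fs * dur)) / fs
      carrier_hz = 440.0                       # carrier tone frequency, illustrative
      slow_hz, fast_hz = 2.0, 7.0              # slow / fast modulation rates, illustrative

      def am_tone(mod_hz):
          # Amplitude-modulated tone: carrier scaled by a raised sinusoidal envelope.
          envelope = 0.5 * (1.0 + np.sin(2 * np.pi * mod_hz * t))
          return envelope * np.sin(2 * np.pi * carrier_hz * t)

      def size_timecourse(mod_hz, frame_hz=60):
          # Size of the visual shape (deg), one value per video frame, modulated at the chosen rate.
          tf = np.arange(int(frame_hz * dur)) / frame_hz
          return 2.0 + 1.0 * np.sin(2 * np.pi * mod_hz * tf)

      audio = am_tone(slow_hz)                 # AV congruent pair: both modulated at the slow rate
      sizes = size_timecourse(slow_hz)         # feed these values to the drawing loop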

  5. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  6. Precuneus-prefrontal activity during awareness of visual verbal stimuli

    DEFF Research Database (Denmark)

    Kjaer, T W; Nowak, M; Kjær, Klaus Wilbrandt

    2001-01-01

    Awareness is a personal experience, which is only accessible to the rest of the world through interpretation. We set out to identify a neural correlate of visual awareness, using brief subliminal and supraliminal verbal stimuli while measuring cerebral blood flow distribution with H₂¹⁵O PET. Awar...

  7. Generating Stimuli for Neuroscience Using PsychoPy.

    Science.gov (United States)

    Peirce, Jonathan W

    2008-01-01

    PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
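
    As an illustration of the kind of script the abstract describes, the following minimal PsychoPy sketch (not taken from the paper) opens a window and drifts a sinusoidal grating for about one second; the window size and grating parameters are arbitrary.

      from psychopy import visual, core

      win = visual.Window(size=(800, 600), color="grey", units="pix")
      grating = visual.GratingStim(win, tex="sin", mask="gauss",
                                   size=256, sf=0.02, phase=0.0)

      clock = core.Clock()
      while clock.getTime() < 1.0:    # draw for roughly one second
          grating.phase += 0.05       # advance the phase each frame -> drifting grating
          grating.draw()
          win.flip()                  # update the display on each refresh

      win.close()
      core.quit()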

  8. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    Science.gov (United States)

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  9. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    Full Text Available The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  10. Bingo! Externally-Supported Performance Intervention for Deficient Visual Search in Normal Aging, Parkinson’s Disease and Alzheimer’s Disease

    Science.gov (United States)

    Laudate, Thomas M.; Neargarder, Sandy; Dunne, Tracy E.; Sullivan, Karen D.; Joshi, Pallavi; Gilmore, Grover C.; Riedel, Tatiana M.; Cronin-Golomb, Alice

    2011-01-01

    External support may improve task performance regardless of an individual’s ability to compensate for cognitive deficits through internally-generated mechanisms. We investigated whether performance of a complex, familiar visual search task (the game of bingo) could be enhanced in groups with suboptimal vision by providing external support through manipulation of task stimuli. Participants were 19 younger adults, 14 individuals with probable Alzheimer’s disease (AD), 13 AD-matched healthy adults, 17 non-demented individuals with Parkinson’s disease (PD), and 20 PD-matched healthy adults. We varied stimulus contrast, size, and visual complexity during game play. The externally-supported performance interventions of increased stimulus size and decreased complexity resulted in improvements in performance by all groups. The AD group also obtained benefit from increased contrast, presumably by compensating for their contrast sensitivity deficit. The general finding of improved performance across healthy and afflicted groups suggests the value of visual support as an easy-to-apply intervention to enhance cognitive performance. PMID:22066941

  11. Switching between internally and externally focused attention in obsessive-compulsive disorder: Abnormal visual cortex activation and connectivity.

    Science.gov (United States)

    Stern, Emily R; Muratore, Alexandra F; Taylor, Stephan F; Abelson, James L; Hof, Patrick R; Goodman, Wayne K

    2017-07-30

    Obsessive-compulsive disorder (OCD) is characterized by excessive absorption with internally-generated distressing thoughts and urges, with difficulty incorporating external information running counter to their fears and concerns. In the present study, we experimentally probed this core feature of OCD through the use of a novel attention switching task that investigates transitions between internally focused (IF) and externally focused (EF) attentional states. Eighteen OCD patients and 18 controls imagined positive and negative personal event scenarios (IF state) or performed a color-word Stroop task (EF state). The IF/EF states were followed by a target detection (TD) task requiring responses to external stimuli. Compared to controls, OCD patients made significantly more errors and showed reduced activation of superior and inferior occipital cortex, thalamus, and putamen during TD following negative IF, with the inferior occipital hypoactivation being significantly greater for TD following negative IF compared to TD following the other conditions. Patients showed stronger functional connectivity between the inferior occipital region and dorsomedial prefrontal cortex. These findings point to an OCD-related impairment in the visual processing of external stimuli specifically when they follow a period of negative internal focus, and suggest that future treatments may wish to target the transition between attentional states. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Characterization of functional biopolymers under various external stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Maleki, Atoosa

    2008-07-01

    Polymers are large molecules composed of repeating structural units connected by covalent chemical bonds. Biopolymers are a class of polymers produced by living organisms, which exhibit both biocompatible and biodegradable properties. The behavior of a biopolymer in solution is strongly dependent on the chemical and physical structure of the polymer chain, as well as on external environmental conditions. To improve biopolymers in the direction of higher performance and better functionality, understanding their physicochemical behavior and their response to external stimuli is of great importance. Rheology, rheo-small angle light scattering, dynamic light scattering, small angle neutron scattering, and asymmetric flow field-flow fractionation were utilized in this thesis to investigate the properties of hydroxyethyl cellulose and its hydrophobically modified analogue, as well as dextran, hyaluronan, and mucin under different conditions such as temperature, solvent, mechanical stress and strain, and radiation. Several novel hydrogels were prepared by using various chemical cross-linking agents. Specific features of these macromolecules allow them to be used as 'functional' materials, e.g., in sensors, actuators, personal care products, enhanced oil recovery, and controlled drug delivery systems (author)

  13. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.
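
    As a reminder (not a detail taken from the paper), recognition performance in old/new tasks of this kind is commonly summarized as d', computed from hit and false-alarm rates; the counts below are invented and a log-linear correction is applied to avoid infinite z-scores.

      from scipy.stats import norm

      hits, misses = 42, 8                  # invented counts for "old" items
      false_alarms, correct_rej = 10, 40    # invented counts for "new" items

      hit_rate = (hits + 0.5) / (hits + misses + 1)
      fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rej + 1)

      d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
      print(f"d' = {d_prime:.2f}")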

  14. Generating stimuli for neuroscience using PsychoPy

    Directory of Open Access Journals (Sweden)

    Jonathan W Peirce

    2009-01-01

    Full Text Available PsychoPy is a software library written in Python, using OpenGL to generate very precise visual stimuli on standard personal computers. It is designed to allow the construction of as wide a variety of neuroscience experiments as possible, with the least effort. By writing scripts in standard Python syntax users can generate an enormous variety of visual and auditory stimuli and can interact with a wide range of external hardware (enabling its use in fMRI, EEG, MEG etc.). The structure of scripts is simple and intuitive. As a result, new experiments can be written very quickly, and trying to understand a previously written script is easy, even with minimal code comments. PsychoPy can also generate movies and image sequences to be used in demos or simulated neuroscience experiments. This paper describes the range of tools and stimuli that it provides and the environment in which experiments are conducted.
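
    Complementing the visual example given earlier, a hedged sketch of how an auditory stimulus might be produced with the same library follows; the tone frequency and duration are arbitrary, and the exact audio backend used depends on the local installation.

      from psychopy import sound, core

      tone = sound.Sound(value=440, secs=0.2)   # 440 Hz beep lasting 200 ms
      tone.play()
      core.wait(0.3)    # keep the script alive long enough for playback to finish
      core.quit()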

  15. The Lurking Snake in the Grass: Interference of Snake Stimuli in Visually Taxing Conditions

    Directory of Open Access Journals (Sweden)

    Sandra Cristina Soares

    2012-04-01

    Full Text Available Based on evolutionary considerations, it was hypothesized that humans have been shaped to easily spot snakes in visually cluttered scenes that might otherwise hide camouflaged snakes. This hypothesis was tested in a visual search experiment in which I assessed automatic attention capture by evolutionarily-relevant distractor stimuli (snakes), in comparison with another animal which is also feared but where this fear has a disputed evolutionary origin (spiders), and neutral stimuli (mushrooms). Sixty participants were engaged in a task that involved the detection of a target (a bird) among pictures of fruits. Unexpectedly, on some trials, a snake, a spider, or a mushroom replaced one of the fruits. The question of interest was whether the distracting stimuli slowed the reaction times for finding the target (the bird) to different degrees. Perceptual load of the task was manipulated by increments in the set size (6 or 12 items) on different trials. The findings showed that snake stimuli were processed preferentially, particularly under conditions where attentional resources were depleted, which reinforces the role of this evolutionarily-relevant stimulus in accessing the visual system (Isbell, 2009).
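
    A small pandas sketch (with simulated data) of the analysis logic implied by this design follows: mean reaction time to find the target is broken down by distractor type and set size, so that snake-specific interference appears as an extra cost at the larger set size. All numbers are invented.

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(1)
      distractor = np.repeat(["snake", "spider", "mushroom"], 80)
      set_size = np.tile(np.repeat([6, 12], 40), 3)
      rt_ms = 850 + 10 * (set_size == 12) + rng.normal(0, 60, size=240)
      rt_ms = rt_ms + 60 * ((distractor == "snake") & (set_size == 12))  # simulated snake cost

      trials = pd.DataFrame({"distractor": distractor, "set_size": set_size, "rt_ms": rt_ms})
      mean_rt = trials.groupby(["distractor", "set_size"])["rt_ms"].mean().unstack()
      print(mean_rt[12] - mean_rt[6])   # interference cost per distractor type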

  16. Comparisons of memory for nonverbal auditory and visual sequential stimuli.

    Science.gov (United States)

    McFarland, D J; Cacace, A T

    1995-01-01

    Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance of randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in auditory and visual modalities consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.

  17. Enhanced Recognition Memory in Grapheme-Colour Synaesthesia for Different Categories of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Jamie eWard

    2013-10-01

    Full Text Available Memory has been shown to be enhanced in grapheme-colour synaesthesia, and this enhancement extends to certain visual stimuli (that don’t induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g. free recall, recognition, associative learning), making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, nonwords, scenes, and fractals) and also check which memorisation strategies were used. We demonstrate that grapheme-colour synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing colour, orientation, or object presence). Again, grapheme-colour synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which colour can be used to discriminate old/new status.

  18. Visual arts training is linked to flexible attention to local and global levels of visual stimuli.

    Science.gov (United States)

    Chamberlain, Rebecca; Wagemans, Johan

    2015-10-01

    Observational drawing skill has been shown to be associated with the ability to focus on local visual details. It is unclear whether superior performance in local processing is indicative of the ability to attend to, and flexibly switch between, local and global levels of visual stimuli. It is also unknown whether these attentional enhancements remain specific to observational drawing skill or are a product of a wide range of artistic activities. The current study aimed to address these questions by testing if flexible visual processing predicts artistic group membership and observational drawing skill in a sample of first-year bachelor's degree art students (n=23) and non-art students (n=23). A pattern of local and global visual processing enhancements was found in relation to artistic group membership and drawing skill, with local processing ability found to be specifically related to individual differences in drawing skill. Enhanced global processing and more fluent switching between local and global levels of hierarchical stimuli predicted both drawing skill and artistic group membership, suggesting that these are beneficial attentional mechanisms for art-making in a range of domains. These findings support a top-down attentional model of artistic expertise and shed light on the domain specific and domain-general attentional enhancements induced by proficiency in the visual arts. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    Science.gov (United States)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  20. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Full Text Available Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  1. Physiological and behavioral reactions elicited by simulated and real-life visual and acoustic helicopter stimuli in dairy goats

    Science.gov (United States)

    2011-01-01

    Background Anecdotal reports and a few scientific publications suggest that flyovers of helicopters at low altitude may elicit fear- or anxiety-related behavioral reactions in grazing feral and farm animals. We investigated the behavioral and physiological stress reactions of five individually housed dairy goats to different acoustic and visual stimuli from helicopters and to combinations of these stimuli under controlled environmental (indoor) conditions. The visual stimuli were helicopter animations projected on a large screen in front of the enclosures of the goats. Acoustic and visual stimuli of a tractor were also presented. On the final day of the study the goats were exposed to two flyovers (altitude 50 m and 75 m) of a Chinook helicopter while grazing in a pasture. Salivary cortisol, behavior, and heart rate of the goats were registered before, during and after stimulus presentations. Results The goats reacted alertly to the visual and/or acoustic stimuli that were presented in their room. They raised their heads and turned their ears forward in the direction of the stimuli. There was no statistically reliable rise in the goats' average movement velocity within their enclosure and no increase in the time spent moving during presentation of the stimuli. Also there was no increase in heart rate or salivary cortisol concentration during the indoor test sessions. Surprisingly, no physiological or behavioral stress responses were observed during the flyover of a Chinook at 50 m, which produced a peak noise of 110 dB. Conclusions We conclude that the behavior and physiology of goats are unaffected by brief episodes of intense, adverse visual and acoustic stimulation such as the sight and noise of overflying helicopters. The absence of a physiological stress response and of elevated emotional reactivity of goats subjected to helicopter stimuli is discussed in relation to the design and testing schedule of this study. PMID:21496239

  2. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    Directory of Open Access Journals (Sweden)

    Ilona eCroy

    2013-09-01

    Full Text Available Disgust causes specific reaction patterns, observable in facial expressions and body reactions. Most research on disgust deals with visual stimuli. However, pictures may elicit a different disgust experience than sounds, odors or tactile stimuli. Therefore disgust experiences evoked through four different sensory channels were compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of evoked disgust as well as responses of the autonomic nervous system (heart rate, skin conductance level, systolic blood pressure) were recorded, and the effects of stimulus labeling and of repeated presentation were analyzed. Ratings suggested that disgust could be evoked through all senses; they were highest for visual stimuli. However, autonomic reaction towards disgusting stimuli differed according to the channel of presentation. In contrast to the other channels, olfactory disgust stimuli provoked a strong decrease of systolic blood pressure. Additionally, labeling enhanced disgust ratings and autonomic reaction for olfactory and tactile, but not for visual and auditory stimuli. Repeated presentation indicated that participants' disgust ratings diminished for all but olfactory disgust stimuli. Taken together we argue that the sensory channel through which a disgust reaction is evoked matters.

  3. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    Science.gov (United States)

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

    Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. Therefore, the focus was on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral valences) and visualization types (2D, 3D); however, main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated measures ANOVA. Post-hoc t-tests were performed under an ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions that are related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.
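
    A hedged sketch of the 3 × 2 repeated-measures ANOVA mentioned above, using statsmodels' AnovaRM on an invented long-format table, is given below; the column names, subject count and dependent variable are assumptions made for illustration.

      import numpy as np
      import pandas as pd
      from statsmodels.stats.anova import AnovaRM

      rng = np.random.default_rng(2)
      subjects = np.repeat(np.arange(1, 21), 6)                  # 20 subjects x 6 cells
      valence = np.tile(np.repeat(["pleasant", "neutral", "unpleasant"], 2), 20)
      view = np.tile(["2D", "3D"], 60)
      bold = rng.normal(loc=1.0, scale=0.3, size=120)            # invented ROI responses

      df = pd.DataFrame({"subject": subjects, "valence": valence, "view": view, "bold": bold})
      res = AnovaRM(df, depvar="bold", subject="subject", within=["valence", "view"]).fit()
      print(res)   # F tests for valence, view, and the valence x view interaction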

  4. Visual laterality in dolphins: importance of the familiarity of stimuli

    Science.gov (United States)

    2012-01-01

    Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are a migratory species, they are confronted with a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, the eyes were used indifferently to observe familiar objects of intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins which might be crucial for their wild life given their fission-fusion social system.

  5. Visual laterality in dolphins: importance of the familiarity of stimuli

    Directory of Open Access Journals (Sweden)

    Blois-Heulin Catherine

    2012-01-01

    Full Text Available Abstract Background Many studies of cerebral asymmetries in different species lead, on the one hand, to a better understanding of the functions of each cerebral hemisphere and, on the other hand, to the development of an evolutionary history of hemispheric laterality. Our animal model is particularly interesting because of its original evolutionary path, i.e. return to aquatic life after a terrestrial phase. The rare reports concerning visual laterality of marine mammals investigated mainly discrimination processes. As dolphins are a migratory species, they are confronted with a changing environment. Being able to categorize new versus familiar objects would allow dolphins a rapid adaptation to novel environments. Visual laterality could be a prerequisite to this adaptability. To date, no study, to our knowledge, has analyzed the environmental factors that could influence their visual laterality. Results We investigated visual laterality expressed spontaneously at the water surface by a group of five common bottlenose dolphins (Tursiops truncatus) in response to various stimuli. The stimuli presented ranged from very familiar objects (known and manipulated previously) to familiar objects (known but never manipulated) to unfamiliar objects (unknown, never seen previously). At the group level, dolphins used their left eye to observe very familiar objects and their right eye to observe unfamiliar objects. However, the eyes were used indifferently to observe familiar objects of intermediate valence. Conclusion Our results suggest different visual cerebral processes based either on the global shape of well-known objects or on local details of unknown objects. Moreover, the manipulation of an object appears necessary for these dolphins to construct a global representation of an object enabling its immediate categorization for subsequent use. Our experimental results pointed out some cognitive capacities of dolphins which might be crucial for their wild life given their

  6. Motivationally Significant Stimuli Show Visual Prior Entry: Evidence for Attentional Capture

    Science.gov (United States)

    West, Greg L.; Anderson, Adam A. K.; Pratt, Jay

    2009-01-01

    Previous studies that have found attentional capture effects for stimuli of motivational significance do not directly measure initial attentional deployment, leaving it unclear to what extent these items produce attentional capture. Visual prior entry, as measured by temporal order judgments (TOJs), rests on the premise that allocated attention…

  7. Retinal image quality and visual stimuli processing by simulation of partial eye cataract

    Science.gov (United States)

    Ozolinsh, Maris; Danilenko, Olga; Zavjalova, Varvara

    2016-10-01

    Visual stimuli were presented on a 4.3'' mobile phone screen inside a "Virtual Reality" adapter that allowed separation of the left-eye and right-eye visual fields. The contrast of the retinal image can thus be controlled by the image on the phone screen and, in parallel, at the appropriate geometry, by the AC voltage applied to a scattering PDLC cell inside the adapter. This separation of the optical pathways makes it possible to present spatially different images to the two eyes, which after binocular fusion acquire their characteristic appearance. As visual stimuli we used grey and colored (red-green, one of the two opponent components of vision in L*a*b* color space) spatially periodic patterns for the left and right eyes, with spatial content that, by addition or subtraction, resulted in clockwise or counterclockwise slanted Gabor gratings. We performed computer modeling in which the signals were numerically added or subtracted, analogous to processing in the brain via decomposition of the stimulus input into luminance and color-opponency components. The modeling revealed that the psychophysical equilibrium point between clockwise and counterclockwise perception of the summed pattern depends on the contrast and color saturation of one eye's image and on the strength of retinal aftereffects. A psychophysical equilibrium point in the perception of the summed pattern exists only after prior adaptation to a slanted periodic grating, and only for an appropriate slant orientation of the adaptation grating and/or an appropriate spatial phase of the grating pattern relative to the grating nodes. Perception experiments in which one eye's image was degraded by a simulated cataract confirmed that this psychophysical equilibrium point shifts with the degree of artificial cataract. We also analyzed the emission spectra of the mobile-device stimuli, paying attention to the spectral regions around the absorption maxima of the macular pigments and to the blue region, where intense irradiation can cause abnormalities in periodic melatonin

  8. Visual sensitivity for luminance and chromatic stimuli during the execution of smooth pursuit and saccadic eye movements.

    Science.gov (United States)

    Braun, Doris I; Schütz, Alexander C; Gegenfurtner, Karl R

    2017-07-01

    Visual sensitivity is dynamically modulated by eye movements. During saccadic eye movements, sensitivity is reduced selectively for low-spatial frequency luminance stimuli and largely unaffected for high-spatial frequency luminance and chromatic stimuli (Nature 371 (1994), 511-513). During smooth pursuit eye movements, sensitivity for low-spatial frequency luminance stimuli is moderately reduced while sensitivity for chromatic and high-spatial frequency luminance stimuli is even increased (Nature Neuroscience, 11 (2008), 1211-1216). Since these effects are at least partly of different polarity, we investigated the combined effects of saccades and smooth pursuit on visual sensitivity. For the time course of chromatic sensitivity, we found that detection rates increased slightly around pursuit onset. During saccades to static and moving targets, detection rates dropped briefly before the saccade and reached a minimum at saccade onset. This reduction of chromatic sensitivity was present whenever a saccade was executed and it was not modified by subsequent pursuit. We also measured contrast sensitivity for flashed high- and low-spatial frequency luminance and chromatic stimuli during saccades and pursuit. During saccades, the reduction of contrast sensitivity was strongest for low-spatial frequency luminance stimuli (about 90%). However, a significant reduction was also present for chromatic stimuli (about 58%). Chromatic sensitivity was increased during smooth pursuit (about 12%). These results suggest that the modulation of visual sensitivity during saccades and smooth pursuit is more complex than previously assumed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Prey capture behaviour evoked by simple visual stimuli in larval zebrafish

    Directory of Open Access Journals (Sweden)

    Isaac Henry Bianco

    2011-12-01

    Full Text Available Understanding how the nervous system recognises salient stimuli in the environment and selects and executes the appropriate behavioural responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually-guided behaviour in larval zebrafish, we developed virtual reality assays in which precisely controlled visual cues can be presented to larvae whilst their behaviour is automatically monitored using machine-vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) towards small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analysing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behaviour in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey.
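
    The machine-vision step in such assays often reduces to thresholding each camera frame and taking the centroid of the dark larva pixels; a minimal numpy sketch with an artificial frame follows (the threshold and frame are invented, and real pipelines add background subtraction and tail fitting).

      import numpy as np

      def larva_centroid(frame, dark_threshold=50):
          # Return the (row, col) centroid of pixels darker than the threshold.
          mask = frame < dark_threshold        # the larva appears dark on a bright background
          if not mask.any():
              return None                      # nothing detected in this frame
          rows, cols = np.nonzero(mask)
          return rows.mean(), cols.mean()

      # Artificial 100 x 100 frame: bright background with a small dark blob.
      frame = np.full((100, 100), 200, dtype=np.uint8)
      frame[40:45, 60:68] = 10
      print(larva_centroid(frame))             # approximately (42.0, 63.5)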

  10. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    Science.gov (United States)

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed at determining the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization level (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level dependent (BOLD) contrast. Metabolic responses were assessed using functional 1H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between the metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  11. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than the unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and should inspire future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It provides new insight into truly multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.

  12. [WMN: a negative ERPs component related to working memory during non-target visual stimuli processing].

    Science.gov (United States)

    Zhao, Lun; Wei, Jin-he

    2003-10-01

    To study non-target stimulus processing in the brain, features of the event-related potentials (ERPs) elicited by non-target stimuli during a selective response (SR) task were compared with those during a visual selective discrimination (DR) task in 26 normal subjects. The stimuli consisted of two color LED flashes (red and green) appearing randomly in the left (LVF) or right (RVF) visual field with equal probability. ERPs were derived at 9 electrode sites on the scalp under 2 task conditions: a) SR, making a switch response in one direction to the target (T) stimuli from the LVF or RVF and making no response to the non-target (NT) ones; b) DR, making switch responses to T stimuli differentially, i.e., to the left for T from the LVF and to the right for T from the RVF. The results showed that: 1) the non-target stimuli in the DR condition, compared with the SR condition, elicited smaller P2 and P3 components and a larger N2 component at the frontal brain areas; 2) a significant negative component, named the WMN (working memory negativity), appeared in the non-target ERPs during DR in the period of 100 to 700 ms post-stimulation and was predominant at the frontal brain areas. Given the major difference between brain activities for non-target stimuli during SR and DR, the predominant appearance of the WMN at the frontal brain areas demonstrates that non-target stimulus processing is an active process related to working memory, i.e., the temporary elimination and retrieval of the response mode stored in working memory.

  13. Metabolic response to optic centers to visual stimuli in the albino rat: anatomical and physiological considerations

    International Nuclear Information System (INIS)

    Toga, A.W.; Collins, R.C.

    1981-01-01

    The functional organization of the visual system was studied in the albino rat. Metabolic differences were measured using the 14C-2-deoxyglucose (DG) autoradiographic technique during visual stimulation of one entire retina in unrestrained animals. All optic centers responded to changes in light intensity, but to different degrees. The greatest change occurred in the superior colliculus, less in the lateral geniculate, and considerably less in second-order sites such as layer IV of visual cortex. These optic centers responded in particular to on/off stimuli, but showed no incremental change during pattern reversal or movement of orientation stimuli. Both the superior colliculus and the lateral geniculate increased their metabolic rate as the frequency of stimulation increased, but the magnitude was twice as great in the colliculus. The histological pattern of metabolic change in the visual system was not homogeneous. In the superior colliculus glucose utilization increased only in stratum griseum superficiale and was greatest in visuotopic regions representing the peripheral portions of the visual field. Similarly, in the lateral geniculate, only the dorsal nucleus showed an increased response to greater stimulus frequencies. Second-order regions of the visual system showed changes in metabolism in response to visual stimulation, but no incremental response specific to the type or frequency of stimuli. 14C-amino acids were used to label proteins of axoplasmic transport in order to study the terminal fields of retinal projections. This was done to study how the differences in the magnitude of the metabolic response among optic centers were related to the relative quantity of retinofugal projections to these centers.

  14. Effects of External Stimuli on Microstructure-Property Relationship at the Nanoscale

    Science.gov (United States)

    Wang, Baoming

    The technical contribution of this research is a unique nanofabricated experimental setup that integrates nanoscale specimens with tools for interrogating mechanical (stress-strain, fracture, and fatigue), thermal and electrical (conductivity) properties as a function of external stimuli such as strain, temperature, electrical field and radiation. It addresses the shortcomings of state-of-the-art characterization techniques, which are yet to perform such simultaneous and multi-domain measurements. Our technique has virtually no restriction on specimen material type and thickness, which makes the setup versatile. It is demonstrated with 100 nm thick nickel, aluminum, and zirconium; 25 nm thick molybdenum disulphide (MoS2) and 10 nm hexagonal boron nitride (h-BN) specimens; and 100 nm carbon nanofiber, all in freestanding thin film form. The technique is compatible with transmission electron microscopy (TEM). In-situ TEM captures microstructural features (defects, phases, precipitates and interfaces), diffraction patterns and chemical microanalysis in real time. 'Seeing the microstructure while measuring properties' is our unique capability. It helps identify fundamental mechanisms behind thermo-electro-mechanical coupling and degradation, so that these mechanisms can be used to (i) explain the results obtained for mesoscale specimens of the same materials and experimental conditions and (ii) develop computational models to explain and predict properties at both nano and meso scales. The uniqueness of this contribution is therefore simultaneously quantitative and qualitative probing of length-scale dependent external stimuli effects on microstructures and physical properties of nanoscale materials. The scientific contribution of this research is the experimental validation of the fundamental hypothesis that, if the nanoscale size can cause significant deviation in a certain domain, e.g., mechanical, it can also make that domain more sensitive to external stimuli when

  15. Partial recovery of visual-spatial remapping of touch after restoring vision in a congenitally blind man.

    Science.gov (United States)

    Ley, Pia; Bottari, Davide; Shenoy, Bhamy H; Kekunnaya, Ramesh; Röder, Brigitte

    2013-05-01

    In an initial processing step, sensory events are encoded in modality specific representations in the brain but seem to be automatically remapped into a supra-modal, presumably visual-external frame of reference. To test whether there is a sensitive phase in the first years of life during which visual input is crucial for the acquisition of this remapping process, we tested a single case of a congenitally blind man whose sight was restored after the age of two years. HS performed a tactile temporal order judgment task (TOJ) which required judging the temporal order of two tactile stimuli, one presented to each index finger. In addition, a visual-tactile cross-modal congruency task was run, in which spatially congruent and spatially incongruent visual distractor stimuli were presented together with tactile stimuli. The tactile stimuli had to be localized. Both tasks were performed with an uncrossed and a crossed hand posture. Similar to congenitally blind individuals HS did not show a crossing effect in the tactile TOJ task suggesting an anatomical rather than visual-external coding of touch. In the visual-tactile task, however, external remapping of touch was observed though incomplete compared to sighted controls. These data support the hypothesis of a sensitive phase for the acquisition of an automatic use of visual-spatial representations for coding tactile input. Nonetheless, these representations seem to be acquired to some extent after the end of congenital blindness but seem to be recruited only in the context of visual stimuli and are used with a reduced efficiency. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. P1-32: Response of Human Visual System to Paranormal Stimuli Appearing in Three-Dimensional Display

    Directory of Open Access Journals (Sweden)

    Jisoo Hong

    2012-10-01

    Full Text Available Three-dimensional (3D) displays have become one of the indispensable features of commercial TVs in recent years. However, 3D content may contain abrupt changes of depth when the scene changes, which can be considered a paranormal stimulus. Because the human visual system is not accustomed to such paranormal stimuli in natural conditions, they can cause unexpected responses which usually induce discomfort. Following the change of depth expressed by the 3D display, the eyeballs rotate to match convergence to the new 3D image position. The amount of rotation varies according to the initial longitudinal location and depth displacement of the 3D image. Because the change of depth is abrupt, there is a delay in the human visual system in following the change, and such delay can be a source of discomfort. To guarantee safety in watching 3D TV, the acceptable level of displacement in the longitudinal direction should be determined quantitatively. Additionally, artificially generated scenes can also provide paranormal stimuli such as periodic depth variations. In the presentation, we investigate the response of the human visual system to such paranormal stimuli produced by a 3D display system. Using the results of this investigation, we can provide guidelines for creating 3D content that minimizes the discomfort coming from the paranormal stimuli.

  17. A Basic Study on P300 Event-Related Potentials Evoked by Simultaneous Presentation of Visual and Auditory Stimuli for the Communication Interface

    Directory of Open Access Journals (Sweden)

    Masami Hashimoto

    2011-10-01

    Full Text Available We have been engaged in the development of a brain-computer interface (BCI) based on the cognitive P300 event-related potentials (ERPs) evoked by simultaneous presentation of visual and auditory stimuli, in order to assist with communication for persons with severe physical limitations. The purpose of the simultaneous presentation of these stimuli is to give the user more choices as commands. First, we extracted P300 ERPs using either a visual or an auditory oddball paradigm. The amplitude and latency of the P300 ERPs were then measured. Second, visual and auditory stimuli were presented simultaneously, and we measured the P300 ERPs while varying the combinations of these stimuli. In this report, we used 3 colors as visual stimuli and 3 types of MIDI sounds as auditory stimuli. Two types of simultaneous presentation were examined. One was conducted with random combinations. The other, called group stimulation, combined one color, such as red, with one MIDI sound, such as piano, to make a group; three groups were made. Each group was presented to users randomly. We evaluated the feasibility of a BCI using these stimuli from the amplitudes and latencies of the P300 ERPs.
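    As an illustration of the two presentation schemes compared above (independent random color-sound combinations versus fixed color-sound groups), here is a minimal Python sketch of generating the trial sequences; the specific colors, MIDI sound names, and trial count are illustrative assumptions, not details taken from the study.

```python
import random

COLORS = ["red", "green", "blue"]         # assumed visual stimuli
SOUNDS = ["piano", "organ", "trumpet"]    # assumed MIDI sounds

def random_combination_trials(n_trials):
    """Each trial pairs an independently drawn color and sound."""
    return [(random.choice(COLORS), random.choice(SOUNDS)) for _ in range(n_trials)]

def grouped_trials(n_trials):
    """Fixed color-sound groups (e.g., red + piano) presented in random order."""
    groups = list(zip(COLORS, SOUNDS))    # three fixed audio-visual groups
    return [random.choice(groups) for _ in range(n_trials)]

if __name__ == "__main__":
    print(random_combination_trials(5))
    print(grouped_trials(5))
```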

  18. Flory-type theories of polymer chains under different external stimuli

    Science.gov (United States)

    Budkov, Yu A.; Kiselev, M. G.

    2018-01-01

    In this Review, we present a critical analysis of various applications of Flory-type theories to the theoretical description of the conformational behavior of single polymer chains in dilute polymer solutions under several external stimuli. Different theoretical models of flexible polymer chains in a supercritical fluid are discussed and analysed. Different points of view on the conformational behavior of the polymer chain near the liquid-gas transition critical point of the solvent are presented. A theoretical description of the co-solvent-induced coil-globule transitions within the implicit-solvent-explicit-co-solvent models is discussed. Several explicit-solvent-explicit-co-solvent theoretical models of the coil-to-globule-to-coil transition of the polymer chain in a mixture of good solvents (co-nonsolvency) are analysed and compared with each other. Finally, a new theoretical model of the conformational behavior of a dielectric polymer chain under an external constant electric field in dilute polymer solution, with an explicit account of many-body dipole correlations, is discussed. The polymer chain collapse induced by many-body dipole correlations of monomers is analysed in the context of the statistical thermodynamics of dielectric polymers.
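    For readers unfamiliar with the framework, a schematic single-chain Flory-type free energy (the generic textbook form, not a formula quoted from this review) balances elastic entropy against two-body monomer repulsion:

```latex
\frac{F(R)}{k_{B}T} \simeq \frac{3R^{2}}{2Nb^{2}} + \frac{v\,N^{2}}{2R^{3}}
```

    Minimizing over the chain size R recovers the classic swollen-coil scaling R ~ b (v/b^3)^{1/5} N^{3/5}; in Flory-type treatments, the external stimuli surveyed in the review (co-solvents, solvent criticality, electric fields) enter through additional free-energy contributions of this kind.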

  19. Dynamical responses to external stimuli for both cases of excitatory and inhibitory synchronization in a complex neuronal network.

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2017-10-01

    For studying how dynamical responses to external stimuli depend on the synaptic-coupling type, we consider two types of excitatory and inhibitory synchronization (i.e., synchronization via synaptic excitation and inhibition) in complex small-world networks of excitatory regular spiking (RS) pyramidal neurons and inhibitory fast spiking (FS) interneurons. For both cases of excitatory and inhibitory synchronization, effects of synaptic couplings on dynamical responses to external time-periodic stimuli S(t) (applied to a fraction of neurons) are investigated by varying the driving amplitude A of S(t). Stimulated neurons are phase-locked to external stimuli for both cases of excitatory and inhibitory couplings. On the other hand, the stimulation effect on non-stimulated neurons depends on the type of synaptic coupling. The external stimulus S(t) makes a constructive effect on excitatory non-stimulated RS neurons (i.e., it causes external phase lockings in the non-stimulated sub-population), while S(t) makes a destructive effect on inhibitory non-stimulated FS interneurons (i.e., it breaks up original inhibitory synchronization in the non-stimulated sub-population). As a result of these different effects of S(t), the type and degree of dynamical response (e.g., synchronization enhancement or suppression), characterized by the dynamical response factor (the ratio of the synchronization degree in the presence and absence of the stimulus), are found to vary in distinctly different ways, depending on the synaptic-coupling type. Furthermore, we also measure the matching degree between the dynamics of the two sub-populations of stimulated and non-stimulated neurons in terms of a "cross-correlation" measure. With increasing A, we use this measure to discuss the cross-correlations between the two sub-populations, which affect the dynamical responses to S(t).
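    As a rough illustration of the two quantities described above, the Python sketch below uses generic signal-based proxies (not the specific spiking order parameters of the paper): a response factor defined as the ratio of a synchrony measure with and without the stimulus, and a zero-lag normalized cross-correlation between the stimulated and non-stimulated sub-populations.

```python
import numpy as np

def sync_degree(pop_rate):
    """Variance of a population-averaged signal as a crude synchrony proxy."""
    return np.var(pop_rate)

def response_factor(rate_with_stim, rate_without_stim):
    """Ratio of synchrony in the presence vs. absence of the external stimulus."""
    return sync_degree(rate_with_stim) / sync_degree(rate_without_stim)

def cross_correlation(rate_stimulated, rate_nonstimulated):
    """Zero-lag normalized cross-correlation between the two sub-populations."""
    a = rate_stimulated - rate_stimulated.mean()
    b = rate_nonstimulated - rate_nonstimulated.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```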

  20. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  1. Correlation between MEG and BOLD fMRI signals induced by visual flicker stimuli

    Institute of Scientific and Technical Information of China (English)

    Chu Renxin; Holroyd Tom; Duyn Jeff

    2007-01-01

    The goal of this work was to investigate how the MEG signal amplitude correlates with that of BOLD fMRI. To investigate the correlation between fMRI and macroscopic electrical activity, BOLD fMRI and MEG were performed on the same subjects (n = 5). A visual flicker stimulus of varying temporal frequency was used to elicit neural responses in early visual areas. A strong similarity was observed in the frequency tuning curves of both modalities. Although, averaged over subjects, the BOLD tuning curve was somewhat broader than that of MEG, both BOLD and MEG had maxima at a flicker frequency of 10 Hz. We also measured the first and second harmonic components at the stimulus frequency with MEG. At low stimulus frequencies (less than 6 Hz), the second harmonic has an amplitude comparable to that of the first harmonic, which implies that the neural frequency response is nonlinear and contains more nonlinear components at low frequencies than at high frequencies.
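    A minimal sketch of extracting first- and second-harmonic amplitudes of a steady-state response at a known flicker frequency (generic numpy code, not the authors' analysis pipeline; the sampling rate and stimulus frequency below are placeholders):

```python
import numpy as np

def harmonic_amplitudes(signal, fs, f_stim, n_harmonics=2):
    """Amplitude-spectrum values at the stimulus frequency and its harmonics."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - h * f_stim))]
            for h in range(1, n_harmonics + 1)]

# Example: a synthetic 10 Hz response with a weaker second harmonic
fs, f_stim = 1000.0, 10.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * f_stim * t) + 0.3 * np.sin(2 * np.pi * 2 * f_stim * t)
print(harmonic_amplitudes(x, fs, f_stim))  # roughly [0.5, 0.15]
```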

  2. Self-initiated actions result in suppressed auditory but amplified visual evoked components in healthy participants.

    Science.gov (United States)

    Mifsud, Nathan G; Oestreich, Lena K L; Jack, Bradley N; Ford, Judith M; Roach, Brian J; Mathalon, Daniel H; Whitford, Thomas J

    2016-05-01

    Self-suppression refers to the phenomenon that sensations initiated by our own movements are typically less salient, and elicit an attenuated neural response, compared to sensations resulting from changes in the external world. Evidence for self-suppression is provided by previous ERP studies in the auditory modality, which have found that healthy participants typically exhibit a reduced auditory N1 component when auditory stimuli are self-initiated as opposed to externally initiated. However, the literature investigating self-suppression in the visual modality is sparse, with mixed findings and experimental protocols. An EEG study was conducted to expand our understanding of self-suppression across different sensory modalities. Healthy participants experienced either an auditory (tone) or visual (pattern-reversal) stimulus following a willed button press (self-initiated), a random interval (externally initiated, unpredictable onset), or a visual countdown (externally initiated, predictable onset, to match the intrinsic predictability of self-initiated stimuli), while EEG was continuously recorded. Reduced N1 amplitudes for self- versus externally initiated tones indicated that self-suppression occurred in the auditory domain. In contrast, the visual N145 component was amplified for self- versus externally initiated pattern reversals. Externally initiated conditions did not differ as a function of their predictability. These findings highlight a difference in sensory processing of self-initiated stimuli across modalities, and may have implications for clinical disorders that are ostensibly associated with abnormal self-suppression. © 2016 Society for Psychophysiological Research.

  3. Conformation and structural changes of diblock copolymers with octopus-like micelle formation in the presence of external stimuli

    Science.gov (United States)

    Dammertz, K.; Saier, A. M.; Marti, O.; Amirkhani, M.

    2014-04-01

    External stimuli such as vapours and electric fields can be used to manipulate the formation of AB-diblock copolymers on surfaces. We study the conformational variation of PS-b-PMMA (polystyrene-block-poly(methyl methacrylate)), PS and PMMA adsorbed on mica and their response to saturated water or chloroform atmospheres. Using specimens with only partial polymer coverage, new unanticipated effects were observed. Water vapour, a non-solvent for all three polymers, was found to cause high surface mobility. In contrast, chloroform vapour (a solvent for all three polymers) proved to be less efficient. Furthermore, the influence of an additional applied electric field was investigated. A dc field oriented parallel to the sample surface induces the formation of polymer islands which assemble into wormlike chains. Moreover, PS-b-PMMA forms octopus-like micelles (OLMs) on mica. Under the external stimuli mentioned above, the wormlike formations of OLMs are able to align in the direction of the external electric field. In the absence of an electric field, the OLMs disaggregate and exhibit phase separated structures under chloroform vapour.

  4. Conformation and structural changes of diblock copolymers with octopus-like micelle formation in the presence of external stimuli

    International Nuclear Information System (INIS)

    Dammertz, K; Saier, A M; Marti, O; Amirkhani, M

    2014-01-01

    External stimuli such as vapours and electric fields can be used to manipulate the formation of AB-diblock copolymers on surfaces. We study the conformational variation of PS-b-PMMA (polystyrene-block-poly(methyl methacrylate)), PS and PMMA adsorbed on mica and their response to saturated water or chloroform atmospheres. Using specimens with only partial polymer coverage, new unanticipated effects were observed. Water vapour, a non-solvent for all three polymers, was found to cause high surface mobility. In contrast, chloroform vapour (a solvent for all three polymers) proved to be less efficient. Furthermore, the influence of an additional applied electric field was investigated. A dc field oriented parallel to the sample surface induces the formation of polymer islands which assemble into wormlike chains. Moreover, PS-b-PMMA forms octopus-like micelles (OLMs) on mica. Under the external stimuli mentioned above, the wormlike formations of OLMs are able to align in the direction of the external electric field. In the absence of an electric field, the OLMs disaggregate and exhibit phase separated structures under chloroform vapour. (paper)

  5. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.

  6. The Effects of Task Demand and External Stimuli on Learner's Stress Perception and Performance

    OpenAIRE

    Lim, Yee Mei; Ayesh, Aladdin, 1972-; Stacey, Martin; Tan, Li Peng

    2016-01-01

    Over the past decades, research in e-learning has begun to take emotions into account, which is also known as affective learning. It advocates an education system that is sentient of learner's cognitive and affective states, as learners' performance could be affected by emotional factors. This exploratory research examines the impacts of task demand and external stimuli on learner's stress perception and job performance. Experiments are conducted on 160 undergraduate students from a higher le...

  7. Stress Induction and Visual Working Memory Performance: The Effects of Emotional and Non-Emotional Stimuli

    Directory of Open Access Journals (Sweden)

    Zahra Khayyer

    2017-05-01

    Full Text Available Background Some studies have shown working memory impairment following stressful situations. Also, researchers have found that working memory performance depends on many different factors, such as the emotional load of stimuli and gender. Objectives The present study aimed to determine the effects of stress induction on visual working memory (VWM) performance among female and male university students. Methods This quasi-experimental research employed a posttest-only control group design (within-group study). A total of 62 university students (32 males and 30 females; mean age 23.73) were randomly selected and allocated to experimental and control groups. Using the cold pressor test (CPT), stress was induced, and then an n-back task was implemented to evaluate visual working memory function (the number of true items, reaction times, and the number of wrong items) with emotional and non-emotional pictures. 100 pictures with different valences were selected from the International Affective Picture System (IAPS). Results Results showed that stress impaired different visual working memory functions (P < 0.002 for true scores, P < 0.001 for reaction time, and P < 0.002 for wrong items). Conclusions In general, stress significantly decreases VWM performance. Furthermore, females were more strongly affected by stress than males, and VWM performance was better for emotional stimuli than for non-emotional stimuli.

  8. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    Science.gov (United States)

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Numerosity estimation in visual stimuli in the absence of luminance-based cues.

    Directory of Open Access Journals (Sweden)

    Peter Kramer

    2011-02-01

    Full Text Available Numerosity estimation is a basic preverbal ability that humans share with many animal species and that is believed to be foundational to numeracy skills. It is notoriously difficult, however, to establish whether numerosity estimation is based on numerosity itself, or on one or more non-numerical cues such as (in visual stimuli) spatial extent and density. Frequently, different non-numerical cues are held constant on different trials. This strategy, however, still allows numerosity estimation to be based on a combination of non-numerical cues rather than on any particular one by itself. Here we introduce a novel method, based on second-order (contrast-based) visual motion, to create stimuli that exclude all first-order (luminance-based) cues to numerosity. We show that numerosities can be estimated almost as well in second-order motion as in first-order motion. The results show that numerosity estimation need not be based on first-order spatial filtering, first-order density perception, or any other processing of luminance-based cues to numerosity. Our method can be used as an effective tool to control non-numerical variables in studies of numerosity estimation.
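    To make the first-order/second-order distinction concrete, here is a generic numpy sketch (not the authors' stimulus code) of a contrast-defined, second-order drifting pattern: a random-noise carrier whose local contrast, rather than its luminance, is modulated by a moving envelope.

```python
import numpy as np

def second_order_motion_frames(n_frames=60, size=256, cycles=4.0, speed=0.05,
                               base_contrast=0.2, mod_depth=0.8, seed=0):
    """Frames of binary noise whose contrast envelope drifts horizontally.

    The mean luminance is constant everywhere, so there is no first-order
    (luminance-defined) motion signal; only the contrast modulation moves.
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(0, 2 * np.pi * cycles, size)
    frames = []
    for i in range(n_frames):
        carrier = rng.choice([-1.0, 1.0], size=(size, size))    # fresh noise each frame
        envelope = base_contrast * (1 + mod_depth * np.sin(x - 2 * np.pi * speed * i))
        frames.append(0.5 + 0.5 * carrier * envelope[None, :])  # mean luminance 0.5
    return frames
```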

  10. Cortical responses from adults and infants to complex visual stimuli.

    Science.gov (United States)

    Schulman-Galambos, C; Galambos, R

    1978-10-01

    Event-related potentials (ERPs) time-locked to the onset of visual stimuli were extracted from the EEG of normal adult (N = 16) and infant (N = 23) subjects. Subjects were not required to make any response. Stimuli delivered to the adults were 150 msec exposures of 2 sets of colored slides projected in 4 blocks, 2 in focus and 2 out of focus. Infants received 2-sec exposures of slides showing people, colored drawings or scenes from Disneyland, as well as 2-sec illuminations of the experimenter as she played a game or of a TV screen the baby was watching. The adult ERPs showed 6 waves (N1 through P4) in the 140-600 msec range; this included a positive wave at around 350 msec that was large when the stimuli were focused and smaller when they were not. The waves in the 150-200 msec range, by contrast, steadily dropped in amplitude as the experiment progressed. The infant ERPs differed greatly from the adult ones in morphology, usually showing a positive (latency about 200 msec), negative (500-600 msec), positive (1000 msec) sequence. This ERP appeared in all the stimulus conditions; its presence or absence, furthermore, was correlated with whether or not the baby seemed interested in the stimuli. Four infants failed to produce these ERPs; an independent measure of attention to the stimuli, heart rate deceleration, was demonstrated in two of them. An electrode placed beneath the eye to monitor eye movements yielded ERPs closely resembling those derived from the scalp in most subjects; reasons are given for assigning this response to activity in the brain, probably at the frontal pole. This study appears to be one of the first to search for cognitive 'late waves' in a no-task situation. The results suggest that further work with such task-free paradigms may yield additional useful techniques for studying the ERP.

  11. Weld pool visual sensing without external illumination

    DEFF Research Database (Denmark)

    Liu, Jinchao; Fan, Zhun; Olsen, Soren Ingvor

    2011-01-01

    Visual sensing in arc welding has become more and more important, but still remains challenging because of the harsh environment with extremely strong illumination from the arc. This paper presents a low-cost camera-based sensor system, without using external illumination, but nevertheless able

  12. High-performance execution of psychophysical tasks with complex visual stimuli in MATLAB

    Science.gov (United States)

    Asaad, Wael F.; Santhanam, Navaneethan; McClellan, Steven

    2013-01-01

    Behavioral, psychological, and physiological experiments often require the ability to present sensory stimuli, monitor and record subjects' responses, interface with a wide range of devices, and precisely control the timing of events within a behavioral task. Here, we describe our recent progress developing an accessible and full-featured software system for controlling such studies using the MATLAB environment. Compared with earlier reports on this software, key new features have been implemented to allow the presentation of more complex visual stimuli, increase temporal precision, and enhance user interaction. These features greatly improve the performance of the system and broaden its applicability to a wider range of possible experiments. This report describes these new features and improvements, current limitations, and quantifies the performance of the system in a real-world experimental setting. PMID:23034363

  13. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants who were participating in an auditory-emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults, who showed a strong visual preference for unfamiliar stimuli only. The similarity in auditory responses between children with hearing loss and children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  14. Preserved suppression of salient irrelevant stimuli during visual search in Age-Associated Memory Impairment

    Directory of Open Access Journals (Sweden)

    Laura eLorenzo-López

    2016-01-01

    Full Text Available Previous studies have suggested that older adults with age-associated memory impairment (AAMI) may show a significant decline in attentional resource capacity and inhibitory processes in addition to memory impairment. In the present paper, the potential attentional capture by task-irrelevant stimuli was examined in older adults with AAMI compared to healthy older adults using scalp-recorded event-related brain potentials (ERPs). ERPs were recorded during the execution of a visual search task, in which the participants had to detect the presence of a target stimulus that differed from distractors by orientation. To explore the automatic attentional capture phenomenon, an irrelevant distractor stimulus defined by a different feature (color) was also presented without the participants' prior knowledge. A consistent N2pc, an electrophysiological indicator of attentional deployment, was present for target stimuli but not for task-irrelevant color stimuli, suggesting that these irrelevant distractors did not attract attention in AAMI older adults. Furthermore, the N2pc for targets was significantly delayed in AAMI patients compared to healthy older controls. Together, these findings suggest a specific impairment of the attentional selection process of relevant target stimuli in these individuals and indicate that the mechanism of top-down suppression of entirely task-irrelevant stimuli is preserved, at least when the target and the irrelevant stimuli are perceptually very different.

  15. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Background: The externalization of virtual sounds reproduced via binaural headphone-based auralization systems has been reported to be less robust when the listening environment differs from the room in which binaural room impulse responses (BRIRs) were recorded. It has been debated whether ... Methods: Eighteen naïve listeners rated the externalization of virtual stimuli in terms of perceived distance, azimuthal localization, and compactness in three rooms: 1) a standard IEC listening room, 2) a small reverberant room, and 3) a large dry room. Before testing, individual BRIRs were recorded in room 1 while listeners wore both earplugs and blindfolds. Half of the listeners were then blindfolded during testing but were provided auditory awareness of the room via a controlled noise source (condition A). The other half could see the room but were shielded from room-related acoustic input and tested ...

  16. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    OpenAIRE

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for...

  17. When goals conflict with values: counterproductive attentional and oculomotor capture by reward-related stimuli.

    Science.gov (United States)

    Le Pelley, Mike E; Pearson, Daniel; Griffiths, Oren; Beesley, Tom

    2015-02-01

    Attention provides the gateway to cognition, by selecting certain stimuli for further analysis. Recent research demonstrates that whether a stimulus captures attention is not determined solely by its physical properties, but is malleable, being influenced by our previous experience of rewards obtained by attending to that stimulus. Here we show that this influence of reward learning on attention extends to task-irrelevant stimuli. In a visual search task, certain stimuli signaled the magnitude of available reward, but reward delivery was not contingent on responding to those stimuli. Indeed, any attentional capture by these critical distractor stimuli led to a reduction in the reward obtained. Nevertheless, distractors signaling large reward produced greater attentional and oculomotor capture than those signaling small reward. This counterproductive capture by task-irrelevant stimuli is important because it demonstrates how external reward structures can produce patterns of behavior that conflict with task demands, and similar processes may underlie problematic behavior directed toward real-world rewards.

  18. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions.

    Science.gov (United States)

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2018-04-12

    Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

  19. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    Directory of Open Access Journals (Sweden)

    Ahmadreza Keihani

    2018-05-01

    Full Text Available Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED display can present high-frequency waveforms with a better refresh rate. In this study, we present simple and rhythmic high-frequency sine-wave patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns were chosen: 3 simple (repetition of each of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all three frequencies in 6 different sequences, e.g., P25-30-35). A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects aged 23-30 years (25 ± 2.1) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data, including SSVEP features and fatigue rate for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and >90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic patterns group with low THD rate showed a higher accuracy rate (99.24%) than the simple patterns group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic [3.85 ± 2.13] and simple [3.96 ± 2.21] patterns groups (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25
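    For context on the CCA analysis mentioned above, the following is a generic sketch of CCA-based SSVEP target-frequency detection using scikit-learn; the channel layout, window length, and harmonic set are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the candidate frequency whose sine/cosine reference set is most
    correlated (canonically) with the EEG segment.

    eeg: array of shape (n_samples, n_channels).
    """
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        # Reference signals at the fundamental and its harmonics
        ref = np.column_stack(
            [np.sin(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)] +
            [np.cos(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
        )
        x_c, y_c = CCA(n_components=1).fit_transform(eeg, ref)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores
```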

  20. System to induce and measure embodiment of an artificial hand with programmable convergent visual and tactile stimuli.

    Science.gov (United States)

    Benz, Heather L; Sieff, Talia R; Alborz, Mahsa; Kontson, Kimberly; Kilpatrick, Elizabeth; Civillico, Eugene F

    2016-08-01

    The sense of prosthesis embodiment, or the feeling that the device has been incorporated into a user's body image, may be enhanced by emerging technology such as invasive electrical stimulation for sensory feedback. In turn, prosthesis embodiment may be linked to increased prosthesis use and improved functional outcomes. We describe the development of a tool to assay artificial hand embodiment in a quantitative way in people with intact limbs, and characterize its operation. The system delivers temporally coordinated visual and tactile stimuli at a programmable latency while recording limb temperature. When programmed to deliver visual and tactile stimuli synchronously, the recorded latency between the two was 33 ± 24 ms in the final pilot subject. This system enables standardized assays of the conditions necessary for prosthesis embodiment.

  1. An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.

    Science.gov (United States)

    Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan

    2015-08-15

    This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task, involving randomly presented frequent stimuli and two types of infrequent stimuli - targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. As for distractors that follow targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare stimuli intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Effect of a combination of flip and zooming stimuli on the performance of a visual brain-computer interface for spelling.

    Science.gov (United States)

    Cheng, Jiao; Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Bei; Wang, Xingyu; Cichocki, Andrzej

    2018-02-13

    Brain-computer interface (BCI) systems can allow their users to communicate with the external world by recognizing intention directly from their brain activity without the assistance of the peripheral motor nervous system. The P300-speller is one of the most widely used visual BCI applications. In previous studies, a flip stimulus (rotating the background area of the character), which was based on apparent motion, suffered less from refractory effects. However, its performance was not improved significantly. In addition, a presentation paradigm that used a "zooming" action (changing the size of the symbol) has been shown to evoke relatively higher P300 amplitudes and obtain a better BCI performance. To extend this method of stimulus presentation within a BCI and, consequently, to improve BCI performance, we present a new paradigm combining the flip stimulus with a zooming action. This new presentation modality allowed BCI users to focus their attention more easily. We investigated whether such an action could combine the advantages of both types of stimulus presentation to bring a significant improvement in performance compared to the conventional flip stimulus. The experimental results showed that the proposed paradigm could obtain significantly higher classification accuracies and bit rates than the conventional flip paradigm (p<0.01).

  3. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    Science.gov (United States)

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  4. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    Science.gov (United States)

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  5. Stress Sensitive Healthy Females Show Less Left Amygdala Activation in Response to Withdrawal-Related Visual Stimuli under Passive Viewing Conditions

    Science.gov (United States)

    Baeken, Chris; Van Schuerbeek, Peter; De Raedt, Rudi; Vanderhasselt, Marie-Anne; De Mey, Johan; Bossuyt, Axel; Luypaert, Robert

    2012-01-01

    The amygdalae are key players in the processing of a variety of emotional stimuli. Especially aversive visual stimuli have been reported to attract attention and activate the amygdalae. However, as it has been argued that passively viewing withdrawal-related images could attenuate instead of activate amygdalae neuronal responses, its role under…

  6. Visual attention to spatial and non-spatial visual stimuli is affected differentially by age: effects on event-related brain potentials and performance data.

    NARCIS (Netherlands)

    Talsma, D.; Kok, A.; Ridderinkhof, K.R.

    2006-01-01

    To assess selective attention processes in young and old adults, behavioral and event-related potential (ERP) measures were recorded. Streams of visual stimuli were presented from left or right locations (Experiment 1) or from a central location and comprising two different spatial frequencies

  7. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the pattern of alteration with the expansion of SOA was similar to that of younger adults; however, older adults showed significantly delayed onset of the time-window-of-integration and delayed peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that responses in older adults were slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
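    In ERP studies of this kind, audiovisual integration is commonly quantified with an additive-model contrast that compares the response to the audiovisual pair against the sum of the unisensory responses; the sketch below illustrates that generic contrast and a peak-latency readout (array shapes are assumptions, and this is not necessarily the authors' exact analysis).

```python
import numpy as np

def av_interaction(erp_av, erp_a, erp_v):
    """Additive-model audiovisual interaction: AV - (A + V).

    Each input is a trial-averaged ERP array of shape (n_channels, n_samples).
    Positive values suggest super-additive (enhanced) integration and
    negative values suggest sub-additive (depressed) integration.
    """
    return erp_av - (erp_a + erp_v)

def peak_latency(interaction, times):
    """Latency (same units as `times`) of the largest absolute interaction."""
    chan_idx, time_idx = np.unravel_index(np.argmax(np.abs(interaction)),
                                          interaction.shape)
    return times[time_idx]
```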

  8. Emotion attribution to basic parametric static and dynamic stimuli

    NARCIS (Netherlands)

    Visch, V.; Goudbeek, M.B.; Cohn, J.; Nijholt, A.; Pantic, P.

    2009-01-01

    The following research investigates the effect of basic visual stimuli on the attribution of basic emotions by the viewer. In an empirical study (N = 33) we used two groups of minimally expressive visual stimuli: dynamic and static. The dynamic stimuli consisted of an animated circle moving

  9. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine activation in the visual and auditory cortices of each macaque during testing with pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A. These findings suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than in the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  10. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  11. The relationship between age and brain response to visual erotic stimuli in healthy heterosexual males.

    Science.gov (United States)

    Seo, Y; Jeong, B; Kim, J-W; Choi, J

    2010-01-01

    Various changes in sexuality, including decreased sexual desire and erectile dysfunction, accompany aging. To understand the effect of aging on sexuality, we explored the relationship between age and the brain response to visual erotic stimulation in sexually active male subjects. In twelve healthy heterosexual male subjects (age 22-47 years), we recorded functional magnetic resonance imaging (fMRI) signals of brain activation elicited by passive viewing of erotic (ERO), happy-faced (HA) couple, food, and nature pictures. Mixed-effect analysis and correlation analysis were performed to investigate the relationship between age and the change in brain activity elicited by erotic stimuli. Our results showed that age was positively correlated with the activation of the right occipital fusiform gyrus and amygdala, and negatively correlated with the activation of the right insula and inferior frontal gyrus. These findings suggest that age might be related to functional decline in brain regions involved in both interoceptive sensation and prefrontal modulation, while it is related to incremental activity of the brain region for early processing of visual emotional stimuli in sexually healthy men.

  12. Spatiotopic coding of BOLD signal in human visual cortex depends on spatial attention.

    Directory of Open Access Journals (Sweden)

    Sofia Crespi

    Full Text Available The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps where the response is selective for the position of the stimulus in external space, rather than for retinal eccentricity, but evidence for these maps has been inconsistent. Here we show, with fMRI, that when human subjects concomitantly performed a demanding attentional task on stimuli displayed at the fovea, BOLD responses evoked by task-irrelevant moving stimuli were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could attend easily to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO and V6, in agreement with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.

  13. Virtual reality stimuli for force platform posturography.

    Science.gov (United States)

    Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko

    2002-01-01

    People who rely heavily on vision for the control of posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. According to the tests, two of the stimuli had a significant effect on balance.

  14. Visual attention to meaningful stimuli by 1- to 3-year olds: implications for the measurement of memory.

    Science.gov (United States)

    Hayne, Harlene; Jaeger, Katja; Sonne, Trine; Gross, Julien

    2016-11-01

    The visual recognition memory (VRM) paradigm has been widely used to measure memory during infancy and early childhood; it has also been used to study memory in human and nonhuman adults. Typically, participants are familiarized with stimuli that have no special significance to them. Under these conditions, greater attention to the novel stimulus during the test (i.e., novelty preference) is used as the primary index of memory. Here, we took a novel approach to the VRM paradigm and tested 1-, 2-, and 3-year olds using photos of meaningful stimuli that were drawn from the participants' own environment (e.g., photos of their mother, father, siblings, house). We also compared their performance to that of participants of the same age who were tested in an explicit pointing version of the VRM task. Two- and 3-year olds exhibited a strong familiarity preference for some, but not all, of the meaningful stimuli; 1-year olds did not. At no age did participants exhibit the kind of novelty preference that is commonly used to define memory in the VRM task. Furthermore, when compared to pointing, looking measures provided a rough approximation of recognition memory, but in some instances, the looking measure underestimated retention. The use of meaningful stimuli raises important questions about the way in which visual attention is interpreted in the VRM paradigm, and may provide new opportunities to measure memory during infancy and early childhood. © 2016 Wiley Periodicals, Inc.

  15. Economic valuation of the visual externalities of off-shore wind farms

    DEFF Research Database (Denmark)

    Ladenburg, Jacob; Dubgaard, Alex; Martinsen, Louise

    The primary focus of the study presented in this report is visual externalities of off-shore wind farms and the Danish population's willingness to pay for having these externalities reduced. The investigation is part of the Danish monitoring programme for off-shore wind farms, comprising several...

  16. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  17. Pattern transformations in periodic cellular solids under external stimuli

    Science.gov (United States)

    Zhang, K.; Zhao, X. W.; Duan, H. L.; Karihaloo, B. L.; Wang, J.

    2011-04-01

    The structural patterns of periodic cellular materials play an important role in their properties. Here, we investigate how these patterns transform dramatically under external stimuli in simple periodic cellular structures that include a nanotube bundle and a millimeter-size plastic straw bundle. Under gradual hydrostatic straining up to 20%, the cross-section of the single walled carbon nanotube bundle undergoes several pattern transformations, while an amazing new hexagram pattern is triggered from the circular shape when the strain of 20% is applied suddenly in one step. Similar to the nanotube bundle, the circular plastic straw bundle is transformed into a hexagonal pattern on heating by conduction through a baseplate but into a hexagram pattern when heated by convection. Besides the well-known elastic buckling, we find other mechanisms of pattern transformation at different scales; these include the minimization of the surface energy at the macroscale or of the van der Waals energy at the nanoscale and the competition between the elastic energy of deformation and either the surface energy at the macroscale or the van der Waals energy at the nanoscale. The studies of the pattern transformations of periodic porous materials offer new insights into the fabrication of novel materials and devices with tailored properties.

  18. Externalizing proneness and brain response during pre-cuing and viewing of emotional pictures.

    Science.gov (United States)

    Foell, Jens; Brislin, Sarah J; Strickland, Casey M; Seo, Dongju; Sabatinelli, Dean; Patrick, Christopher J

    2016-07-01

    Externalizing proneness, or trait disinhibition, is a concept relevant to multiple high-impact disorders involving impulsive-aggressive behavior. Its mechanisms remain disputed: major models posit hyperresponsive reward circuitry or heightened threat-system reactivity as sources of disinhibitory tendencies. This study evaluated alternative possibilities by examining relations between trait disinhibition and brain reactivity during preparation for and processing of visual affective stimuli. Forty females participated in a functional neuroimaging procedure with stimuli presented in blocks containing either pleasurable or aversive pictures interspersed with neutral, with each picture preceded by a preparation signal. Preparing to view elicited activation in regions including nucleus accumbens, whereas visual regions and bilateral amygdala were activated during viewing of emotional pictures. High disinhibition predicted reduced nucleus accumbens activation during preparation within pleasant/neutral picture blocks, along with enhanced amygdala reactivity during viewing of pleasant and aversive pictures. Follow-up analyses revealed that the augmented amygdala response was related to reduced preparatory activation. Findings indicate that participants high in disinhibition are less able to process implicit cues and mentally prepare for upcoming stimuli, leading to limbic hyperreactivity during processing of actual stimuli. This outcome is helpful for integrating findings from studies suggesting reward-system hyperreactivity and others suggesting threat-system hyperreactivity as mechanisms for externalizing proneness. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  19. Effective visualization assay for alcohol content sensing and methanol differentiation with solvent stimuli-responsive supramolecular ionic materials.

    Science.gov (United States)

    Zhang, Li; Qi, Hetong; Wang, Yuexiang; Yang, Lifen; Yu, Ping; Mao, Lanqun

    2014-08-05

    This study demonstrates a rapid visualization assay for on-the-spot sensing of alcohol content as well as for discriminating methanol-containing beverages with a solvent stimuli-responsive supramolecular ionic material (SIM). The SIM is synthesized by ionic self-assembly of the imidazolium-based dication C10(mim)2 and the dianionic 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) in water and shows water stability, a solvent stimuli-responsive property, and adaptive encapsulation capability. The rationale for the visualization assay demonstrated here is based on the combined utilization of the unique properties of the SIM, including its water stability, ethanol stimuli-responsive feature, and adaptive encapsulation capability toward the optically active rhodamine 6G (Rh6G); the addition of ethanol into a stable aqueous dispersion of Rh6G-encapsulated SIM (Rh6G-SIM) disrupts the Rh6G-SIM structure, resulting in the release of Rh6G from the SIM into the solvent. Alcohol content can thus be visualized with the naked eye through the color change of the dispersion caused by the addition of ethanol. Alcohol content can also be quantified by measuring the fluorescence line of Rh6G released from Rh6G-SIM on a thin-layer chromatography (TLC) plate in response to alcoholic beverages. By fixing the diffusion distance of the mobile phase, the length of the fluorescence line of Rh6G shows a linear relationship with alcohol content (vol %) within a concentration range from 15% to 40%. We utilized this visualization assay for on-the-spot visualization of the alcohol contents of three Chinese commercial spirits and for discriminating methanol-containing counterfeit beverages. We found that the addition of a trace amount of methanol leads to a large increase in the length of the Rh6G line on TLC plates, which provides a method to identify methanol-adulterated beverages with a labeled ethanol content. This study provides a simple yet effective assay for alcohol content sensing and methanol differentiation.
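
    As an illustration of how the reported calibration could be used in practice, the sketch below fits a straight line to known standards and inverts it for an unknown sample. It is a minimal sketch assuming the linear 15-40 vol% regime described above; the variable names and the least-squares fit are illustrative, not taken from the paper.

        import numpy as np

        def estimate_alcohol_content(known_vol_pct, line_length_mm, sample_length_mm):
            """Fit the linear calibration (Rh6G line length vs. alcohol vol%)
            and invert it to estimate the alcohol content of an unknown sample.
            Valid only inside the calibrated 15-40 vol% range."""
            slope, intercept = np.polyfit(known_vol_pct, line_length_mm, 1)
            return (sample_length_mm - intercept) / slope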

  20. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors

    NARCIS (Netherlands)

    Gola, M.; Wordecha, M.; Marchewka, A.; Sescousse, G.T.

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain

  1. Visual Attention in Flies-Dopamine in the Mushroom Bodies Mediates the After-Effect of Cueing.

    Science.gov (United States)

    Koenig, Sebastian; Wolf, Reinhard; Heisenberg, Martin

    2016-01-01

    Visual environments may simultaneously comprise stimuli of different significance. Often such stimuli require incompatible responses. Selective visual attention allows an animal to respond exclusively to the stimuli at a certain location in the visual field. In the process of establishing its focus of attention the animal can be influenced by external cues. Here we characterize the behavioral properties and neural mechanism of cueing in the fly Drosophila melanogaster. A cue can be attractive, repulsive or ineffective depending upon, for example, its visual properties and location in the visual field. Dopamine signaling in the brain is required to maintain the effect of cueing once the cue has disappeared. Raising or lowering dopamine at the synapse abolishes this after-effect. Specifically, dopamine is necessary and sufficient in the αβ-lobes of the mushroom bodies. Evidence is provided for an involvement of the αβ posterior Kenyon cells.

  2. Resting-state functional connectivity remains unaffected by preceding exposure to aversive visual stimuli.

    Science.gov (United States)

    Geissmann, Léonie; Gschwind, Leo; Schicktanz, Nathalie; Deuring, Gunnar; Rosburg, Timm; Schwegler, Kyrill; Gerhards, Christiane; Milnik, Annette; Pflueger, Marlon O; Mager, Ralph; de Quervain, Dominique J F; Coynel, David

    2018-02-15

    While much is known about immediate brain activity changes induced by the confrontation with emotional stimuli, the subsequent temporal unfolding of emotions has yet to be explored. To investigate whether exposure to emotionally aversive pictures affects subsequent resting-state networks differently from exposure to neutral pictures, a resting-state fMRI study implementing a two-group repeated-measures design in healthy young adults (N = 34) was conducted. We focused on investigating (i) patterns of amygdala whole-brain and hippocampus connectivity in both a seed-to-voxel and seed-to-seed approach, (ii) whole-brain resting-state networks with an independent component analysis coupled with dual regression, and (iii) the amygdala's fractional amplitude of low frequency fluctuations, while EEG was recorded to track potential fluctuations in vigilance. In spite of the successful emotion induction, as demonstrated by stimulus ratings and a memory-facilitating effect of negative emotionality, none of the resting-state measures was differentially affected by picture valence. In conclusion, resting-state network connectivity as well as the amygdala's low frequency oscillations appear to be unaffected by preceding exposure to widely used emotionally aversive visual stimuli in healthy young adults. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Visual Experience Shapes the Neural Networks Remapping Touch into External Space.

    Science.gov (United States)

    Crollen, Virginie; Lazzouni, Latifa; Rezk, Mohamed; Bellemare, Antoine; Lepore, Franco; Collignon, Olivier

    2017-10-18

    Localizing touch relies on the activation of skin-based and externally defined spatial frames of reference. Psychophysical studies have demonstrated that early visual deprivation prevents the automatic remapping of touch into external space. We used fMRI to characterize how visual experience impacts the brain circuits dedicated to the spatial processing of touch. Sighted and congenitally blind humans performed a tactile temporal order judgment (TOJ) task, either with the hands uncrossed or crossed over the body midline. Behavioral data confirmed that crossing the hands has a detrimental effect on TOJ judgments in sighted but not in early blind people. Crucially, the crossed hand posture elicited enhanced activity, when compared with the uncrossed posture, in a frontoparietal network in the sighted group only. Psychophysiological interaction analysis revealed, however, that the congenitally blind showed enhanced functional connectivity between parietal and frontal regions in the crossed versus uncrossed hand postures. Our results demonstrate that visual experience scaffolds the neural implementation of the location of touch in space. SIGNIFICANCE STATEMENT In daily life, we seamlessly localize touch in external space for action planning toward a stimulus making contact with the body. For efficient sensorimotor integration, the brain has therefore to compute the current position of our limbs in the external world. In the present study, we demonstrate that early visual deprivation alters the brain activity in a dorsal parietofrontal network typically supporting touch localization in the sighted. Our results therefore conclusively demonstrate the intrinsic role that developmental vision plays in scaffolding the neural implementation of touch perception. Copyright © 2017 the authors 0270-6474/17/3710097-07$15.00/0.

  4. Time- and Space-Order Effects in Timed Discrimination of Brightness and Size of Paired Visual Stimuli

    Science.gov (United States)

    Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake

    2012-01-01

    Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…

  5. Roll motion stimuli : sensory conflict, perceptual weighting and motion sickness

    NARCIS (Netherlands)

    Graaf, B. de; Bles, W.; Bos, J.E.

    1998-01-01

    In an experiment with seventeen subjects, interactions of visual roll motion stimuli and vestibular body tilt stimuli were examined in determining the subjective vertical. Interindividual differences in weighting the visual information were observed, but in general visual and vestibular responses

  6. Temporal attention for visual food stimuli in restrained eaters.

    Science.gov (United States)

    Neimeijer, Renate A M; de Jong, Peter J; Roefs, Anne

    2013-05-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require less attentional resources to enter awareness. Once a food stimulus has captured attention, it may be preferentially processed and granted prioritized access to limited cognitive resources. This might help explain why restrained eaters often fail in their attempts to restrict their food intake. A Rapid Serial Visual Presentation task consisting of dual and single target trials with food and neutral pictures as targets and/or distractors was administered to restrained (n=40) and unrestrained (n=40) eaters to study temporal attentional bias. Results indicated that (1) food cues did not diminish the attentional blink in restrained eaters when presented as second target; (2) specifically restrained eaters showed an interference effect of identifying food targets on the identification of preceding neutral targets; (3) for both restrained and unrestrained eaters, food cues enhanced the attentional blink; (4) specifically in restrained eaters, food distractors elicited an attention blink in the single target trials. In restrained eaters, food cues get prioritized access to limited cognitive resources, even if this processing priority interferes with their current goals. This temporal attentional bias for food stimuli might help explain why restrained eaters typically have difficulties maintaining their diet rules. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Heart rate reactivity associated to positive and negative food and non-food visual stimuli.

    Science.gov (United States)

    Kuoppa, Pekka; Tarvainen, Mika P; Karhunen, Leila; Narvainen, Johanna

    2016-08-01

    Using food as a stimulus is known to cause multiple psychophysiological reactions. Heart rate variability (HRV) is a common tool for assessing physiological reactions of the autonomic nervous system. However, the findings in HRV related to food stimuli have not been consistent. In this paper the rapid changes in HRV related to positive and negative food and non-food visual stimuli are investigated. Electrocardiogram (ECG) was measured from 18 healthy females while they were stimulated with the pictures. Subjects also completed the Three-Factor Eating Questionnaire to determine their eating behavior. The inter-beat-interval time series and the HRV parameters were extracted from the ECG. The rapid change in HRV parameters was studied by calculating the change from the baseline value (10 s window before the stimulus) to the value after the onset of the stimulus (10 s window during the stimulus). The paired t-test showed a significant difference between positive and negative food pictures but not between positive and negative non-food pictures. All the HRV parameters decreased for positive food pictures while they stayed the same or increased slightly for negative food pictures. The eating behavior characteristic cognitive restraint was negatively correlated with the HRV parameters that describe the decrease in heart rate.
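
    The analysis step described above (a 10 s baseline window compared with a 10 s stimulus window) can be illustrated with a short sketch. This is a generic example using RMSSD as the HRV parameter; the paper's exact parameter set and preprocessing are not specified here, so the function names, the RMSSD choice and the windowing details are assumptions.

        import numpy as np

        def rmssd(rr_ms):
            """Root mean square of successive RR-interval differences (ms)."""
            rr_ms = np.asarray(rr_ms, dtype=float)
            return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

        def hrv_change(r_peak_times_s, stimulus_onset_s, window_s=10.0):
            """Change in RMSSD from a 10 s pre-stimulus baseline window to a
            10 s window starting at stimulus onset (positive = increase)."""
            t = np.asarray(r_peak_times_s, dtype=float)
            rr = np.diff(t) * 1000.0             # RR intervals in ms
            rr_mid = (t[:-1] + t[1:]) / 2.0      # time stamp of each interval
            baseline = rr[(rr_mid >= stimulus_onset_s - window_s) & (rr_mid < stimulus_onset_s)]
            during = rr[(rr_mid >= stimulus_onset_s) & (rr_mid < stimulus_onset_s + window_s)]
            return rmssd(during) - rmssd(baseline)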

  8. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  9. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test.

    Science.gov (United States)

    Wang, Yanfang; Ali, Zaria; Subramani, Siddharth; Biswas, Susmito; Fenerty, Cecilia; Henson, David B; Aslam, Tariq

    2017-06-01

    The aim of this study was to demonstrate and explore the ability of a novel game-based perimetry to establish normal visual field thresholds in children. One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using the game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° from fixation (N = 118). Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m², duration 200 ms, background luminance 10 cd/m²). Relationships between threshold and age were determined along with measures of intra- and intersubject variability. The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size decreased with increasing age. Intrasubject variability and intersubject variability were inversely related to age. Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
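
    A minimal sketch of the up/down staircase logic used to estimate a size threshold is given below. The step size, starting diameter and number of reversals are illustrative assumptions; the 'Caspar's Castle' test will have its own parameters and stopping rules, and the seen() callable stands in for the child's in-game response.

        def staircase_threshold(seen, start_diameter=2.0, step=0.2,
                                min_diameter=0.1, n_reversals=6):
            """Estimate a size threshold with a simple 1-up/1-down staircase.

            seen : callable taking a stimulus diameter (deg) and returning True
                   if the child responded to the stimulus.
            Returns the mean diameter at the last (up to four) reversals.
            """
            diameter = start_diameter
            direction = -1                      # start by shrinking the stimulus
            reversals = []
            while len(reversals) < n_reversals:
                response = seen(diameter)
                new_direction = -1 if response else +1
                if new_direction != direction:  # response pattern flipped
                    reversals.append(diameter)
                    direction = new_direction
                diameter = max(min_diameter, diameter + new_direction * step)
            return sum(reversals[-4:]) / min(4, len(reversals))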

  10. Selective attention reduces physiological noise in the external ear canals of humans. II: Visual attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070

  11. Schizophrenia spectrum participants have reduced visual contrast sensitivity to chromatic (red/green) and luminance (light/dark) stimuli: new insights into information processing, visual channel function and antipsychotic effects

    Directory of Open Access Journals (Sweden)

    Kristin Suzanne Cadenhead

    2013-08-01

    Full Text Available Background: Individuals with schizophrenia spectrum diagnoses have deficient visual information processing as assessed by a variety of paradigms including visual backward masking, motion perception and visual contrast sensitivity (VCS). In the present study, the VCS paradigm was used to investigate potential differences in magnocellular (M) versus parvocellular (P) channel function that might account for the observed information processing deficits of schizophrenia spectrum patients. Specifically, VCS for near-threshold luminance (black/white) stimuli is known to be governed primarily by the M channel, while VCS for near-threshold chromatic (red/green) stimuli is governed by the P channel. Methods: VCS for luminance and chromatic stimuli (counterphase-reversing sinusoidal gratings, 1.22 c/deg, 8.3 Hz) was assessed in 53 patients with schizophrenia (including 5 off antipsychotic medication), 22 individuals diagnosed with schizotypal personality disorder and 53 healthy comparison subjects. Results: Schizophrenia spectrum groups demonstrated reduced VCS in both conditions relative to normals, and there was no significant group by condition interaction effect. Post-hoc analyses suggest that it was the patients with schizophrenia on antipsychotic medication as well as SPD participants who accounted for the deficits in the luminance condition. Conclusions: These results demonstrate visual information processing deficits in schizophrenia spectrum populations but do not support the notion of selective abnormalities in the function of subcortical channels as suggested by previous studies. Further work is needed in a longitudinal design to further assess VCS as a vulnerability marker for psychosis as well as the effect of antipsychotic agents on performance in schizophrenia spectrum populations.

  12. The Effect of Visual Stimuli on Stability and Complexity of Postural Control

    Directory of Open Access Journals (Sweden)

    Haizhen Luo

    2018-02-01

    Full Text Available Visual input can either benefit balance control or increase postural sway, and the effect of visual stimuli on postural stability and its underlying mechanism are far from fully understood. In this study, the effect of different visual inputs on the stability and complexity of postural control was examined by analyzing the mean velocity (MV), SD, and fuzzy approximate entropy (fApEn) of the center of pressure (COP) signal during quiet upright standing. We designed five visual exposure conditions: eyes-closed, eyes-open (EO), and three virtual reality (VR) scenes (VR1–VR3). The VR scenes were a limited field view of an optokinetic drum rotating around the yaw (VR1), pitch (VR2), and roll (VR3) axes, respectively. Sixteen healthy subjects were involved in the experiment, and their COP trajectories were assessed from the force plate data. MV, SD, and fApEn of the COP in the anterior–posterior (AP) and medial–lateral (ML) directions were calculated. Two-way analysis of variance with repeated measures was conducted to test statistical significance. We found that all three parameters obtained the lowest values in the EO condition and the highest in the VR3 condition. We also found that the active neuromuscular intervention, indicated by fApEn, in response to changing visual exposure conditions was more adaptive in the AP direction, and that stability, indicated by SD, in the ML direction reflected the changes of visual scenes. MV was found to capture both instability and active neuromuscular control dynamics. It seems that the three parameters provide complementary information about postural control in the immersive virtual environment.
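
    For concreteness, the sketch below computes the MV and SD of a COP trace in the AP and ML directions from force-plate samples; fApEn requires a tolerance and an embedding dimension and is omitted for brevity. This is a generic postural-sway calculation, not the authors' code; the units and sampling-rate handling are assumptions.

        import numpy as np

        def cop_mv_sd(cop_ap_mm, cop_ml_mm, fs_hz):
            """Mean velocity (mm/s) and SD (mm) of the COP in AP and ML directions.

            cop_ap_mm, cop_ml_mm : COP coordinates sampled from the force plate.
            fs_hz                : sampling frequency of the force plate.
            """
            ap = np.asarray(cop_ap_mm, dtype=float)
            ml = np.asarray(cop_ml_mm, dtype=float)
            # mean velocity = total path length divided by recording duration
            mv_ap = np.sum(np.abs(np.diff(ap))) * fs_hz / (len(ap) - 1)
            mv_ml = np.sum(np.abs(np.diff(ml))) * fs_hz / (len(ml) - 1)
            return {"MV_AP": mv_ap, "MV_ML": mv_ml,
                    "SD_AP": np.std(ap, ddof=1), "SD_ML": np.std(ml, ddof=1)}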

  13. Brain processing of visual sexual stimuli in healthy men: a functional magnetic resonance imaging study.

    Science.gov (United States)

    Mouras, Harold; Stoléru, Serge; Bittoun, Jacques; Glutron, Dominique; Pélégrini-Issac, Mélanie; Paradis, Anne-Lise; Burnod, Yves

    2003-10-01

    The brain plays a central role in sexual motivation. To identify cerebral areas whose activation was correlated with sexual desire, eight healthy male volunteers were studied with functional magnetic resonance imaging (fMRI). Visual stimuli were sexually stimulating photographs (S condition) and emotionally neutral photographs (N condition). Subjective responses pertaining to sexual desire were recorded after each condition. To image the entire brain, separate runs focused on the upper and the lower parts of the brain. Statistical Parametric Mapping was used for data analysis. Subjective ratings confirmed that sexual pictures effectively induced sexual arousal. In the S condition compared to the N condition, a group analysis conducted on the upper part of the brain demonstrated an increased signal in the parietal lobes (superior parietal lobules, left intraparietal sulcus, left inferior parietal lobule, and right postcentral gyrus), the right parietooccipital sulcus, the left superior occipital gyrus, and the precentral gyri. In addition, a decreased signal was recorded in the right posterior cingulate gyrus and the left precuneus. In individual analyses conducted on the lower part of the brain, an increased signal was found in the right and/or left middle occipital gyrus in seven subjects, and in the right and/or left fusiform gyrus in six subjects. In conclusion, fMRI makes it possible to identify brain responses to visual sexual stimuli. Among activated regions in the S condition, parietal areas are known to be involved in attentional processes directed toward motivationally relevant stimuli, while frontal premotor areas have been implicated in motor preparation and motor imagery. Further work is needed to identify those specific features of the neural responses that distinguish sexual desire from other emotional and motivational states.

  14. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    Science.gov (United States)

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP-based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED display renders high-frequency waveforms with a better refresh rate. In this study, we present simple and rhythmic high-frequency sine-wave patterns with a low THD rate, delivered by LED, to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns, 3 simple (repetition of each of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35), were chosen. A hardware setup with a low THD rate was developed; detection accuracies were above 90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic patterns group with a low THD rate showed a higher accuracy rate (99.24%) than the simple patterns group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features. Subjective fatigue (VAS) was slightly, but not significantly, lower for the rhythmic patterns group [3.85 ± 2.13] compared to the simple patterns group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had high and similar accuracy rates. Rhythmic stimulus patterns showed a non-significantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli require further research for human SSVEP-BCI studies.
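
    The CCA step mentioned in the abstract is the standard SSVEP detector that correlates an EEG window against sine/cosine reference signals at each candidate frequency; a minimal sketch is given below. The number of harmonics and the scikit-learn implementation are assumptions for illustration; the authors' pipeline (and their LASSO variant) may differ.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_ssvep_detect(eeg, fs, candidate_freqs=(25.0, 30.0, 35.0), n_harmonics=2):
            """Pick the stimulation frequency whose sine/cosine references
            correlate best with a multi-channel EEG window (standard CCA-SSVEP).

            eeg : array of shape (n_samples, n_channels).
            fs  : sampling rate in Hz.
            """
            n = eeg.shape[0]
            t = np.arange(n) / fs
            scores = {}
            for f in candidate_freqs:
                refs = []
                for h in range(1, n_harmonics + 1):
                    refs.append(np.sin(2 * np.pi * h * f * t))
                    refs.append(np.cos(2 * np.pi * h * f * t))
                Y = np.column_stack(refs)
                u, v = CCA(n_components=1).fit_transform(eeg, Y)
                scores[f] = abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
            return max(scores, key=scores.get), scores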

  15. Brain Activation by Visual Food-Related Stimuli and Correlations with Metabolic and Hormonal Parameters: A fMRI Study

    NARCIS (Netherlands)

    Jakobsdottir, S.; de Ruiter, M.B.; Deijen, J.B.; Veltman, D.J.; Drent, M.L.

    2012-01-01

    Regional brain activity in 15 healthy, normal weight males during processing of visual food stimuli in a satiated and a hungry state was examined and correlated with neuroendocrine factors known to be involved in hunger and satiated states. Two functional Magnetic Resonance Imaging (fMRI) sessions

  16. Gender differences in the processing of standard emotional visual stimuli: integrating ERP and fMRI results

    Science.gov (United States)

    Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin

    2005-04-01

    The comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotional recognition as well as to follow the time sequence with millisecond-range resolution. The effect of visual stimuli from the International Affective Picture System (IAPS) on activation in the two genders was examined. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP were employed in an event-related design. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but source localization and timing are limited by the ill-posed 'inverse' problem. We investigate the ERP source reconstruction problem in this study using an fMRI constraint. We chose ICA as a pre-processing step of ERP source reconstruction to exclude artifacts and provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.

  17. Comparison of the influence of stimuli color on Steady-State Visual Evoked Potentials

    Directory of Open Access Journals (Sweden)

    Richard Junior Manuel Godinez Tello

    Full Text Available Introduction: The main idea of a traditional Steady-State Visually Evoked Potentials BCI (SSVEP-BCI) is the activation of commands through gaze control. For this purpose, the retina of the eye is excited by a stimulus at a certain frequency. Several studies have shown effects related to different kinds of stimuli, frequencies, window lengths, and techniques of feature extraction and even classification. So far, none of the previous studies has compared the performance of stimulus colors delivered through LED technology. This study addresses precisely this important aspect and contributes to the topic of SSVEP-BCIs. Additionally, the performance of different colors at different frequencies and the visual comfort were evaluated in each case. Methods: LEDs of four different colors (red, green, blue and yellow) flickering at four distinct frequencies (8, 11, 13 and 15 Hz) were used. Twenty subjects were distributed in two groups performing different protocols. The Multivariate Synchronization Index (MSI) was the technique adopted as feature extractor. Results: The accuracy was gradually enhanced with the increase of the time window. From our observations, the red color provides, at most frequencies, both the highest accuracy and the highest Information Transfer Rate (ITR) for detection of SSVEP. Conclusion: Although the red color presented a higher ITR, it turned out to be the least comfortable color and can even elicit epileptic responses according to the literature. For this reason, the green color is suggested as the best choice according to the proposed rules. In addition, this color has been shown to be safe and accurate for an SSVEP-BCI.

  18. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition

    Directory of Open Access Journals (Sweden)

    V. Serchi

    2016-01-01

    Full Text Available The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  19. Design of a Novel Servo-motorized Laser Device for Visual Pathways Diseases Therapy

    Directory of Open Access Journals (Sweden)

    Carlos Ignacio Sarmiento

    2015-12-01

    Full Text Available We discuss a novel servo-motorized laser device and a research protocol for visual pathway disease therapies. The proposed servo-mechanized laser device can be used for potential rehabilitation of patients with hemianopia, quadrantanopia, scotoma, and some types of cortical damage. The device uses a semi-spherical structure inside which the visual stimulus is shown, according to a stimulus therapy previously designed by an ophthalmologist or neurologist. The device uses one pair of servomotors (with torque = 1.5 kg), which controls the laser stimulus position for the internal therapy, and another pair for the external therapy. Using electronic tools such as microcontrollers along with miscellaneous electronic materials, combined with a LabVIEW-based interface, a control mechanism is developed for the new device. The proposed device is well suited to run various visual stimulus therapies. We outline the major design principles, including the physical dimensions, the laser device's kinematic analysis and the corresponding software development.

  20. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    Science.gov (United States)

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminant) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
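
    The additive model referred to above can be written as detection ~ b0 + b1·(luminance contrast) + b2·(saturation contrast). The sketch below fits such a model by ordinary least squares; it is a schematic stand-in (the variable names and the OLS choice are assumptions), not the authors' statistical modelling, which may have used a different link or error structure.

        import numpy as np

        def fit_additive_detection_model(luminance_contrast, saturation_contrast, detection):
            """Fit detection ~ b0 + b1*luminance_contrast + b2*saturation_contrast
            by ordinary least squares and return the coefficients [b0, b1, b2]."""
            X = np.column_stack([np.ones(len(luminance_contrast)),
                                 np.asarray(luminance_contrast, dtype=float),
                                 np.asarray(saturation_contrast, dtype=float)])
            coeffs, *_ = np.linalg.lstsq(X, np.asarray(detection, dtype=float), rcond=None)
            return coeffs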

  1. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Tae [The Catholic Magnetic Resonance Research Center, Seoul (Korea, Republic of)

    1999-12-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by means of both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and the whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual method and the auditory method. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.

  2. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    International Nuclear Information System (INIS)

    Yoo, Ic Ryung; Ahn, Kook Jin; Lee, Jae Mun; Kim, Tae

    1999-01-01

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by means of both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and the whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual method and the auditory method. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.
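
    Both records above compute lateralization indices from counts of activated pixels in the left and right hemispheres. The usual index is LI = (L - R) / (L + R), ranging from +1 (fully left-dominant) to -1 (fully right-dominant); the sketch below implements this standard form, which the abstracts do not spell out, so the exact formula and any activation threshold should be treated as assumptions.

        def lateralization_index(n_left_active, n_right_active):
            """Standard lateralization index from activated pixel counts:
            +1 = fully left-hemisphere dominant, -1 = fully right-dominant."""
            total = n_left_active + n_right_active
            if total == 0:
                raise ValueError("no activated pixels in either hemisphere")
            return (n_left_active - n_right_active) / total

        # Example: 420 left-hemisphere vs. 180 right-hemisphere activated pixels
        # gives LI = (420 - 180) / 600 = 0.4, i.e. left-hemisphere dominance.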

  3. A comparative analysis of global and local processing of hierarchical visual stimuli in young children (Homo sapiens) and monkeys (Cebus apella).

    Science.gov (United States)

    De Lillo, Carlo; Spinozzi, Giovanna; Truppa, Valentina; Naylor, Donna M

    2005-05-01

    Results obtained with preschool children (Homo sapiens) were compared with results previously obtained from capuchin monkeys (Cebus apella) in matching-to-sample tasks featuring hierarchical visual stimuli. In Experiment 1, monkeys, in contrast with children, showed an advantage in matching the stimuli on the basis of their local features. These results were replicated in a 2nd experiment in which control trials enabled the authors to rule out that children used spurious cues to solve the matching task. In a 3rd experiment featuring conditions in which the density of the stimuli was manipulated, monkeys' accuracy in the processing of the global shape of the stimuli was negatively affected by the separation of the local elements, whereas children's performance was robust across testing conditions. Children's response latencies revealed a global precedence in the 2nd and 3rd experiments. These results show differences in the processing of hierarchical stimuli by humans and monkeys that emerge early during childhood. 2005 APA, all rights reserved

  4. Stimuli-Regulated Smart Polymeric Systems for Gene Therapy

    Directory of Open Access Journals (Sweden)

    Ansuja Pulickal Mathew

    2017-04-01

    Full Text Available The physiological condition of the human body is a composite of different environments, each with its own parameters that may differ under normal, as well as diseased conditions. These environmental conditions include factors, such as pH, temperature and enzymes that are specific to a type of cell, tissue or organ or a pathological state, such as inflammation, cancer or infection. These conditions can act as specific triggers or stimuli for the efficient release of therapeutics at their destination by overcoming many physiological and biological barriers. The efficacy of conventional treatment modalities can be enhanced, side effects decreased and patient compliance improved by using stimuli-responsive material that respond to these triggers at the target site. These stimuli or triggers can be physical, chemical or biological and can be internal or external in nature. Many smart/intelligent stimuli-responsive therapeutic gene carriers have been developed that can respond to either internal stimuli, which may be normally present, overexpressed or present in decreased levels, owing to a disease, or to stimuli that are applied externally, such as magnetic fields. This review focuses on the effects of various internal stimuli, such as temperature, pH, redox potential, enzymes, osmotic activity and other biomolecules that are present in the body, on modulating gene expression by using stimuli-regulated smart polymeric carriers.

  5. Visual sexual stimuli – cue or reward? A key for interpreting brain imaging studies on human sexual behaviors

    Directory of Open Access Journals (Sweden)

    Mateusz Gola

    2016-08-01

    Full Text Available There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS) for human sexuality studies, including the emerging field of research on compulsive sexual behaviors. A central question in this field is whether behaviors such as extensive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned (cue) and unconditioned (reward) stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies, we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as cues (conditioned stimuli) or rewards (unconditioned stimuli). Here we present our own perspective, which is that in most laboratory settings VSS play the role of a reward (unconditioned stimulus), as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS, similarly as for other rewarding stimuli such as money; and/or (4) conditioning for cues (CS) predictive of VSS. We hope that this perspective paper will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  6. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    Science.gov (United States)

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regards to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  7. Cortical response tracking the conscious experience of threshold duration visual stimuli indicates visual perception is all or none

    Science.gov (United States)

    Sekar, Krithiga; Findley, William M.; Poeppel, David; Llinás, Rodolfo R.

    2013-01-01

    At perceptual threshold, some stimuli are available for conscious access whereas others are not. Such threshold inputs are useful tools for investigating the events that separate conscious awareness from unconscious stimulus processing. Here, viewing unmasked, threshold-duration images was combined with recording magnetoencephalography to quantify differences among perceptual states, ranging from no awareness to ambiguity to robust perception. A four-choice scale was used to assess awareness: “didn’t see” (no awareness), “couldn’t identify” (awareness without identification), “unsure” (awareness with low certainty identification), and “sure” (awareness with high certainty identification). Stimulus-evoked neuromagnetic signals were grouped according to behavioral response choices. Three main cortical responses were elicited. The earliest response, peaking at ∼100 ms after stimulus presentation, showed no significant correlation with stimulus perception. A late response (∼290 ms) showed moderate correlation with stimulus awareness but could not adequately differentiate conscious access from its absence. By contrast, an intermediate response peaking at ∼240 ms was observed only for trials in which stimuli were consciously detected. That this signal was similar for all conditions in which awareness was reported is consistent with the hypothesis that conscious visual access is relatively sharply demarcated. PMID:23509248

  8. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    Science.gov (United States)

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be particularly sensitive to information received from behind than both sides.

  9. Visual Sexual Stimuli-Cue or Reward? A Perspective for Interpreting Brain Imaging Findings on Human Sexual Behaviors.

    Science.gov (United States)

    Gola, Mateusz; Wordecha, Małgorzata; Marchewka, Artur; Sescousse, Guillaume

    2016-01-01

    There is an increasing number of neuroimaging studies using visual sexual stimuli (VSS), especially within the emerging field of research on compulsive sexual behaviors (CSB). A central question in this field is whether behaviors such as excessive pornography consumption share common brain mechanisms with widely studied substance and behavioral addictions. Depending on how VSS are conceptualized, different predictions can be formulated within the frameworks of Reinforcement Learning or Incentive Salience Theory, where a crucial distinction is made between conditioned and unconditioned stimuli (related to reward anticipation vs. reward consumption, respectively). Surveying 40 recent human neuroimaging studies we show existing ambiguity about the conceptualization of VSS. Therefore, we feel that it is important to address the question of whether VSS should be considered as conditioned stimuli (cue) or unconditioned stimuli (reward). Here we present our own perspective, which is that in most laboratory settings VSS play a role of reward, as evidenced by: (1) experience of pleasure while watching VSS, possibly accompanied by genital reaction; (2) reward-related brain activity correlated with these pleasurable feelings in response to VSS; (3) a willingness to exert effort to view VSS similarly as for other rewarding stimuli such as money; and (4) conditioning for cues predictive of VSS. We hope that this perspective article will initiate a scientific discussion on this important and overlooked topic and increase attention for appropriate interpretations of results of human neuroimaging studies using VSS.

  10. Olfactory cues are subordinate to visual stimuli in a neotropical generalist weevil.

    Directory of Open Access Journals (Sweden)

    Fernando Otálora-Luna

    Full Text Available The tropical root weevil Diaprepes abbreviatus is a major pest of multiple crops in the Caribbean Islands and has become a serious constraint to citrus production in the United States. Recent work has identified host and conspecific volatiles that mediate host- and mate-finding by D. abbreviatus. The interaction of light, color, and odors has not been studied in this species. The responses of male and female D. abbreviatus to narrow bandwidths of visible light emitted by LEDs offered alone and in combination with olfactory stimuli were studied in a specially-designed multiple choice arena combined with a locomotion compensator. Weevils were more attracted to wavelengths close to green and yellow compared with blue or ultraviolet, but preferred red and darkness over green. Additionally, dim green light was preferred over brighter green. Adult weevils were also attracted to the odor of its citrus host + conspecifics. However, the attractiveness of citrus + conspecific odors disappeared in the presence of a green light. Photic stimulation induced males but not females to increase their speed. In the presence of light emitted by LEDs, turning speed decreased and path straightness increased, indicating that weevils tended to walk less tortuously. Diaprepes abbreviatus showed a hierarchy between chemo- and photo-taxis in the series of experiments presented herein, where the presence of the green light abolished upwind anemotaxis elicited by the pheromone + host plant odor. Insight into the strong responses to visual stimuli of chemically stimulated insects may be provided when the amount of information supplied by vision and olfaction is compared, as the information transmission capacity of compound eyes is estimated to be several orders of magnitude higher compared with the olfactory system. Subordination of olfactory responses by photic stimuli should be considered in the design of strategies aimed at management of such insects.

  11. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    Directory of Open Access Journals (Sweden)

    Joe Bathelt

    2017-10-01

    Full Text Available Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autism-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.
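
    The own-name effect above is quantified as a mean amplitude in a fixed latency window over a channel group; a generic version of that computation is sketched below. The epoch array layout, baseline correction and channel selection are assumptions, not details taken from the study.

        import numpy as np

        def mean_window_amplitude(epochs, times_s, channel_idx,
                                  t_start=0.280, t_stop=0.320):
            """Mean ERP amplitude in a latency window over selected channels.

            epochs      : array (n_trials, n_channels, n_samples), baseline-corrected.
            times_s     : sample times in seconds relative to stimulus onset.
            channel_idx : indices of the central / right-frontal channels of interest.
            """
            window = (times_s >= t_start) & (times_s <= t_stop)
            erp = epochs.mean(axis=0)          # average over trials
            return float(erp[channel_idx][:, window].mean())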

  12. Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity.

    Directory of Open Access Journals (Sweden)

    Wu-Jie Yuan

    Full Text Available In the sensory neural system, external asynchronous stimuli play an important role in perceptual learning, associative memory and map development. However, the organization of the structure and dynamics of neural networks induced by external asynchronous stimuli is not well understood. Spike-timing-dependent plasticity (STDP) is a typical synaptic plasticity that has been extensively found in sensory systems and that has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in the response to external asynchronous stimuli, which can induce segregated pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of the structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local connectivity and sparseness, and use external currents to stimulate different spatial layers alternately. The external currents imposed alternately on spatial layers can here be regarded as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on synaptic connection weights and on the propagation dynamics in the resulting network structure. Interestingly, the resulting feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics both reflect the underlying properties of STDP. The results imply a possible important role of STDP in generating the feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed as well. Furthermore, this finding suggests a potential
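
    The pair-based STDP rule at the heart of such models updates a synaptic weight from the relative timing of pre- and postsynaptic spikes; a minimal sketch follows. The time constants, learning rates and hard weight bounds are illustrative defaults, not the values used in this paper.

        import numpy as np

        def stdp_update(w, t_pre_ms, t_post_ms, a_plus=0.01, a_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
            """Pair-based STDP: potentiate when the presynaptic spike precedes
            the postsynaptic spike, depress otherwise (spike times in ms)."""
            dt = t_post_ms - t_pre_ms
            if dt > 0:                               # pre before post -> LTP
                dw = a_plus * np.exp(-dt / tau_plus)
            else:                                    # post before pre -> LTD
                dw = -a_minus * np.exp(dt / tau_minus)
            return float(np.clip(w + dw, w_min, w_max))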

  13. Cardiorespiratory interactions to external stimuli.

    Science.gov (United States)

    Bernardi, L; Porta, C; Spicuzza, L; Sleight, P

    2005-09-01

    Respiration is a powerful modulator of heart rate variability, and of baro- or chemo-reflex sensitivity. This occurs via a mechanical effect of breathing that synchronizes all cardiovascular variables at the respiratory rhythm, particularly when this occurs at a particular slow rate coincident with the Mayer waves in arterial pressure (approximately 6 cycles/min). Recitation of the rosary prayer (or of most mantras) induces a marked enhancement of these slow rhythms, whereas random verbalization or random breathing does not. This phenomenon in turn increases baroreflex sensitivity and reduces chemoreflex sensitivity, leading to increases in parasympathetic and reductions in sympathetic activity. The opposite can be seen during either verbalization or mental stress tests. Qualitatively similar effects can be obtained even by passive listening to more or less rhythmic auditory stimuli, such as music, and the speed of the rhythm (rather than the style) appears to be one of the main determinants of the cardiovascular and respiratory responses. These findings have clinical relevance. Appropriate modulation of breathing can improve/restore autonomic control of cardiovascular and respiratory systems in relevant diseases such as hypertension and heart failure, and might therefore help improve exercise tolerance, quality of life, and ultimately, survival.

  14. Visual Monte Carlo and its application to internal and external dosimetry

    International Nuclear Information System (INIS)

    Hunt, J.G.; Silva, F.C. da; Souza-Santos, D. de; Dantas, B.M.; Azeredo, A.; Malatova, I.; Foltanova, S.; Isakson, M.

    2001-01-01

    The program Visual Monte Carlo (VMC), combined with voxel phantoms, is described in this paper, together with its application to three areas of radiation protection: calibration of in vivo measurement systems, dose calculations due to external sources of radiation, and the calculation of Specific Effective Energies. The simulation of photon transport through a voxel phantom requires a Monte Carlo program adapted to voxel geometries. VMC is written in Visual Basic™ as a Microsoft Windows based program, which is easy to use and has extensive graphic output. (orig.)

  15. Facilitation of responses by task-irrelevant complex deviant stimuli.

    Science.gov (United States)

    Schomaker, J; Meeter, M

    2014-05-01

    Novel stimuli reliably attract attention, suggesting that novelty may disrupt performance when it is task-irrelevant. However, under certain circumstances novel stimuli can also elicit a general alerting response with beneficial effects on performance. In a series of experiments we investigated whether different aspects of novelty (stimulus novelty, contextual novelty, surprise, deviance, and relative complexity) lead to distraction or facilitation. We used a version of the visual oddball paradigm in which participants responded to an occasional auditory target. Participants responded faster to this auditory target when it occurred during the presentation of novel visual stimuli than during standard stimuli, especially at SOAs of 0 and 200 ms (Experiment 1). Facilitation was absent for both infrequent simple deviants and frequent complex images (Experiment 2). However, repeated complex deviant images did facilitate responses to the auditory target at the 200 ms SOA (Experiment 3). These findings suggest that task-irrelevant deviant visual stimuli can facilitate responses to an unrelated auditory target in a short 0-200 millisecond time window after presentation. This only occurs when the deviant stimuli are complex relative to the standard stimuli. We link our findings to the novelty P3, which is generated under the same circumstances, and to the adaptive gain theory of the locus coeruleus-norepinephrine system (Aston-Jones and Cohen, 2005), which may explain the timing of the effects. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    Science.gov (United States)

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues, mainly visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. The aims were to investigate the effects of perceiving audiovisual emotional information on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  17. Steady-state VEP responses to uncomfortable stimuli.

    Science.gov (United States)

    O'Hare, Louise

    2017-02-01

    Periodic stimuli, such as op-art, can evoke a range of aversive sensations encompassed by the term visual discomfort. Illusory motion effects are elicited by fixational eye movements, but the cortex might also contribute to effects of discomfort. To investigate this possibility, steady-state visually evoked responses (SSVEPs) to contrast-matched op-art-based stimuli were measured at the same time as discomfort judgements. On average, discomfort decreased with increasing spatial frequency of the pattern. In contrast, the SSVEP response peaked at midrange spatial frequencies. Like the discomfort judgements, SSVEP responses to the highest spatial frequencies had the lowest amplitude, but the relationship between discomfort and SSVEP breaks down for the lower spatial frequency stimuli. This was not explicable by gross eye movements as measured using the facial electrodes. There was a weak relationship between the peak SSVEP responses and discomfort judgements for some stimuli, suggesting that discomfort can be explained in part by electrophysiological responses measured at the level of the cortex. However, there is a breakdown of this relationship in the case of lower spatial frequency stimuli, which remains unexplained. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Testing a Poisson Counter Model for Visual Identification of Briefly Presented, Mutually Confusable Single Stimuli in Pure Accuracy Tasks

    Science.gov (United States)

    Kyllingsbaek, Soren; Markussen, Bo; Bundesen, Claus

    2012-01-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is…
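
    The record is truncated, but the rate assumption above already lends itself to a simple simulation. The sketch below assumes, as one plausible reading not spelled out in the record, that categorization counts accumulate as independent Poisson processes over the exposure duration and that the reported category is the one with the most counts; the rate values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_trial(rates_per_ms, exposure_ms):
            """rates_per_ms[j]: Poisson rate of tentative categorizations into category j."""
            counts = rng.poisson(np.asarray(rates_per_ms) * exposure_ms)
            return int(np.argmax(counts))   # respond with the most-supported category

        # One stimulus with three mutually confusable categories; the correct category (index 0) has the highest rate.
        rates = [0.020, 0.012, 0.010]       # hypothetical v(i, j) values, counts per millisecond
        for exposure in (20, 50, 200):
            responses = [simulate_trial(rates, exposure) for _ in range(10000)]
            print(exposure, "ms:", round(float(np.mean(np.array(responses) == 0)), 2))

    Accuracy rises with exposure duration, which is the qualitative time-course behavior such a counter model is meant to capture.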

  19. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  20. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Science.gov (United States)

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  1. Visual stimuli for the P300 brain-computer interface: a comparison of white/gray and green/blue flicker matrices.

    Science.gov (United States)

    Takano, Kouji; Komatsu, Tomoaki; Hata, Naoki; Nakajima, Yasoichi; Kansaku, Kenji

    2009-08-01

    The white/gray flicker matrix has been used as a visual stimulus for the so-called P300 brain-computer interface (BCI), but the white/gray flash stimuli might induce discomfort. In this study, we investigated the effectiveness of green/blue flicker matrices as visual stimuli. Ten able-bodied, non-trained subjects performed alphabet spelling (Japanese alphabet: Hiragana) using an 8 x 10 matrix with three types of intensification/rest flicker combinations (L, luminance; C, chromatic; LC, luminance and chromatic); both online and offline performance was evaluated. The accuracy rate under the online LC condition was 80.6%. Offline analysis showed that the LC condition was associated with significantly higher accuracy than the L or C condition (Tukey-Kramer, p < 0.05). No significant difference was observed between the L and C conditions. The LC condition, which used the green/blue flicker matrix, was associated with better performance in the P300 BCI. The green/blue chromatic flicker matrix can be an efficient tool for practical BCI applications.

  2. Impact of visual repetition rate on intrinsic properties of low frequency fluctuations in the visual network.

    Directory of Open Access Journals (Sweden)

    Yi-Chia Li

    Full Text Available BACKGROUND: The visual processing network is one of the functional networks that have been reliably and consistently identified in resting human brains. In our work, we focused on this network and investigated the intrinsic properties of low frequency (0.01-0.08 Hz) fluctuations (LFFs) during changes of visual stimuli. There were two main questions in this study: intrinsic properties of LFFs regarding (1) interactions between visual stimuli and resting-state, and (2) the impact of the repetition rate of visual stimuli. METHODOLOGY/PRINCIPAL FINDINGS: We analyzed scanning sessions that contained rest and visual stimuli at various repetition rates with a novel method. The method included three numerical approaches, ICA (Independent Component Analysis), fALFF (fractional Amplitude of Low Frequency Fluctuation), and Coherence, to respectively investigate the modulations of the visual network pattern, low frequency fluctuation power, and interregional functional connectivity during changes of visual stimuli. We discovered that when resting-state was replaced by visual stimuli, more areas were involved in visual processing, and both stronger low frequency fluctuations and higher interregional functional connectivity occurred in the visual network. With changes of the visual repetition rate, the number of areas involved in visual processing, the low frequency fluctuation power, and the interregional functional connectivity in this network were also modulated. CONCLUSIONS/SIGNIFICANCE: Combining the results of prior literature with our findings, the intrinsic properties of LFFs in the visual network are altered not only by modulations of endogenous factors (eye-open or eye-closed condition; alcohol administration) and disordered behaviors (early blindness), but also by exogenous sensory stimuli (visual stimuli at various repetition rates). This demonstrates that the intrinsic properties of LFFs are valuable for representing physiological states of human brains.
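
    For readers unfamiliar with fALFF, the sketch below shows one common way it is computed for a single voxel: the sum of spectral amplitudes in the 0.01-0.08 Hz band divided by the sum over the whole available spectrum. This is a generic illustration rather than the exact pipeline used in the study; the repetition time and the time series are made up.

        import numpy as np

        def falff(signal, tr_s, band=(0.01, 0.08)):
            """Fractional ALFF: low-frequency spectral amplitude over total spectral amplitude."""
            signal = np.asarray(signal, dtype=float) - np.mean(signal)
            amplitude = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(signal.size, d=tr_s)
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            return float(amplitude[in_band].sum() / amplitude[freqs > 0].sum())

        # Hypothetical voxel time series: 240 volumes at TR = 2 s with a slow 0.03 Hz component plus noise.
        tr = 2.0
        t = np.arange(240) * tr
        bold = np.sin(2 * np.pi * 0.03 * t) + 0.5 * np.random.default_rng(1).standard_normal(t.size)
        print(round(falff(bold, tr), 3))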

  3. Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus

    Science.gov (United States)

    Lee, Jungah; Groh, Jennifer M.

    2014-01-01

    Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared to visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior. PMID:24454779
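
    The abstract does not spell out the read-out algorithm, but one simple scheme in which both the site and the level of activity contribute is a rate-weighted average of each neuron's preferred location. The sketch below shows only that generic scheme, with invented preferred azimuths and firing profiles; it is not the authors' actual decoder.

        import numpy as np

        def rate_weighted_readout(preferred_deg, rates):
            """Estimate stimulus azimuth from which neurons fire (site) and how strongly they fire (level)."""
            preferred_deg = np.asarray(preferred_deg, dtype=float)
            rates = np.asarray(rates, dtype=float)
            return float(np.sum(preferred_deg * rates) / np.sum(rates))

        preferred = np.linspace(-40, 40, 17)            # hypothetical preferred azimuths (deg)

        # Visual-like circumscribed receptive fields: a bump of activity centered near 10 deg.
        visual_like = np.exp(-0.5 * ((preferred - 10) / 8.0) ** 2)
        print(round(rate_weighted_readout(preferred, visual_like), 1))

        # Auditory-like open-ended (monotonic) rate profile: the same read-out yields a more peripheral estimate.
        auditory_like = 1.0 / (1.0 + np.exp(-preferred / 10.0))
        print(round(rate_weighted_readout(preferred, auditory_like), 1))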

  4. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control groups, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours and pragmatic language deficits, as well as peer relationship and emotional problems, on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. Stimuli-Responsive Polymeric Nanoparticles.

    Science.gov (United States)

    Liu, Xiaolin; Yang, Ying; Urban, Marek W

    2017-07-01

    There is increasing evidence that stimuli-responsive nanomaterials have become critical components of modern materials design and technological developments. Recent advances in synthesis and fabrication of stimuli-responsive polymeric nanoparticles with built-in stimuli-responsive components (Part A) and surface modifications of functional nanoparticles that facilitate responsiveness (Part B) are outlined here. The synthesis and construction of stimuli-responsive spherical, core-shell, concentric, hollow, Janus, gibbous/inverse gibbous, and cocklebur morphologies are discussed in Part A, with the focus on shape, color, or size changes resulting from external stimuli. Although inorganic/metallic nanoparticles exhibit many useful properties, including thermal or electrical conductivity, catalytic activity, or magnetic properties, their assemblies and formation of higher order constructs are often enhanced by surface modifications. Part B focuses on selected surface reactions that lead to responsiveness achieved by decorating nanoparticles with stimuli-responsive polymers. Although grafting-to and grafting-from dominate these synthetic efforts, there are opportunities for developing novel synthetic approaches facilitating controllable recognition, signaling, or sequential responses. Many nanotechnologies utilize a combination of organic and inorganic phases to produce ceramic or metallic nanoparticles. One can envision the development of new properties by combining inorganic (metals, metal oxides) and organic (polymer) phases into one nanoparticle designated as "ceramers" (inorganics) and "metamers" (metallic). © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A solution for measuring accurate reaction time to visual stimuli realized with a programmable microcontroller.

    Science.gov (United States)

    Ohyanagi, Toshio; Sengoku, Yasuhito

    2010-02-01

    This article presents a new solution for measuring accurate reaction time (SMART) to visual stimuli. The SMART is a USB device realized with a Cypress Programmable System-on-Chip (PSoC) mixed-signal array programmable microcontroller. A brief overview of the hardware and firmware of the PSoC is provided, together with the results of three experiments. In Experiment 1, we investigated the timing accuracy of the SMART in measuring reaction time (RT) under different conditions of operating systems (OSs; Windows XP or Vista) and monitor displays (a CRT or an LCD). The results indicated that the timing error in measuring RT by the SMART was less than 2 msec, on average, under all combinations of OS and display and that the SMART was tolerant to jitter and noise. In Experiment 2, we tested the SMART with 8 participants. The results indicated that there was no significant difference among RTs obtained with the SMART under the different conditions of OS and display. In Experiment 3, we used Microsoft (MS) PowerPoint to present visual stimuli on the display. We found no significant difference in RTs obtained using MS DirectX technology versus using the PowerPoint file with the SMART. We are certain that the SMART is a simple and practical solution for measuring RTs accurately. Although there are some restrictions in using the SMART with RT paradigms, the SMART is capable of providing both researchers and health professionals working in clinical settings with new ways of using RT paradigms in their work.

  7. l-Theanine and caffeine improve target-specific attention to visual stimuli by decreasing mind wandering: a human functional magnetic resonance imaging study.

    Science.gov (United States)

    Kahathuduwa, Chanaka N; Dhanasekara, Chathurika S; Chin, Shao-Hua; Davis, Tyler; Weerasinghe, Vajira S; Dassanayake, Tharaka L; Binks, Martin

    2018-01-01

    Oral intake of l-theanine and caffeine supplements is known to be associated with faster stimulus discrimination, possibly via improving attention to stimuli. We hypothesized that l-theanine and caffeine may bring about this beneficial effect by increasing attention-related neural resource allocation to target stimuli and decreasing the deviation of neural resources to distractors. We used functional magnetic resonance imaging (fMRI) to test this hypothesis. Solutions of 200 mg of l-theanine, 160 mg of caffeine, their combination, or the vehicle (distilled water; placebo) were administered in a randomized 4-way crossover design to 9 healthy adult men. Sixty minutes after administration, a 20-minute fMRI scan was performed while the subjects performed a visual color stimulus discrimination task. l-Theanine and the l-theanine-caffeine combination resulted in faster responses to targets compared with placebo (Δ = 27.8 ms, P = .018 and Δ = 26.7 ms, P = .037, respectively). l-Theanine was associated with decreased fMRI responses to distractor stimuli in brain regions that regulate visual attention, suggesting that l-theanine may decrease neural resource allocation to processing distractors, thus allowing targets to be attended to more efficiently. The l-theanine-caffeine combination was associated with decreased fMRI responses to target stimuli as compared with distractors in several brain regions that typically show increased activation during mind wandering. Factorial analysis suggested that l-theanine and caffeine have a synergistic action in decreasing mind wandering. Therefore, our hypothesis that l-theanine and caffeine may decrease the deviation of attention to distractors (including mind wandering), thus enhancing attention to target stimuli, was confirmed. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Under pressure: adolescent substance users show exaggerated neural processing of aversive interoceptive stimuli

    NARCIS (Netherlands)

    Berk, L.; Stewart, J.L.; May, A.C.; Wiers, R.W.; Davenport, P.W.; Paulus, M.P.; Tapert, S.F.

    2015-01-01

    Aims: Adolescents with substance use disorders (SUD) exhibit hyposensitivity to pleasant internally generated (interoceptive) stimuli and hypersensitivity to external rewarding stimuli. It is unclear whether similar patterns exist for aversive interoceptive stimuli. We compared activation in the

  9. Visual attention and emotional reactions to negative stimuli: The role of age and cognitive reappraisal.

    Science.gov (United States)

    Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute

    2017-09-01

    Prominent life span theories of emotion propose that older adults attend less to negative emotional information and report less negative emotional reactions to the same information than younger adults do. Although parallel age differences in affective information processing and age differences in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity, using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older adults, as compared with younger adults, showed fixation patterns directed away from negative image content, yet reacted with greater negative emotion. The association between visual attention and emotional reactivity differed by age group and positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship only held for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Abolishment of Spontaneous Flight Turns in Visually Responsive Drosophila.

    Science.gov (United States)

    Ferris, Bennett Drew; Green, Jonathan; Maimon, Gaby

    2018-01-22

    Animals react rapidly to external stimuli, such as an approaching predator, but in other circumstances, they seem to act spontaneously, without any obvious external trigger. How do the neural processes mediating the execution of reflexive and spontaneous actions differ? We studied this question in tethered, flying Drosophila. We found that silencing a large but genetically defined set of non-motor neurons virtually eliminates spontaneous flight turns while preserving the tethered flies' ability to perform two types of visually evoked turns, demonstrating that, at least in flies, these two modes of action are almost completely dissociable. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    Science.gov (United States)

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear

  12. Sex differences in interactions between nucleus accumbens and visual cortex by explicit visual erotic stimuli: an fMRI study.

    Science.gov (United States)

    Lee, S W; Jeong, B S; Choi, J; Kim, J-W

    2015-01-01

    Men tend to have greater positive responses than women to explicit visual erotic stimuli (EVES). However, it remains unclear which brain network makes men more sensitive to EVES and which factors contribute to the activity of this network. In this study, we aimed to assess the effect of sex difference on brain connectivity patterns evoked by EVES. We also investigated the association of testosterone with the brain connection that showed effects of sex difference. During functional magnetic resonance imaging scans, 14 males and 14 females were asked to view alternating blocks of pictures that were either erotic or non-erotic. Psychophysiological interaction analysis was performed to investigate the functional connectivity of the nucleus accumbens (NA) as it related to EVES. Men showed significantly greater EVES-specific functional connection between the right NA and the right lateral occipital cortex (LOC). In addition, the right NA and the right LOC network activity was positively correlated with the plasma testosterone level in men. Our results suggest that the reason men are sensitive to EVES is the increased interaction in the visual reward networks, which is modulated by their plasma testosterone level.

  13. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect, an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  14. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  15. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD.

    Science.gov (United States)

    Simões, Eunice N; Carvalho, Ana L Novais; Schmidt, Sergio L

    2018-04-01

    Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls. The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs) and omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and coefficient of variability (CofV = VRT / RT). The ADHD group exhibited higher values for all test variables. The discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. The discriminant equation classified ADHD with 76.3% accuracy. Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
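
    The five test variables are straightforward to derive from trial-level data; the sketch below shows one way to compute them, assuming a simple trial log with target flags, response flags, and reaction times (the field names are invented for illustration).

        import numpy as np

        def cpt_variables(trials):
            """trials: dicts with keys 'is_target', 'responded', 'rt_ms' (rt_ms given only for responses)."""
            oe = sum(t['is_target'] and not t['responded'] for t in trials)          # omission errors
            ce = sum((not t['is_target']) and t['responded'] for t in trials)        # commission errors
            hit_rts = np.array([t['rt_ms'] for t in trials if t['is_target'] and t['responded']], dtype=float)
            rt = float(hit_rts.mean())            # mean reaction time
            vrt = float(hit_rts.std(ddof=1))      # variability of reaction time
            return {'OE': oe, 'CE': ce, 'RT': rt, 'VRT': vrt, 'CofV': vrt / rt}

        # Tiny made-up log: three targets (one missed) and two non-targets (one false alarm).
        log = [
            {'is_target': True,  'responded': True,  'rt_ms': 420.0},
            {'is_target': True,  'responded': False, 'rt_ms': None},
            {'is_target': True,  'responded': True,  'rt_ms': 510.0},
            {'is_target': False, 'responded': True,  'rt_ms': 380.0},
            {'is_target': False, 'responded': False, 'rt_ms': None},
        ]
        print(cpt_variables(log))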

  16. Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry (2015 GDR Vision meeting)

    OpenAIRE

    Mathôt, Sebastiaan; Heusden, Elle van; Stigchel, Stefan Van der

    2015-01-01

    Slides for: Mathôt, S., Van Heusden, E., & Van der Stigchel, S. (2015, Dec). Attending and Inhibiting Stimuli That Match the Contents of Visual Working Memory: Evidence from Eye Movements and Pupillometry. Talk presented at the GDR Vision Meeting, Grenoble, France.

  17. Freezing Behavior as a Response to Sexual Visual Stimuli as Demonstrated by Posturography

    Science.gov (United States)

    Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre

    2015-01-01

    Posturographic changes under motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared with the other conditions, indexed by significantly decreased standard deviations of (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure displacement surface. These results underline the complexity of the motor correlates of sexual motivation, considered here as a canonical functional context for studying the motor correlates of motivated social interactions. PMID:25992571

  18. Happiness takes you right: the effect of emotional stimuli on line bisection.

    Science.gov (United States)

    Cattaneo, Zaira; Lega, Carlotta; Boehringer, Jana; Gallucci, Marcello; Girelli, Luisa; Carbon, Claus-Christian

    2014-01-01

    Emotion recognition is mediated by a complex network of cortical and subcortical areas, with the two hemispheres likely being differently involved in processing positive and negative emotions. As results on valence-dependent hemispheric specialisation are quite inconsistent, we carried out three experiments with emotional stimuli, using a task sensitive to hemisphere-specific processing. Participants were required to bisect visual lines that were delimited by emotional face flankers, or to haptically bisect rods while concurrently listening to emotional vocal expressions. We found that prolonged (but not transient) exposure to concurrent happy stimuli significantly shifted the bisection bias to the right compared to both sad and neutral stimuli, indexing a greater involvement of the left hemisphere in processing positively connoted stimuli. No differences between sad and neutral stimuli were observed across the experiments. In sum, our data provide consistent evidence in favour of a greater involvement of the left hemisphere in processing positive emotions and suggest that (prolonged) exposure to stimuli expressing happiness significantly affects allocation of (spatial) attentional resources, regardless of the sensory (visual/auditory) modality in which the emotion is perceived and space is explored (visual/haptic).

  19. Visual Literacy and Biochemistry Learning: The role of external representations

    Directory of Open Access Journals (Sweden)

    V.J.S.V. Santos

    2011-04-01

    Full Text Available Visual Literacy can be defined as people's ability to understand, use, think, learn and express themselves through external representations (ERs) in a given subject. This research aims to investigate the development of students' abilities to read and interpret ERs in a Biochemistry course at the Federal University of São João Del-Rei. Visual Literacy level was assessed using a questionnaire validated in previous educational research. This diagnostic questionnaire was elaborated according to six visual abilities identified as essential for the study of metabolic pathways. The initial statistical analysis of the data collected in this study was carried out using the ANOVA method. The results showed that the questionnaire is adequate for the research and indicated that the level of Visual Literacy related to metabolic processes increased significantly as the students progressed through the course. There was also an indication of possible interference with student performance determined by the cutoff score of the university selection process.

  20. Emergence of ultrafast sparsely synchronized rhythms and their responses to external stimuli in an inhomogeneous small-world complex neuronal network.

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2017-09-01

    We consider an inhomogeneous small-world network (SWN) composed of inhibitory short-range (SR) and long-range (LR) interneurons, and investigate the effect of network architecture on emergence of synchronized brain rhythms by varying the fraction of LR interneurons, p_long. The betweenness centralities of the LR and SR interneurons (characterizing the potentiality in controlling communication between other interneurons) are distinctly different. Hence, in view of the betweenness, SWNs we consider are inhomogeneous, unlike the "canonical" Watts-Strogatz SWN with nearly the same betweenness centralities. For small p_long, the load of communication traffic is much concentrated on a few LR interneurons. However, as p_long is increased, the number of LR connections (coming from LR interneurons) increases, and then the load of communication traffic is less concentrated on LR interneurons, which leads to better efficiency of global communication between interneurons. Sparsely synchronized rhythms are thus found to emerge when passing a small critical value p_long^(c) (≃0.16). The population frequency of the sparsely synchronized rhythm is ultrafast (higher than 100 Hz), while the mean firing rate of individual interneurons is much lower (∼30 Hz) due to stochastic and intermittent neural discharges. These dynamical behaviors in the inhomogeneous SWN are also compared with those in the homogeneous Watts-Strogatz SWN, in connection with their network topologies. Particularly, we note that the main difference between the two types of SWNs lies in the distribution of betweenness centralities. Unlike the case of the Watts-Strogatz SWN, dynamical responses to external stimuli vary depending on the type of stimulated interneurons in the inhomogeneous SWN. We consider two cases of external time-periodic stimuli applied to sub-populations of the LR and SR interneurons, respectively. Dynamical responses (such as synchronization suppression and enhancement) to these two cases of
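
    The betweenness-centrality comparison above can be made concrete with a few lines of network code. The sketch below builds the canonical Watts-Strogatz small-world graph mentioned in the abstract and computes betweenness centralities with networkx; it only illustrates the quantity being compared and does not reconstruct the authors' inhomogeneous SR/LR network. The network size and parameters are arbitrary.

        import networkx as nx

        # Canonical Watts-Strogatz small-world network: n nodes, k nearest neighbors, rewiring probability p.
        G = nx.watts_strogatz_graph(n=200, k=10, p=0.15, seed=0)

        # Betweenness centrality: the fraction of shortest paths passing through each node,
        # i.e. its potential for controlling communication between the other nodes.
        bc = nx.betweenness_centrality(G)
        values = sorted(bc.values(), reverse=True)
        print("max betweenness:", round(values[0], 4), " mean betweenness:", round(sum(values) / len(values), 4))

    In the canonical graph the centralities are fairly uniform; in the inhomogeneous network described above, the LR interneurons would carry disproportionately high betweenness, especially at small p_long.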

  1. Musical Brains. A study of evoked musical sensations without external auditory stimuli. Preliminary report of three cases

    International Nuclear Information System (INIS)

    Goycoolea, Marcos V; Mena, Ismael; Neubauer, Sonia G; Levy, Raquel G.; Fernandez Grez, Margarita; Berger, Claudia G

    2006-01-01

    Background: There are individuals, usually musicians, who are seemingly able to evoke musical sensations without external auditory stimuli. However, to date there is no available evidence to determine whether it is feasible to have musical sensations without using external sensory receptors, or whether there is a biological substrate to these sensations. Study design: Two single photon emission computerized tomography (SPECT) evaluations with [99mTc]-HMPAO were conducted in each of three female musicians. One was done under basal conditions (without evoking) and the other while evoking these sensations. Results: In the NeuroSPECT studies of the musicians who were tested while evoking a musical composition, there was a significant increase in perfusion above the normal mean in the right and left hemispheres in Brodmann's areas 9 and 8 (frontal executive area) and in area 40 on the left side (auditory center). However, under basal conditions there was no hyperperfusion of areas 9, 8, 39 and 40. In one case, hyperperfusion was found under basal conditions in area 45; however, it was less than when she was evoking. Conclusions: These findings are suggestive of a biological substrate to the process of evoking musical sensations (au)

  2. Prestimulus neural oscillations inhibit visual perception via modulation of response gain.

    Science.gov (United States)

    Chaumon, Maximilien; Busch, Niko A

    2014-11-01

    The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations (but not more anterior mu oscillations) reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.
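
    To make the contrast-gain versus response-gain distinction concrete, the sketch below uses a Naka-Rushton contrast response function, a common choice in this literature (the abstract does not name the exact function fitted): contrast gain shifts the semi-saturation contrast c50 along the contrast axis, whereas response gain divisively scales the maximum response Rmax, producing the largest absolute losses at high contrasts, which is the pattern reported above. All parameter values are illustrative.

        import numpy as np

        def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
            """Contrast response function R(c) = Rmax * c^n / (c^n + c50^n)."""
            c = np.asarray(c, dtype=float)
            return r_max * c**n / (c**n + c50**n)

        contrasts = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])
        baseline = naka_rushton(contrasts)
        contrast_gain = naka_rushton(contrasts, c50=0.3)    # curve shifted along the contrast axis
        response_gain = naka_rushton(contrasts, r_max=0.7)  # whole curve divisively scaled down

        for c, b, cg, rg in zip(contrasts, baseline, contrast_gain, response_gain):
            print(f"c={c:.2f}  baseline={b:.3f}  contrast-gain={cg:.3f}  response-gain={rg:.3f}")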

  3. Selective attention to spatial and non-spatial visual stimuli is affected differentially by age: Effects on event-related brain potentials and performance data

    NARCIS (Netherlands)

    Talsma, D.; Kok, Albert; Ridderinkhof, K. Richard

    2006-01-01

    To assess selective attention processes in young and old adults, behavioral and event-related potential (ERP) measures were recorded. Streams of visual stimuli were presented from left or right locations (Experiment 1) or from a central location and comprising two different spatial frequencies

  4. Consistent phosphenes generated by electrical microstimulation of the visual thalamus. An experimental approach for thalamic visual neuroprostheses

    Directory of Open Access Journals (Sweden)

    Fivos ePanetsos

    2011-07-01

    Full Text Available Most work on visual prostheses has centred on developing retinal or cortical devices. However, when retinal implants are not feasible, neuroprostheses could be implanted in the lateral geniculate nucleus (LGN) of the thalamus, the intermediate relay station of visual information from the retina to the visual cortex (V1). The objective of the present study was to determine the types of artificial stimuli that, when delivered to the visual thalamus, can generate reliable responses of cortical neurons similar to those obtained when the eye perceives a visual image. Visual stimuli {S_i} were presented to one eye of an experimental animal, and both the thalamic responses {RTh_i} and the cortical responses {RV1_i} to these stimuli were recorded. Electrical patterns {RTh_i*} resembling {RTh_i} were then injected into the visual thalamus to obtain cortical responses {RV1_i*} similar to {RV1_i}. Visually and electrically generated V1 responses were compared. Results: During the course of this work we (i) characterised the response of V1 neurons to visual stimuli according to response magnitude, duration, spiking rate and the distribution of interspike intervals; (ii) experimentally tested the dependence of V1 responses on stimulation parameters such as intensity, frequency and duration, and determined the ranges of these parameters generating the desired cortical activity; (iii) identified similarities between V1 responses that are useful for comparing the naturally and artificially generated neuronal activity of V1; and (iv) by modifying the stimulation parameters, generated artificial V1 responses similar to those elicited by visual stimuli. Generation of predictable and consistent phosphenes by means of artificial stimulation of the LGN is important for the feasibility of visual prostheses. Here we proved that electrical stimuli to the LGN can generate V1 neural responses that resemble those elicited by natural visual stimuli.
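
    Point (i) lists standard spike-train descriptors; the sketch below shows how such descriptors can be computed from a list of spike times within one response window. It is a generic illustration with made-up spike times, not the study's analysis code.

        import numpy as np

        def describe_response(spike_times_ms, window_ms):
            """Response magnitude, duration, mean firing rate, and interspike-interval statistics."""
            spikes = np.sort(np.asarray(spike_times_ms, dtype=float))
            isis = np.diff(spikes)
            return {
                'magnitude_spikes': int(spikes.size),
                'duration_ms': float(spikes[-1] - spikes[0]) if spikes.size > 1 else 0.0,
                'rate_hz': spikes.size / (window_ms / 1000.0),
                'isi_mean_ms': float(isis.mean()) if isis.size else float('nan'),
                'isi_cv': float(isis.std() / isis.mean()) if isis.size else float('nan'),
            }

        # Hypothetical V1 response to one stimulus presentation (spike times in ms, 500 ms window).
        print(describe_response([38, 52, 61, 75, 90, 118, 160, 240], window_ms=500))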

  5. Enhanced ERPs to visual stimuli in unaffected male siblings of ASD children.

    Science.gov (United States)

    Anzures, Gizelle; Goyet, Louise; Ganea, Natasa; Johnson, Mark H

    2016-01-01

    Autism spectrum disorders are characterized by deficits in social and communication abilities. While unaffected relatives lack severe deficits, milder impairments have been reported in some first-degree relatives. The present study sought to verify whether mild deficits in face perception are evident among the unaffected younger siblings of children with ASD. Children between 6 and 9 years of age completed a face-recognition task and a passive-viewing ERP task with face and house stimuli. Sixteen children were typically developing with no family history of ASD, and 17 were unaffected children with an older sibling with ASD. Findings indicate that, while unaffected siblings are comparable to controls in their face-recognition abilities, unaffected male siblings in particular show relatively enhanced P100 and P100-N170 peak-to-peak amplitude responses to faces and houses. Enhanced ERPs among unaffected male siblings are discussed in relation to potential differences in neural network recruitment during visual and face processing.

  6. Food-related attentional bias. Word versus pictorial stimuli and the importance of stimuli calorific value in the dot probe task.

    Science.gov (United States)

    Freijy, Tanya; Mullan, Barbara; Sharpe, Louise

    2014-12-01

    The primary aim of this study was to extend previous research on food-related attentional biases by examining biases towards pictorial versus word stimuli, and foods of high versus low calorific value. It was expected that participants would demonstrate greater biases to pictures over words, and to high-calorie over low-calorie foods. A secondary aim was to examine associations between BMI, dietary restraint, external eating and attentional biases. It was expected that high scores on these individual difference variables would be associated with a bias towards high-calorie stimuli. Undergraduates (N = 99) completed a dot probe task including matched word and pictorial food stimuli in a controlled setting. Questionnaires assessing eating behaviour were administered, and height and weight were measured. Contrary to predictions, there were no main effects for stimuli type (pictures vs words) or calorific value (high vs low). There was, however, a significant interaction effect suggesting a bias towards high-calorie pictures, but away from high-calorie words; and a bias towards low-calorie words, but away from low-calorie pictures. No associations between attentional bias and any of the individual difference variables were found. The presence of a stimulus type by calorific value interaction demonstrates the importance of stimuli type in the dot probe task, and may help to explain inconsistencies in prior research. Further research is needed to clarify associations between attentional bias and BMI, restraint, and external eating. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Stimuli-responsive nanomaterials for therapeutic protein delivery.

    Science.gov (United States)

    Lu, Yue; Sun, Wujin; Gu, Zhen

    2014-11-28

    Protein therapeutics have come to play a significant role in the treatment of a broad spectrum of diseases, including cancer, metabolic disorders and autoimmune diseases. The efficacy of protein therapeutics, however, is limited by their instability, immunogenicity and short half-life. In order to overcome these barriers, tremendous efforts have recently been made in developing controlled protein delivery systems. Stimuli-triggered release is an appealing and promising approach for protein delivery and has made protein delivery possible in both a spatiotemporally controlled and a dosage-controlled manner. This review surveys recent advances in the controlled delivery of proteins or peptides using stimuli-responsive nanomaterials. Strategies utilizing both physiological and external stimuli are introduced and discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Movement Induces the Use of External Spatial Coordinates for Tactile Localization in Congenitally Blind Humans.

    Science.gov (United States)

    Heed, Tobias; Möller, Johanna; Röder, Brigitte

    2015-01-01

    To localize touch, the brain integrates spatial information coded in anatomically based and external spatial reference frames. Sighted humans, by default, use both reference frames in tactile localization. In contrast, congenitally blind individuals have been reported to rely exclusively on anatomical coordinates, suggesting a crucial role of the visual system for tactile spatial processing. We tested whether the use of external spatial information in touch can, alternatively, be induced by a movement context. Sighted and congenitally blind humans performed a tactile temporal order judgment task that indexes the use of external coordinates for tactile localization, while they executed bimanual arm movements with uncrossed and crossed start and end postures. In the sighted, start posture and planned end posture of the arm movement modulated tactile localization for stimuli presented before and during movement, indicating automatic, external recoding of touch. Contrary to previous findings, tactile localization of congenitally blind participants, too, was affected by external coordinates, though only for stimuli presented before movement start. Furthermore, only the movement's start posture, but not the planned end posture affected blind individuals' tactile performance. Thus, integration of external coordinates in touch is established without vision, though more selectively than when vision has developed normally, and possibly restricted to movement contexts. The lack of modulation by the planned posture in congenitally blind participants suggests that external coordinates in this group are not mediated by motor efference copy. Instead the task-related frequent posture changes, that is, movement consequences rather than planning, appear to have induced their use of external coordinates.

  9. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    Science.gov (United States)

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  10. [Effects of visual optical stimuli for accommodation-convergence system on asthenopia].

    Science.gov (United States)

    Iwasaki, Tsuneto; Tawara, Akihiko; Miyake, Nobuyuki

    2006-01-01

    We investigated the effect on eyestrain of optical stimuli that we designed for the accommodation and convergence systems. Eight female students were given the optical stimuli for 1.5 min immediately after 20 min of a sustained task on a 3-D display. Before and after the trial, their ocular functions were measured and their symptoms were assessed. The optical stimuli were applied by moving scenery-image targets far and near around the far-point position of both eyes on a horizontal plane, which induced divergence toward the resting eye position. In a control group, subjects rested with closed eyes for 1.5 min instead of receiving the optical stimuli. In the control group, after the closed-eye rest there were significant changes in the accommodative contraction time (far to near; from 1.26 s to 1.62 s), the accommodative relaxation time (near to far; from 1.49 s to 1.63 s), and the lag of accommodation at the near target (from 0.5 D to 0.65 D), as well as in symptoms. In the stimulus group, however, the changes in these functions were smaller than in the control group. From these results, we suggest that the optical stimuli we designed for the accommodation and convergence systems are effective against asthenopia caused by accommodative dysfunction.

  11. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study

    Directory of Open Access Journals (Sweden)

    José Salomão Schwartzman

    2015-05-01

    Full Text Available Objective: To compare visual fixation on social stimuli in patients with Rett syndrome (RS) and autism spectrum disorders (ASD). Method: Visual fixation on social stimuli was analyzed in 14 female RS patients (age range 4-30 years), 11 male ASD patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli), each presented for 8 seconds on the screen of a computer attached to eye-tracking equipment. Results: The percentage of visual fixation on social stimuli was significantly higher in the RS group than in the ASD and even the TD groups. Conclusion: Visual fixation on social stimuli seems to be one more endophenotype that makes RS very different from ASD.

  12. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    Science.gov (United States)

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. Here we circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head had been stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Binocular Combination of Second-Order Stimuli

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180
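
    As a point of reference for the linearity claim above, results in the binocular phase combination paradigm are commonly compared against a simple linear-summation benchmark; the equation below is that standard textbook prediction, not a formula quoted from this record. If the two eyes receive gratings of the same spatial frequency at phases +\theta/2 and -\theta/2, with interocular contrast ratio \delta (contrast in the eye viewing the -\theta/2 grating divided by contrast in the eye viewing the +\theta/2 grating), linear summation predicts a perceived phase

        \phi = \arctan\!\left[\frac{1-\delta}{1+\delta}\,\tan\frac{\theta}{2}\right]

    so a balanced system (\delta = 1) perceives \phi = 0, consistent with the balance point near 1 reported above. Saying that second-order combination is "more linear" means the measured phase-versus-\delta curve lies closer to this prediction than has previously been found for first-order stimuli.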

  14. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  15. Attentional Capture by Emotional Stimuli Is Modulated by Semantic Processing

    Science.gov (United States)

    Huang, Yang-Ming; Baddeley, Alan; Young, Andrew W.

    2008-01-01

    The attentional blink paradigm was used to examine whether emotional stimuli always capture attention. The processing requirement for emotional stimuli in a rapid sequential visual presentation stream was manipulated to investigate the circumstances under which emotional distractors capture attention, as reflected in an enhanced attentional blink…

  16. Visual Attention to Pictorial Food Stimuli in Individuals With Night Eating Syndrome: An Eye-Tracking Study.

    Science.gov (United States)

    Baldofski, Sabrina; Lüthold, Patrick; Sperling, Ingmar; Hilbert, Anja

    2018-03-01

    Night eating syndrome (NES) is characterized by excessive evening and/or nocturnal eating episodes. Studies indicate an attentional bias towards food in other eating disorders. For NES, however, evidence of attentional food processing is lacking. Attention towards food and non-food stimuli was compared using eye-tracking in 19 participants with NES and 19 matched controls without eating disorders during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in initial fixation position or gaze duration. However, a significant orienting bias to food compared to non-food was found within the NES group, but not in controls. A significant attentional maintenance bias to non-food compared to food was found in both groups. Detection times did not differ between groups in the search task. Only in NES, attention to and faster detection of non-food stimuli were related to higher BMI and more evening eating episodes. The results might indicate an attentional approach-avoidance pattern towards food in NES. However, further studies should clarify the implications of attentional mechanisms for the etiology and maintenance of NES. Copyright © 2017. Published by Elsevier Ltd.

  17. Neural correlates of visual aesthetics--beauty as the coalescence of stimulus and internal state.

    Science.gov (United States)

    Jacobs, Richard H A H; Renken, Remco; Cornelissen, Frans W

    2012-01-01

    How do external stimuli and our internal state coalesce to create the distinctive aesthetic pleasures that give vibrance to human experience? Neuroaesthetics has so far focused on the neural correlates of observing beautiful stimuli compared to neutral or ugly stimuli, or on neural correlates of judging for beauty as opposed to other judgments. Our group questioned whether this approach is sufficient. In our view, a brain region that assesses beauty should show beauty-level-dependent activation during the beauty judgment task, but not during other, unrelated tasks. We therefore performed an fMRI experiment in which subjects judged visual textures for beauty, naturalness and roughness. Our focus was on finding brain activation related to the rated beauty level of the stimuli, which would take place exclusively during the beauty judgment. An initial whole-brain analysis did not reveal such interactions, yet a number of the regions showing main effects of the judgment task or the beauty level of stimuli were selectively sensitive to beauty level during the beauty task. Of the regions that were more active during beauty judgments than roughness judgments, the frontomedian cortex and the amygdala demonstrated the hypothesized interaction effect, while the posterior cingulate cortex did not. The latter region, which only showed a task effect, may play a supporting role in beauty assessments, such as attending to one's internal state rather than the external world. Most of the regions showing interaction effects of judgment and beauty level correspond to regions that have previously been implicated in aesthetics using different stimulus classes, but based on either task or beauty effects alone. The fact that we have now shown that task-stimulus interactions are also present during the aesthetic judgment of visual textures implies that these areas form a network that is specifically devoted to aesthetic assessment, irrespective of the stimulus type.

  18. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  19. Enhanced pain and autonomic responses to ambiguous visual stimuli in chronic Complex Regional Pain Syndrome (CRPS) type I.

    Science.gov (United States)

    Cohen, H E; Hall, J; Harris, N; McCabe, C S; Blake, D R; Jänig, W

    2012-02-01

    Cortical reorganisation of sensory, motor and autonomic systems can lead to dysfunctional central integrative control. This may contribute to signs and symptoms of Complex Regional Pain Syndrome (CRPS), including pain. It has been hypothesised that central neuroplastic changes may cause afferent sensory feedback conflicts and produce pain. We investigated autonomic responses produced by ambiguous visual stimuli (AVS) in CRPS, and their relationship to pain. Thirty CRPS patients with upper limb involvement and 30 age and sex matched healthy controls had sympathetic autonomic function assessed using laser Doppler flowmetry of the finger pulp at baseline and while viewing a control figure or AVS. Compared to controls, there were diminished vasoconstrictor responses and a significant difference in the ratio of response between affected and unaffected limbs (symmetry ratio) to a deep breath and viewing AVS. While viewing visual stimuli, 33.5% of patients had asymmetric vasomotor responses and all healthy controls had a homologous symmetric pattern of response. Nineteen (61%) CRPS patients had enhanced pain within seconds of viewing the AVS. All the asymmetric vasomotor responses were in this group, and were not predictable from baseline autonomic function. Ten patients had accompanying dystonic reactions in their affected limb: 50% were in the asymmetric sub-group. In conclusion, there is a group of CRPS patients that demonstrate abnormal pain networks interacting with central somatomotor and autonomic integrational pathways. © 2011 European Federation of International Association for the Study of Pain Chapters.

  20. Consuming Almonds vs. Isoenergetic Baked Food Does Not Differentially Influence Postprandial Appetite or Neural Reward Responses to Visual Food Stimuli.

    Science.gov (United States)

    Sayer, R Drew; Dhillon, Jaapna; Tamer, Gregory G; Cornier, Marc-Andre; Chen, Ningning; Wright, Amy J; Campbell, Wayne W; Mattes, Richard D

    2017-07-27

    Nuts have high energy and fat contents, but nut intake does not promote weight gain or obesity, which may be partially explained by their proposed high satiety value. The primary aim of this study was to assess the effects of consuming almonds versus a baked food on postprandial appetite and neural responses to visual food stimuli. Twenty-two adults (19 women and 3 men) with a BMI between 25 and 40 kg/m² completed the current study during a 12-week behavioral weight loss intervention. Participants consumed either 28 g of whole, lightly salted roasted almonds or a serving of a baked food with equivalent energy and macronutrient contents in random order on two testing days prior to and at the end of the intervention. Pre- and postprandial appetite ratings and functional magnetic resonance imaging scans were completed on all four testing days. Postprandial hunger, desire to eat, fullness, and neural responses to visual food stimuli were not different following consumption of almonds and the baked food, nor were they influenced by weight loss. These results support energy and macronutrient contents as principal determinants of postprandial appetite and do not support a unique satiety effect of almonds independent of these variables.

  1. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    Science.gov (United States)

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  2. Lack of multisensory integration in hemianopia: no influence of visual stimuli on aurally guided saccades to the blind hemifield.

    Directory of Open Access Journals (Sweden)

    Antonia F Ten Brink

    Full Text Available In patients with visual hemifield defects residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia.

  3. Comparison of Gated Audiovisual Speech Identification in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals: Effects of Adding Visual Cues to Auditory Speech Stimuli.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker

    2016-06-17

    The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.

  4. Iconic-Memory Processing of Unfamiliar Stimuli by Retarded and Nonretarded Individuals.

    Science.gov (United States)

    Hornstein, Henry A.; Mosley, James L.

    1979-01-01

    The iconic-memory processing of unfamiliar stimuli by 11 mentally retarded males (mean age 22 years) was undertaken employing a visually cued partial-report procedure and a visual masking procedure. (Author/CL)

  5. Swimming micro-robot powered by stimuli-sensitive gel

    Science.gov (United States)

    Masoud, Hassan; Alexeev, Alexander

    2012-11-01

    Using three-dimensional computer simulations, we design a simple maneuverable micro-swimmer that can self-propel and navigate in highly viscous (low Reynolds-number) environments. Our simple swimmer consists of a cubic gel body which periodically changes volume in response to external stimuli, two rigid rectangular flaps attached to the opposite sides of the gel body, and a flexible steering flap at the front end of the swimmer. The stimuli-sensitive body undergoes periodic expansions (swelling) and contractions (deswelling) leading to a time-irreversible beating motion of the propulsive flaps that propel the micro-swimmer. Thus, the responsive gel body acts as an ``engine'' actuating the motion of the swimmer. We examine how the swimming speed depends on the gel and flap properties. We also probe how the swimmer trajectory can be changed using a responsive steering flap whose curvature is controlled by an external stimulus. We show that the turning occurs due to steering flap bending and periodic beating. Furthermore, our simulations reveal that the turning direction can be regulated by changing the intensity of external stimulus.

  6. Neural Correlates of Visual Aesthetics – Beauty as the Coalescence of Stimulus and Internal State

    Science.gov (United States)

    Jacobs, Richard H. A. H.; Renken, Remco; Cornelissen, Frans W.

    2012-01-01

    How do external stimuli and our internal state coalesce to create the distinctive aesthetic pleasures that give vibrance to human experience? Neuroaesthetics has so far focused on the neural correlates of observing beautiful stimuli compared to neutral or ugly stimuli, or on neural correlates of judging for beauty as opposed to other judgments. Our group questioned whether this approach is sufficient. In our view, a brain region that assesses beauty should show beauty-level-dependent activation during the beauty judgment task, but not during other, unrelated tasks. We therefore performed an fMRI experiment in which subjects judged visual textures for beauty, naturalness and roughness. Our focus was on finding brain activation related to the rated beauty level of the stimuli, which would take place exclusively during the beauty judgment. An initial whole-brain analysis did not reveal such interactions, yet a number of the regions showing main effects of the judgment task or the beauty level of stimuli were selectively sensitive to beauty level during the beauty task. Of the regions that were more active during beauty judgments than roughness judgments, the frontomedian cortex and the amygdala demonstrated the hypothesized interaction effect, while the posterior cingulate cortex did not. The latter region, which only showed a task effect, may play a supporting role in beauty assessments, such as attending to one's internal state rather than the external world. Most of the regions showing interaction effects of judgment and beauty level correspond to regions that have previously been implicated in aesthetics using different stimulus classes, but based on either task or beauty effects alone. The fact that we have now shown that task-stimulus interactions are also present during the aesthetic judgment of visual textures implies that these areas form a network that is specifically devoted to aesthetic assessment, irrespective of the stimulus type. PMID:22384006

  7. A Wider Look at Visual Discomfort

    Directory of Open Access Journals (Sweden)

    L O'Hare

    2012-07-01

    Full Text Available Visual discomfort refers to the adverse effects reported by some on viewing certain stimuli, such as stripes and certain filtered noise patterns. Stimuli that deviate from natural image statistics might be encoded inefficiently, which could cause discomfort (Juricevic, Land, Wilkins and Webster, 2010, Perception, 39(7), 884–899), possibly through excessive cortical responses (Wilkins, 1995, Visual Stress, Oxford, Oxford University Press). A less efficient visual system might exacerbate the effects of difficult stimuli. Extreme examples are seen in epilepsy and migraines (Wilkins, Bonnanni, Prociatti, Guerrini, 2004, Epilepsia, 45, 1–7; Aurora and Wilkinson, 2007, Cephalalgia, 27(12), 1422–1435). However, similar stimuli are also seen as uncomfortable by non-clinical populations, eg, striped patterns (Wilkins et al, 1984, Brain, 107(4)). We propose that oversensitivity of clinical populations may represent extreme examples of visual discomfort in the general population. To study the prevalence and impact of visual discomfort in a wider context than typically studied, an Internet-based survey was conducted, including standardised questionnaires measuring visual discomfort susceptibility (Conlon, Lovegrove, Chekaluk and Pattison, 1999, Visual Cognition, 6(6), 637–663; Evans and Stevenson, 2008, Ophthal Physiol Opt, 28(4), 295–309) and judgments of visual stimuli, such as striped patterns (Wilkins et al, 1984) and filtered noise patterns (Fernandez and Wilkins, 2008, Perception, 37(7), 1098–1013). Results show few individuals reporting high visual discomfort, contrary to other researchers (eg, Conlon et al, 1999).

  8. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension

  9. Visual perception and imagery: a new molecular hypothesis.

    Science.gov (United States)

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells that are not part of haphazard process, but rather a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representation) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. The long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated

  10. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
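
    For readers unfamiliar with the term, "normalization" here is the canonical divisive computation of the vision literature; the general form below is the standard textbook expression, not an equation taken from this study. The response of unit i is its own drive divided by the summed activity of a normalization pool:

        R_i = \gamma\,\frac{d_i^{\,n}}{\sigma^{\,n} + \sum_j d_j^{\,n}}

    where the d_j are stimulus drives, n is an exponent, \sigma is a semi-saturation constant, and \gamma scales the output. The finding above is that perceptual contrast representations interact through this kind of divisive pooling, whereas representations held in visual working memory appear not to.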

  11. The effect of internal and external fields of view on visually induced motion sickness

    NARCIS (Netherlands)

    Bos, J.E.; Vries, S.C. de; Emmerik, M.L. van; Groen, E.L.

    2010-01-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between

  12. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    Science.gov (United States)

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of facial expressions of emotion surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    Science.gov (United States)

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or lack reliability. Additionally, most existing tests do not allow full control over important characteristics, such as off-angle rotations, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) color limitation of standard cathode ray tube (CRT) monitors has been preclusive. To address this limitation, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created based on both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance in sine and square-wave image sets across a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses in the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.
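
    The abstract only sketches the idea, but one plausible reading of "mathematical and stochastic components" is stochastic (pointillist) rounding: each pixel of an ideal low-contrast grating is quantized up or down at random so that regional mean luminance, rather than individual pixel values, carries the intended contrast. The Python sketch below illustrates that reading; it is not the author's MATLAB routine, and the function name and parameters are illustrative only.

        import numpy as np

        def dithered_grating(width=512, height=512, cycles=8, contrast=0.002,
                             mean_level=127.5, rng=None):
            """Low-contrast sine-wave grating approximated on an 8-bit display
            by stochastic rounding of each pixel ("pointillism")."""
            rng = np.random.default_rng() if rng is None else rng
            x = np.arange(width)
            # Ideal, continuous luminance profile in gray-level units.
            ideal = mean_level * (1.0 + contrast * np.sin(2 * np.pi * cycles * x / width))
            ideal = np.tile(ideal, (height, 1))
            lower = np.floor(ideal)
            frac = ideal - lower               # distance to the next gray level
            # Round up with probability equal to that distance, so the regional
            # mean matches the ideal value even though every pixel takes one of
            # only 256 discrete levels.
            image = lower + (rng.random(ideal.shape) < frac)
            return image.astype(np.uint8)

        grating = dithered_grating()           # contrast well below one gray step
        print(grating.mean(axis=0)[:8])        # column means track the sine wave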

  14. Loss of variation of state detected in soybean metabolic and human myelomonocytic leukaemia cell transcriptional networks under external stimuli

    KAUST Repository

    Sakata, Katsumi

    2016-10-24

    Soybean (Glycine max) is sensitive to flooding stress, and flood damage at the seedling stage is a barrier to growth. We constructed two mathematical models of the soybean metabolic network, a control model and a flooded model, from metabolic profiles in soybean plants. We simulated the metabolic profiles with perturbations before and after the flooding stimulus using the two models. We measured the variation of state that the system could maintain from a state–space description of the simulated profiles. The results showed a loss of variation of state during the flooding response in the soybean plants. Loss of variation of state was also observed in a human myelomonocytic leukaemia cell transcriptional network in response to a phorbol-ester stimulus. Thus, we detected a loss of variation of state under external stimuli in two biological systems, regardless of the regulation and stimulus types. Our results suggest that a loss of robustness may occur concurrently with the loss of variation of state in biological systems. We describe the possible applications of the quantity of variation of state in plant genetic engineering and cell biology. Finally, we present a hypothetical “external stimulus-induced information loss” model of biological systems.
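
    The abstract does not define "variation of state" precisely, so the toy sketch below is only a loose illustration of the general idea under an explicit assumption: that the quantity can be proxied by the spread of state trajectories a dynamical model maintains under repeated small perturbations, with a strongly contracting ("stressed") system maintaining less spread than a weakly contracting ("control") one. The model, parameters, and metric are hypothetical and are not the authors' metabolic or transcriptional networks.

        import numpy as np

        rng = np.random.default_rng(0)

        def maintained_variation(decay, n_steps=500, n_vars=5, noise=0.05):
            """Spread of states a linear system x_{t+1} = decay * x_t + noise
            maintains under continual small perturbations (toy proxy for
            'variation of state')."""
            x = np.zeros(n_vars)
            states = []
            for _ in range(n_steps):
                x = decay * x + noise * rng.standard_normal(n_vars)
                states.append(x.copy())
            return np.var(np.array(states), axis=0).sum()

        print("control-like (weak contraction):  ", maintained_variation(decay=0.95))
        print("stressed-like (strong contraction):", maintained_variation(decay=0.30))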

  15. Loss of variation of state detected in soybean metabolic and human myelomonocytic leukaemia cell transcriptional networks under external stimuli

    KAUST Repository

    Sakata, Katsumi; Saito, Toshiyuki; Ohyanagi, Hajime; Okumura, Jun; Ishige, Kentaro; Suzuki, Harukazu; Nakamura, Takuji; Komatsu, Setsuko

    2016-01-01

    Soybean (Glycine max) is sensitive to flooding stress, and flood damage at the seedling stage is a barrier to growth. We constructed two mathematical models of the soybean metabolic network, a control model and a flooded model, from metabolic profiles in soybean plants. We simulated the metabolic profiles with perturbations before and after the flooding stimulus using the two models. We measured the variation of state that the system could maintain from a state–space description of the simulated profiles. The results showed a loss of variation of state during the flooding response in the soybean plants. Loss of variation of state was also observed in a human myelomonocytic leukaemia cell transcriptional network in response to a phorbol-ester stimulus. Thus, we detected a loss of variation of state under external stimuli in two biological systems, regardless of the regulation and stimulus types. Our results suggest that a loss of robustness may occur concurrently with the loss of variation of state in biological systems. We describe the possible applications of the quantity of variation of state in plant genetic engineering and cell biology. Finally, we present a hypothetical “external stimulus-induced information loss” model of biological systems.

  16. Real time unsupervised learning of visual stimuli in neuromorphic VLSI systems

    Science.gov (United States)

    Giulioni, Massimiliano; Corradi, Federico; Dante, Vittorio; Del Giudice, Paolo

    2015-10-01

    Neuromorphic chips embody computational principles operating in the nervous system, into microelectronic devices. In this domain it is important to identify computational primitives that theory and experiments suggest as generic and reusable cognitive elements. One such element is provided by attractor dynamics in recurrent networks. Point attractors are equilibrium states of the dynamics (up to fluctuations), determined by the synaptic structure of the network; a ‘basin’ of attraction comprises all initial states leading to a given attractor upon relaxation, hence making attractor dynamics suitable to implement robust associative memory. The initial network state is dictated by the stimulus, and relaxation to the attractor state implements the retrieval of the corresponding memorized prototypical pattern. In a previous work we demonstrated that a neuromorphic recurrent network of spiking neurons and suitably chosen, fixed synapses supports attractor dynamics. Here we focus on learning: activating on-chip synaptic plasticity and using a theory-driven strategy for choosing network parameters, we show that autonomous learning, following repeated presentation of simple visual stimuli, shapes a synaptic connectivity supporting stimulus-selective attractors. Associative memory develops on chip as the result of the coupled stimulus-driven neural activity and ensuing synaptic dynamics, with no artificial separation between learning and retrieval phases.
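
    To make the attractor-memory vocabulary above concrete, the short Python sketch below implements a conventional software analogue: a Hopfield-style network with Hebbian weights whose point attractors are the stored stimulus prototypes, so a corrupted cue relaxes back to the prototype in whose basin it starts. This is only an illustration of the computational principle; the chip described in the record realizes it with spiking neurons and on-chip plastic synapses, not with this batch rule.

        import numpy as np

        rng = np.random.default_rng(0)
        N, P = 64, 3                                   # neurons, stored prototypes
        patterns = rng.choice([-1, 1], size=(P, N))    # prototype "visual stimuli"

        # Hebbian weights: co-active units are coupled; the stored patterns
        # become point attractors of the recurrent dynamics.
        W = (patterns.T @ patterns) / N
        np.fill_diagonal(W, 0.0)

        def retrieve(cue, steps=20):
            """Relax a cue toward the nearest attractor (associative recall)."""
            s = cue.copy()
            for _ in range(steps):
                s = np.sign(W @ s)
                s[s == 0] = 1
            return s

        # Flip ~15% of one prototype's units and check it falls back into its basin.
        noisy = patterns[0] * np.where(rng.random(N) < 0.15, -1, 1)
        recalled = retrieve(noisy)
        print("overlap with stored prototype:", int(recalled @ patterns[0]), "/", N)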

  17. Altered visual information processing systems in bipolar disorder: evidence from visual MMN and P3

    Directory of Open Access Journals (Sweden)

    Toshihiko eMaekawa

    2013-07-01

    Full Text Available Objective: Mismatch negativity (MMN) and P3 are unique ERP components that provide objective indices of human cognitive functions such as short-term memory and prediction. Bipolar disorder (BD) is an endogenous psychiatric disorder characterized by extreme shifts in mood, energy, and ability to function socially. BD patients usually show cognitive dysfunction, and the goal of this study was to assess their altered visual information processing via visual MMN (vMMN) and P3 using windmill pattern stimuli. Methods: Twenty patients with BD and 20 healthy controls matched for age, gender, and handedness participated in this study. Subjects were seated in front of a monitor and listened to a story via earphones. Two types of windmill patterns (standard and deviant) and white circle (target) stimuli were randomly presented on the monitor. All stimuli were presented in random order at 200-ms durations with an 800-ms inter-stimulus interval. Stimuli were presented at 80% (standard), 10% (deviant), and 10% (target) probabilities. The participants were instructed to attend to the story and press a button as soon as possible when the target stimuli were presented. Event-related potentials were recorded throughout the experiment using 128-channel EEG equipment. vMMN was obtained by subtracting standard from deviant stimuli responses, and P3 was evoked from the target stimulus. Results: Mean reaction times for target stimuli in the BD group were significantly higher than those in the control group. Additionally, mean vMMN amplitudes and peak P3 amplitudes were significantly lower in the BD group than in controls. Conclusions: Abnormal vMMN and P3 in patients indicate a deficit of visual information processing in bipolar disorder, which is consistent with their increased reaction time to visual target stimuli. Significance: Both bottom-up and top-down visual information processing are likely altered in BD.
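
    The difference-wave step mentioned above ("vMMN was obtained by subtracting standard from deviant stimuli responses") is simple enough to state as code. The sketch below shows the conventional deviant-minus-standard computation on per-trial epochs for a single channel; the array shapes, channel choice, and fake data are assumptions for illustration, not details from the study.

        import numpy as np

        def mismatch_difference_wave(deviant_epochs, standard_epochs):
            """Deviant-minus-standard difference wave (vMMN) for one channel.
            Inputs are (n_trials, n_samples) arrays of time-locked epochs."""
            return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

        rng = np.random.default_rng(1)
        standards = rng.standard_normal((160, 400))    # 80% standard trials (fake)
        deviants = rng.standard_normal((20, 400))      # 10% deviant trials (fake)
        vmmn = mismatch_difference_wave(deviants, standards)
        print(vmmn.shape)                              # one value per time sample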

  18. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search.

    Science.gov (United States)

    Kunar, Melina A; Watson, Derrick G; Cole, Louise; Cox, Angeline

    2014-02-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or scenes) prior to a contextual cueing task. In a typical contextual cueing experiment, RTs are reduced if displays are repeated across the experiment compared with novel displays that are not repeated. The results showed that a smaller contextual cueing effect was obtained after participants viewed negative stimuli than when they viewed neutral stimuli. However, in contrast to previous work, overall search RTs were not faster after viewing negative stimuli (Experiments 2 to 4). The findings are discussed in terms of the impact of emotional content on visual processing and the ability to use scene context to help facilitate search.

  19. Associative visual learning by tethered bees in a controlled visual environment.

    Science.gov (United States)

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  20. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    Science.gov (United States)

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of
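
    As an illustration of the kind of measure reported above, the sketch below computes mean alpha-band (8-12 Hz) power in the 300 ms window before stimulus onset, per trial, for one parieto-occipital channel. This is a generic implementation of a standard analysis, not the authors' pipeline; the function name, filter order, and fake data are assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def prestimulus_alpha_power(epochs, fs, onset_idx, window_s=0.3,
                                    band=(8.0, 12.0)):
            """Mean alpha-band power in the pre-stimulus window of each epoch.
            epochs: (n_trials, n_samples) array for one channel; onset_idx is
            the sample index of stimulus onset within each epoch."""
            b, a = butter(4, band, btype="bandpass", fs=fs)
            alpha = filtfilt(b, a, epochs, axis=1)         # 8-12 Hz component
            start = onset_idx - int(window_s * fs)         # 300 ms before onset
            return (alpha[:, start:onset_idx] ** 2).mean(axis=1)

        fs = 500
        fake_epochs = np.random.default_rng(2).standard_normal((40, 2 * fs))
        power_per_trial = prestimulus_alpha_power(fake_epochs, fs, onset_idx=fs)
        print(power_per_trial.mean())                      # e.g. compare block types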

  1. Music influences ratings of the affect of visual stimuli

    NARCIS (Netherlands)

    Hanser, W.E.; Mark, R.E.

    2013-01-01

    This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are

  2. Do episodic migraineurs selectively attend to headache-related visual stimuli?

    Science.gov (United States)

    McDermott, Michael J; Peck, Kelly R; Walters, A Brooke; Smitherman, Todd A

    2013-02-01

    To assess pain-related attentional biases among individuals with episodic migraine. Prior studies have examined whether chronic pain patients selectively attend to pain-related stimuli in the environment, but these studies have produced largely mixed findings and focused primarily on patients with chronic musculoskeletal pain. Limited research has implicated attentional biases among chronic headache patients, but no studies have been conducted among episodic migraineurs, who comprise the overwhelming majority of the migraine population. This was a case-control, experimental study. Three hundred and eight participants (mean age = 19.2 years [standard deviation = 3.3]; 69.5% female; 36.4% minority), consisting of 84 episodic migraineurs, diagnosed in accordance with International Classification of Headache Disorders (2nd edition) criteria using a structured diagnostic interview, and 224 non-migraine controls completed a computerized dot probe task to assess attentional bias toward headache-related pictorial stimuli. The task consisted of 192 trials and utilized 2 emotional-neutral stimulus pairing conditions (headache-neutral and happy-neutral). No within-group differences for reaction time latencies to headache vs happy conditions were found among those with episodic migraine or among the non-migraine controls. Migraine status was unrelated to attentional bias indices for both headache (F [1,306] = 0.56, P = .45) and happy facial stimuli (F [1,306] = 0.37, P = .54), indicating a lack of between-group differences. Lack of within- and between-group differences was confirmed with repeated measures analysis of variance. In light of the large sample size and prior pilot testing of presented images, results suggest that episodic migraineurs do not differentially attend to headache-related facial stimuli. Given modest evidence of attentional biases among chronic headache samples, these findings suggest potential differences in attentional
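
    For context on the "attentional bias indices" above, the usual dot-probe score is the mean reaction time when the probe replaces the neutral member of a picture pair minus the mean reaction time when it replaces the emotional (here, headache-related) member, with positive values indicating vigilance toward the emotional picture. The snippet below states that conventional formula; it is not necessarily the exact index the authors computed, and the reaction times are hypothetical.

        import numpy as np

        def dot_probe_bias_index(rt_probe_at_neutral_ms, rt_probe_at_emotional_ms):
            """Conventional dot-probe attentional bias score in milliseconds.
            Positive = faster when the probe replaces the emotional picture,
            i.e. vigilance toward emotional (headache-related) stimuli."""
            return np.mean(rt_probe_at_neutral_ms) - np.mean(rt_probe_at_emotional_ms)

        # Hypothetical per-trial reaction times (ms) for one participant.
        print(dot_probe_bias_index([512, 498, 530, 507], [505, 495, 528, 501]))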

  3. The visual attention span deficit in dyslexia is visual and not verbal.

    Science.gov (United States)

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  4. Food recognition and recipe analysis: integrating visual content, context and external knowledge

    OpenAIRE

    Herranz, Luis; Min, Weiqing; Jiang, Shuqiang

    2018-01-01

    The central role of food in our individual and social life, combined with recent technological advances, has motivated a growing interest in applications that help to better monitor dietary habits as well as the exploration and retrieval of food-related information. We review how visual content, context and external knowledge can be integrated effectively into food-oriented applications, with special focus on recipe analysis and retrieval, food recommendation, and the restaurant context as em...

  5. Benefits of stimulus congruency for multisensory facilitation of visual learning.

    Directory of Open Access Journals (Sweden)

    Robyn S Kim

    Full Text Available BACKGROUND: Studies of perceptual learning have largely focused on unisensory stimuli. However, multisensory interactions are ubiquitous in perception, even at early processing stages, and thus can potentially play a role in learning. Here, we examine the effect of auditory-visual congruency on visual learning. METHODOLOGY/PRINCIPAL FINDINGS: Subjects were trained over five days on a visual motion coherence detection task with either congruent audiovisual, or incongruent audiovisual stimuli. Comparing performance on visual-only trials, we find that training with congruent audiovisual stimuli produces significantly better learning than training with incongruent audiovisual stimuli or with only visual stimuli. CONCLUSIONS/SIGNIFICANCE: This advantage from stimulus congruency during training suggests that the benefits of multisensory training may result from audiovisual interactions at a perceptual rather than cognitive level.

  6. Temporal attention for visual food stimuli in restrained eaters

    NARCIS (Netherlands)

    Neimeijer, Renate A. M.; de Jong, Peter J.; Roefs, Anne

    2013-01-01

    Although restrained eaters try to limit their food intake, they often fail and indulge in exactly those foods that they want to avoid. A possible explanation is a temporal attentional bias for food cues. It could be that for these people food stimuli are processed relatively efficiently and require

  7. Processing of unconventional stimuli requires the recruitment of the non-specialized hemisphere

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2015-02-01

    Full Text Available In the present study we investigate hemispheric processing of conventional and unconventional visual stimuli in the context of visual and verbal creative ability. In Experiment 1, we studied two unconventional visual recognition tasks – Mooney face and objects' silhouette recognition – and found a significant relationship between measures of verbal creativity and unconventional face recognition. In Experiment 2 we used the split visual field paradigm to investigate hemispheric processing of conventional and unconventional faces and its relation to verbal and visual characteristics of creativity. Results showed that while conventional faces were better processed by the specialized right hemisphere, unconventional faces were better processed by the non-specialized left hemisphere. In addition, only unconventional face processing by the non-specialized left hemisphere was related to verbal and visual measures of creative ability. Our findings demonstrate the role of the non-specialized hemisphere in processing unconventional stimuli and how it relates to creativity.

  8. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  9. Audio-visual synchrony and feature-selective attention co-amplify early visual processing.

    Science.gov (United States)

    Keitel, Christian; Müller, Matthias M

    2016-05-01

    Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both, attending to the colour of a stimulus and its synchrony with the tone, enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
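
    To illustrate how "steady-state responses were quantified in the spectral domain", the sketch below averages the time-locked epochs for one condition and reads off the spectral amplitude at the FFT bin nearest each stimulus's tagging frequency (3.14 or 3.63 Hz). It is a generic frequency-tagging computation with made-up data, not the authors' exact analysis.

        import numpy as np

        def ssr_amplitude(epochs, fs, tag_freq):
            """Amplitude of the steady-state response at a tagging frequency
            (nearest FFT bin). epochs: (n_trials, n_samples) for one channel."""
            evoked = epochs.mean(axis=0)                   # time-locked average
            spectrum = np.abs(np.fft.rfft(evoked)) * 2 / evoked.size
            freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - tag_freq))]

        fs = 256
        fake_epochs = np.random.default_rng(3).standard_normal((50, 4 * fs))
        for f in (3.14, 3.63):                             # the two tagging rates
            print(f, ssr_amplitude(fake_epochs, fs, f))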

  10. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations and its task-dependency remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individuals' reduced ability to remap sound locations using an external frame of reference. We found that performance in the blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when they had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Functional Role of Internal and External Visual Imagery: Preliminary Evidences from Pilates

    Science.gov (United States)

    Montuori, Simone; Sorrentino, Pierpaolo; Belloni, Lidia; Sorrentino, Giuseppe

    2018-01-01

    The present study investigates whether a functional difference between the visualization of a sequence of movements in the perspective of the first- (internal VMI-I) or third- (external VMI-E) person exists, which might be relevant to promote learning. By using a mental chronometry experimental paradigm, we compared the times of execution, imagination in the VMI-I perspective, and imagination in the VMI-E perspective of two kinds of Pilates exercises. The analysis was carried out in individuals with different levels of competence (expert, novice, and no-practice individuals). Our results showed that in the Expert group, in the VMI-I perspective, the imagination time was similar to the execution time, while in the VMI-E perspective, the imagination time was significantly lower than the execution time. An opposite pattern was found in the Novice group, in which the time of imagination was similar to that of execution only in the VMI-E perspective, while in the VMI-I perspective, the time of imagination was significantly lower than the time of execution. In the control group, the times of both modalities of imagination were significantly lower than the execution time for each exercise. The present data suggest that, while the VMI-I serves to train an already internalised gesture, the VMI-E perspective could be useful to learn, and then improve, the recently acquired sequence of movements. Moreover, visual imagery is not useful for individuals that lack a specific motor experience. The present data offer new insights into the application of mental training techniques, especially in the field of sports. However, further investigations are needed to better understand the functional role of internal and external visual imagery. PMID:29849565

  12. Cortical Integration of Audio-Visual Information

    Science.gov (United States)

    Vander Wyk, Brent C.; Ramsay, Gordon J.; Hudac, Caitlin M.; Jones, Warren; Lin, David; Klin, Ami; Lee, Su Mei; Pelphrey, Kevin A.

    2013-01-01

    We investigated the neural basis of audio-visual processing in speech and non-speech stimuli. Physically identical auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses) were used in this fMRI experiment. Relative to unimodal stimuli, each of the multimodal conjunctions showed increased activation in largely non-overlapping areas. The conjunction of Ellipse and Speech, which most resembles naturalistic audiovisual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. The conjunction of Circle and Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. The conjunction of Circle and Speech showed activation in lateral occipital cortex, and the conjunction of Ellipse and Tone did not show increased activation relative to unimodal stimuli. Further analysis revealed that middle temporal regions, although identified as multimodal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multimodal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which speech or non-speech percepts are evoked. PMID:20709442

  13. VEP Responses to Op-Art Stimuli.

    Directory of Open Access Journals (Sweden)

    Louise O'Hare

    Full Text Available Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast.

  14. VEP Responses to Op-Art Stimuli.

    Science.gov (United States)

    O'Hare, Louise; Clarke, Alasdair D F; Pollux, Petra M J

    2015-01-01

    Several types of striped patterns have been reported to cause adverse sensations described as visual discomfort. Previous research using op-art-based stimuli has demonstrated that spurious eye movement signals can cause the experience of illusory motion, or shimmering effects, which might be perceived as uncomfortable. Whilst the shimmering effects are one cause of discomfort, another possible contributor to discomfort is excessive neural responses: As striped patterns do not have the statistical redundancy typical of natural images, they are perhaps unable to be encoded efficiently. If this is the case, then this should be seen in the amplitude of the EEG response. This study found that stimuli that were judged to be most comfortable were also those with the lowest EEG amplitude. This provides some support for the idea that excessive neural responses might also contribute to discomfort judgements in normal populations, in stimuli controlled for perceived contrast.

  15. Performance improvements from imagery: evidence that internal visual imagery is superior to external visual imagery for slalom performance

    Directory of Open Access Journals (Sweden)

    Nichola eCallow

    2013-10-01

    Full Text Available We report three experiments investigating the hypothesis that use of internal visual imagery (IVI) would be superior to external visual imagery (EVI) for the performance of different slalom-based motor tasks. In Experiment 1, three groups of participants (IVI, EVI, and a control group) performed a driving-simulation slalom task. The IVI group achieved significantly quicker lap times than the EVI and control groups. In Experiment 2, participants performed a downhill running slalom task under both IVI and EVI conditions. Performance was again quickest in the IVI compared to EVI condition, with no differences in accuracy. Experiment 3 used the same group design as Experiment 1, but with participants performing a downhill ski-slalom task. Results revealed the IVI group to be significantly more accurate than the control group, with no significant differences in time taken to complete the task. These results support the beneficial effects of IVI for slalom-based tasks, and significantly advance our knowledge of the differential effects of visual imagery perspectives on motor performance.

  16. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults.

    Directory of Open Access Journals (Sweden)

    Erich S Tusch

    Full Text Available The inhibitory deficit hypothesis of cognitive aging posits that older adults' inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend, condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study's findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts.

  17. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Bank of Standardized Stimuli (BOSS) phase II: 930 new normative photos.

    Directory of Open Access Journals (Sweden)

    Mathieu B Brodeur

    Full Text Available Researchers have only recently started to take advantage of the developments in technology and communication for sharing data and documents. However, the exchange of experimental material has not yet taken advantage of this progress. In order to facilitate access to experimental material, the Bank of Standardized Stimuli (BOSS) project was created as a free standardized set of visual stimuli accessible to all researchers through a normative database. The BOSS is currently the largest existing photo bank providing norms for more than 15 dimensions (e.g., familiarity, visual complexity, manipulability), making the BOSS an extremely useful research tool and a means of homogenizing scientific data worldwide. The first phase of the BOSS was completed in 2010 and contained 538 normative photos. The second phase of the BOSS project, presented in this article, builds on the previous phase by adding 930 new normative photo stimuli. New categories of concepts were introduced, including animals, building infrastructures, body parts, and vehicles, and the number of photos in other categories was increased. All new photos of the BOSS were normalized relative to their name, familiarity, visual complexity, object agreement, viewpoint agreement, and manipulability. The availability of these norms is a precious asset that should be considered for characterizing the stimuli as a function of the requirements of research and for controlling for potential confounding effects.
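
    As a usage illustration only (not part of the BOSS project itself), the sketch below shows how such a normative table could be filtered to build a stimulus set matched on dimensions like familiarity and visual complexity; the column names and values are assumptions, not the official BOSS fields.

      import pandas as pd

      # Hypothetical excerpt of a normative table; names and values are
      # assumed for illustration, not taken from the published norms.
      norms = pd.DataFrame({
          "item": ["dog", "hammer", "truck", "violin"],
          "category": ["animal", "tool", "vehicle", "instrument"],
          "familiarity": [4.6, 3.9, 4.1, 2.8],
          "visual_complexity": [3.1, 2.2, 3.4, 3.8],
          "name_agreement": [0.97, 0.88, 0.92, 0.75],
      })

      # Keep photos with high name agreement and mid-range complexity so
      # these dimensions do not confound an experimental design.
      selected = norms[(norms["name_agreement"] >= 0.85)
                       & norms["visual_complexity"].between(2.0, 3.5)]
      print(selected[["item", "category"]])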

  19. High-intensity erotic visual stimuli de-activate the primary visual cortex in women.

    Science.gov (United States)

    Huynh, Hieu K; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-06-01

    The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the pictures or movies. However, when the volunteers perform demanding non-visual tasks, the primary visual cortex becomes de-activated, although the amount of incoming visual sensory information is the same. Do low- and high-intensity erotic movies, compared to neutral movies, produce similar de-activation of the primary visual cortex? Brain activation/de-activation was studied by Positron Emission Tomography scanning of the brains of 12 healthy heterosexual premenopausal women, aged 18-47, who watched neutral, low- and high-intensity erotic film segments. We measured differences in regional cerebral blood flow (rCBF) in the primary visual cortex during watching of neutral, low-intensity erotic, and high-intensity erotic film segments. Watching high-intensity erotic, but not low-intensity erotic, movies, compared to neutral movies, resulted in strong de-activation of the primary (BA 17) and adjoining parts of the secondary visual cortex. The strong de-activation while watching high-intensity erotic film might represent compensation for the increased blood supply to the brain regions involved in sexual arousal; in addition, high-intensity erotic movies do not require precise scanning of the visual field because their impact is clear to the observer. © 2012 International Society for Sexual Medicine.

  20. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Science.gov (United States)

    Meyer, Georg F; Shao, Fei; White, Mark D; Hopkins, Carl; Robotham, Antony J

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  1. Pain and other symptoms of CRPS can be increased by ambiguous visual stimuli--an exploratory study.

    Science.gov (United States)

    Hall, Jane; Harrison, Simon; Cohen, Helen; McCabe, Candida S; Harris, N; Blake, David R

    2011-01-01

    Visual disturbance, visuo-spatial difficulties, and exacerbations of pain associated with these, have been reported by some patients with Complex Regional Pain Syndrome (CRPS). We investigated the hypothesis that some visual stimuli (i.e. those which produce ambiguous perceptions) can induce pain and other somatic sensations in people with CRPS. Thirty patients with CRPS, 33 with rheumatology conditions and 45 healthy controls viewed two images: a bistable spatial image and a control image. For each image participants recorded the frequency of percept change in 1 min and reported any changes in somatosensation. 73% of patients with CRPS reported increases in pain and/or sensory disturbances including changes in perception of the affected limb, temperature and weight changes and feelings of disorientation after viewing the bistable image. Additionally, 13% of the CRPS group responded with striking worsening of their symptoms which necessitated task cessation. Subjects in the control groups did not report pain increases or somatic sensations. It is possible to worsen the pain suffered in CRPS, and to produce other somatic sensations, by means of a visual stimulus alone. This is a newly described finding. As a clinical and research tool, the experimental method provides a means to generate and exacerbate somaesthetic disturbances, including pain, without moving the affected limb and causing nociceptive interference. This may be particularly useful for brain imaging studies. Copyright © 2010 European Federation of International Association for the Study of Pain Chapters. Published by Elsevier Ltd. All rights reserved.

  2. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

    Full Text Available Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that carry significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the greatest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  3. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  4. Functional connections between activated and deactivated brain regions mediate emotional interference during externally directed cognition.

    Science.gov (United States)

    Di Plinio, Simone; Ferri, Francesca; Marzetti, Laura; Romani, Gian Luca; Northoff, Georg; Pizzella, Vittorio

    2018-04-24

    Recent evidence shows that task-deactivations are functionally relevant for cognitive performance. Indeed, higher cognitive engagement has been associated with higher suppression of activity in task-deactivated brain regions - usually ascribed to the Default Mode Network (DMN). Moreover, a negative correlation between these regions and areas actively engaged by the task is associated with better performance. DMN regions show positive modulation during autobiographical, social, and emotional tasks. However, it is not clear how processing of emotional stimuli affects the interplay between the DMN and executive brain regions. We studied this interplay in an fMRI experiment using emotional negative stimuli as distractors. Activity modulations induced by the emotional interference of negative stimuli were found in frontal, parietal, and visual areas, and were associated with modulations of functional connectivity between these task-activated areas and DMN regions. Worse performance was predicted both by lower activity in the superior parietal cortex and by higher connectivity between visual areas and frontal DMN regions. Connectivity between the right inferior frontal gyrus and several DMN regions in the left hemisphere was related to the behavioral performance. This relation was weaker in the negative than in the neutral condition, likely suggesting less functional inhibition of DMN regions during emotional processing. These results show that both executive and DMN regions are crucial for the emotional interference process and suggest that DMN connections are related to the interplay between externally-directed and internally-focused processes. Among DMN regions, the superior frontal gyrus may be a key node in regulating the interference triggered by emotional stimuli. © 2018 Wiley Periodicals, Inc.

  5. Diet ingestion rate and pacu larvae behavior in response to chemical and visual stimuli

    Directory of Open Access Journals (Sweden)

    Marcelo Borges Tesser

    2006-10-01

    Full Text Available This study compared the influence of visual and/or chemical stimuli from Artemia nauplii and from a microencapsulated diet on the ingestion rate of the microencapsulated diet by pacu (Piaractus mesopotamicus) larvae. A 7 x 4 factorial scheme (stimuli and ages) with two replicates was used. Effects of larval age and of the stimuli were observed, but there was no age x stimuli interaction. The chemical stimulus from Artemia, and the combination of both Artemia stimuli, produced the highest ingestion rates of the inert diet. An intermediate result was obtained with the visual stimulus from the microencapsulated diet. The chemical stimulus from Artemia resulted in higher diet ingestion rates than its visual stimulus, and ingestion rate increased with age. Visual and chemical stimuli from the nauplii and the visual stimulus from the diet increased ingestion of the inert diet by pacu larvae. Artemia nauplii should therefore be offered before the inert diet, as they may assist the feeding transition. These results point to new possibilities for studies with neotropical fish larvae aimed at the early replacement of live food with inert diets.
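
    The 7 x 4 factorial analysis (stimulus x age, two replicates) described above could be run as a two-way ANOVA with interaction; the sketch below uses statsmodels on made-up data and is an illustrative assumption, not the authors' analysis.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.formula.api import ols

      # Hypothetical long-format data: 7 stimulus treatments x 4 ages x 2
      # replicates, with a made-up ingestion rate as the response.
      rng = np.random.default_rng(1)
      stimuli = [f"S{i}" for i in range(1, 8)]
      ages = [10, 13, 16, 19]  # days post-hatch, assumed
      rows = [{"stimulus": s, "age": a, "ingestion": rng.normal(5 + 0.2 * a, 1.0)}
              for s in stimuli for a in ages for _ in range(2)]
      data = pd.DataFrame(rows)

      # Two-way ANOVA with interaction, mirroring the 7 x 4 factorial design.
      model = ols("ingestion ~ C(stimulus) * C(age)", data=data).fit()
      print(sm.stats.anova_lm(model, typ=2))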

  6. Read-out of emotional information from iconic memory: the longevity of threatening stimuli.

    Science.gov (United States)

    Kuhbandner, Christof; Spitzer, Bernhard; Pekrun, Reinhard

    2011-05-01

    Previous research has shown that emotional stimuli are more likely than neutral stimuli to be selected by attention, indicating that the processing of emotional information is prioritized. In this study, we examined whether the emotional significance of stimuli influences visual processing already at the level of transient storage of incoming information in iconic memory, before attentional selection takes place. We used a typical iconic memory task in which the delay of a poststimulus cue, indicating which of several visual stimuli has to be reported, was varied. Performance decreased rapidly with increasing cue delay, reflecting the fast decay of information stored in iconic memory. However, although neutral stimulus information and emotional stimulus information were initially equally likely to enter iconic memory, the subsequent decay of the initially stored information was slowed for threatening stimuli, a result indicating that fear-relevant information has prolonged availability for read-out from iconic memory. This finding provides the first evidence that emotional significance already facilitates stimulus processing at the stage of iconic memory.
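
    The decay of partial-report accuracy with cue delay is often summarized by fitting an exponential approach to an asymptote; the sketch below fits such a curve separately to hypothetical threatening and neutral conditions (the model form, delays, and accuracies are assumptions, not the published data).

      import numpy as np
      from scipy.optimize import curve_fit

      def iconic_decay(delay_ms, a0, asymptote, tau):
          """Accuracy decays exponentially from a0 toward an asymptote."""
          return asymptote + (a0 - asymptote) * np.exp(-delay_ms / tau)

      # Hypothetical mean accuracies at several post-stimulus cue delays.
      delays = np.array([0, 100, 300, 600, 1000])        # ms
      neutral = np.array([0.92, 0.80, 0.62, 0.50, 0.46])
      threat = np.array([0.91, 0.85, 0.74, 0.60, 0.52])

      for label, acc in [("neutral", neutral), ("threat", threat)]:
          params, _ = curve_fit(iconic_decay, delays, acc,
                                p0=(0.9, 0.45, 300.0), maxfev=10000)
          print(label, "decay constant tau =", round(params[2], 1), "ms")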

  7. Investigating vision in schizophrenia through responses to humorous stimuli

    Directory of Open Access Journals (Sweden)

    Wolfgang Tschacher

    2015-06-01

    Full Text Available The visual environment of humans contains abundant ambiguity and fragmentary information. Therefore, an early step of vision must disambiguate the incessant stream of information. Humorous stimuli produce a situation that is strikingly analogous to this process: Funniness is associated with the incongruity contained in a joke, pun, or cartoon. Like in vision in general, appreciating a visual pun as funny necessitates disambiguation of incongruous information. Therefore, perceived funniness of visual puns was implemented to study visual perception in a sample of 36 schizophrenia patients and 56 healthy control participants. We found that both visual incongruity and Theory of Mind (ToM) content of the puns were associated with increased experienced funniness. This was significantly less so in participants with schizophrenia, consistent with the gestalt hypothesis of schizophrenia, which would predict compromised perceptual organization in patients. The association of incongruity with funniness was not mediated by known predictors of humor appreciation, such as affective state, depression, or extraversion. Patients with higher excitement symptoms and, at a trend level, reduced cognitive symptoms, reported lower funniness experiences. An open question remained whether patients showed this deficiency of visual incongruity detection independent of their ToM deficiency. Humorous stimuli may be viewed as a convenient method to study perceptual processes, but also fundamental questions of higher-level cognition.

  8. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    Science.gov (United States)

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  9. The Potential of Stimuli-Responsive Nanogels in Drug and Active Molecule Delivery for Targeted Therapy

    Directory of Open Access Journals (Sweden)

    Marta Vicario-de-la-Torre

    2017-05-01

    Full Text Available Nanogels (NGs) are currently under extensive investigation due to their unique properties, such as small particle size, high encapsulation efficiency and protection of active agents from degradation, which make them ideal candidates as drug delivery systems (DDS). Stimuli-responsive NGs are cross-linked nanoparticles (NPs) composed of polymers (natural, synthetic, or a combination thereof) that can swell by absorption (uptake) of large amounts of solvent, but not dissolve, owing to the constituent structure of the polymeric network. NGs can change from a polymeric solution (swollen form) to a hard particle (collapsed form) in response to (i) physical stimuli such as temperature, ionic strength, and magnetic or electric fields; (ii) chemical stimuli such as pH, ions, and specific molecules; or (iii) biochemical stimuli such as enzymatic substrates or affinity ligands. The interest in NGs comes from their multi-stimuli nature, which involves reversible phase transitions in response to changes in the external medium that occur faster than in macroscopic gels or hydrogels because of their nanometric size. NGs have a porous structure able to encapsulate small molecules such as drugs and genes and then release them by changing their volume when external stimuli are applied.

  10. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    Science.gov (United States)

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  11. Functional Role of Internal and External Visual Imagery: Preliminary Evidences from Pilates

    Directory of Open Access Journals (Sweden)

    Simone Montuori

    2018-01-01

    Full Text Available The present study investigates whether a functional difference between the visualization of a sequence of movements in the perspective of the first- (internal VMI-I) or third- (external VMI-E) person exists, which might be relevant to promote learning. By using a mental chronometry experimental paradigm, we compared the times of execution, imagination in the VMI-I perspective, and imagination in the VMI-E perspective of two kinds of Pilates exercises. The analysis was carried out in individuals with different levels of competence (expert, novice, and no-practice individuals). Our results showed that in the Expert group, in the VMI-I perspective, the imagination time was similar to the execution time, while in the VMI-E perspective, the imagination time was significantly lower than the execution time. An opposite pattern was found in the Novice group, in which the time of imagination was similar to that of execution only in the VMI-E perspective, while in the VMI-I perspective, the time of imagination was significantly lower than the time of execution. In the control group, the times of both modalities of imagination were significantly lower than the execution time for each exercise. The present data suggest that, while the VMI-I serves to train an already internalised gesture, the VMI-E perspective could be useful to learn, and then improve, the recently acquired sequence of movements. Moreover, visual imagery is not useful for individuals that lack a specific motor experience. The present data offer new insights into the application of mental training techniques, especially in the field of sports. However, further investigations are needed to better understand the functional role of internal and external visual imagery.

  12. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  13. Relative contributions of visual and auditory spatial representations to tactile localization.

    Science.gov (United States)

    Noel, Jean-Paul; Wallace, Mark

    2016-02-01

    Spatial localization of touch is critically dependent upon coordinate transformation between different reference frames, which must ultimately allow for alignment between somatotopic and external representations of space. Although prior work has shown an important role for cues such as body posture in influencing the spatial localization of touch, the relative contributions of the different sensory systems to this process are unknown. In the current study, we had participants perform a tactile temporal order judgment (TOJ) under different body postures and conditions of sensory deprivation. Specifically, participants performed non-speeded judgments about the order of two tactile stimuli presented in rapid succession on their ankles during conditions in which their legs were either uncrossed or crossed (and thus bringing somatotopic and external reference frames into conflict). These judgments were made in the absence of 1) visual, 2) auditory, or 3) combined audio-visual spatial information by blindfolding and/or placing participants in an anechoic chamber. As expected, results revealed that tactile temporal acuity was poorer under crossed than uncrossed leg postures. Intriguingly, results also revealed that auditory and audio-visual deprivation exacerbated the difference in tactile temporal acuity between uncrossed to crossed leg postures, an effect not seen for visual-only deprivation. Furthermore, the effects under combined audio-visual deprivation were greater than those seen for auditory deprivation. Collectively, these results indicate that mechanisms governing the alignment between somatotopic and external reference frames extend beyond those imposed by body posture to include spatial features conveyed by the auditory and visual modalities - with a heavier weighting of auditory than visual spatial information. Thus, sensory modalities conveying exteroceptive spatial information contribute to judgments regarding the localization of touch. Copyright © 2016

  14. North-American norms for name disagreement: pictorial stimuli naming discrepancies.

    Directory of Open Access Journals (Sweden)

    Mary O'Sullivan

    Full Text Available Pictorial stimuli are commonly used by scientists to explore central processes, including memory, attention, and language. Pictures that have been collected and put into sets for these purposes often contain visual ambiguities that lead to name disagreement amongst subjects. In the present work, we propose new norms which reflect these sources of name disagreement, and we apply this method to two sets of pictures: the Snodgrass and Vanderwart (S&V) set and the Bank of Standardized Stimuli (BOSS). Naming responses of the presented pictures were classified within response categories based on whether they were correct, incorrect, or equivocal. To characterize the naming strategy where an alternative name was being used, responses were further divided into different sub-categories that reflected various sources of name disagreement. Naming strategies were also compared across the two sets of stimuli. Results showed that the pictures of the S&V set and the BOSS were more likely to elicit alternative specific and equivocal names, respectively. It was also found that the use of incorrect names was not significantly different across stimulus sets but that errors were more likely caused by visual ambiguity in the S&V set and by a misuse of names in the BOSS. The norms for name disagreement presented in this paper are useful for subsequent research because they categorize and elucidate the name disagreement that occurs when choosing visual stimuli from one or both stimulus sets. The sources of disagreement should be examined carefully as they help to provide an explanation of errors and inconsistencies of many concepts during picture naming tasks.
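
    Name disagreement of this kind is commonly summarized with the percentage of modal-name responses and the information-theoretic H statistic; the sketch below illustrates those two generic measures and is not the scoring pipeline used in this paper.

      from collections import Counter
      import math

      def name_agreement(responses):
          """Return (% modal name, H statistic) for one picture.

          H = sum_i p_i * log2(1 / p_i); H = 0 means perfect agreement,
          larger H means more diverse names.
          """
          counts = Counter(responses)
          n = sum(counts.values())
          probs = [c / n for c in counts.values()]
          modal_pct = 100.0 * max(probs)
          h = sum(p * math.log2(1.0 / p) for p in probs)
          return modal_pct, h

      # Hypothetical naming responses for one photo.
      print(name_agreement(["mug", "mug", "cup", "mug", "cup", "glass"]))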

  15. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.

  16. Neural reactivity to visual food stimuli is reduced in some areas of the brain during evening hours compared to morning hours: an fMRI study in women.

    Science.gov (United States)

    Masterson, Travis D; Kirwan, C Brock; Davidson, Lance E; LeCheminant, James D

    2016-03-01

    The extent to which neural responsiveness to visual food stimuli is influenced by time of day is not well examined. Using a crossover design, 15 healthy women were scanned using fMRI while presented with low- and high-energy pictures of food, once in the morning (6:30-8:30 am) and once in the evening (5:00-7:00 pm). Diets were identical on both days of the fMRI scans and were verified using weighed food records. Visual analog scales were used to record subjective perception of hunger and preoccupation with food prior to each fMRI scan. Six areas of the brain, including structures in reward pathways, showed lower activation in the evening to both high- and low-energy foods, and high-energy food stimuli tended to produce greater fMRI responses than low-energy food stimuli in specific areas of the brain, regardless of time of day. However, evening scans showed a lower response to both low- and high-energy food pictures in some areas of the brain. Subjectively, participants reported no difference in hunger by time of day (F = 1.84, P = 0.19), but reported they could eat more (F = 4.83, P = 0.04) and were more preoccupied with thoughts of food (F = 5.51, P = 0.03) in the evening compared to the morning. These data underscore the role that time of day may play in neural responses to food stimuli. These results may also have clinical implications for fMRI measurement, in order to prevent a time-of-day bias.

  17. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.
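
    Recognition memory in old/new designs like this is often scored with the signal-detection index d'; the sketch below shows a generic computation from hit and false-alarm counts (the counts and the log-linear correction are illustrative assumptions, not the authors' scoring).

      from statistics import NormalDist

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """Signal-detection sensitivity for old/new recognition.

          A small correction keeps rates away from 0 and 1 (log-linear rule).
          """
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          z = NormalDist().inv_cdf
          return z(hit_rate) - z(fa_rate)

      # Hypothetical counts for one participant, visual vs. auditory stimuli.
      print("visual   d' =", round(d_prime(45, 5, 8, 42), 2))
      print("auditory d' =", round(d_prime(33, 17, 15, 35), 2))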

  18. External stimulation strength controls actin response dynamics in Dictyostelium cells

    Science.gov (United States)

    Hsu, Hsin-Fang; Westendorf, Christian; Tarantola, Marco; Zykov, Vladimir; Bodenschatz, Eberhard; Beta, Carsten

    2015-03-01

    Self-sustained oscillation and the resonance frequency of the cytoskeletal actin polymerization/depolymerization have recently been observed in Dictyostelium, a model system for studying chemotaxis. Here we report that the resonance frequency is not constant but rather varies with the strength of external stimuli. To understand the underlying mechanism, we analyzed the polymerization and depolymerization time at different levels of external stimulation. We found that the polymerization time is independent of external stimuli but the depolymerization time is prolonged as the stimulation increases. These observations can be successfully reproduced within the framework of our time-delayed differential-equation model.
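
    The record does not specify the model equations; purely to illustrate the class of model, the sketch below integrates a generic delayed negative-feedback equation with a fixed-step Euler scheme, with all parameter values assumed.

      import numpy as np

      def actin_like_response(stimulus, delay=10.0, dt=0.01, t_end=300.0):
          """Euler integration of a generic delayed negative-feedback model:

              x'(t) = stimulus / (1 + x(t - delay)**4) - d * x(t)

          x loosely stands for the polymerized actin fraction; production is
          suppressed by the delayed value of x and d is a decay
          (depolymerization) rate. The dynamics may be oscillatory or damped
          depending on the parameters; all values here are assumptions.
          """
          d = 0.2
          n_delay = int(delay / dt)
          x = np.zeros(int(t_end / dt))
          for i in range(1, x.size):
              x_past = x[i - n_delay] if i >= n_delay else 0.0
              x[i] = x[i - 1] + dt * (stimulus / (1.0 + x_past ** 4) - d * x[i - 1])
          return x

      for s in (0.5, 2.0):
          tail = actin_like_response(s)[-5000:]      # late-phase behaviour
          print(f"stimulus={s}: mean={tail.mean():.2f}, range={np.ptp(tail):.2f}")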

  19. Reproducibility assessment of brain responses to visual food stimuli in adults with overweight and obesity.

    Science.gov (United States)

    Drew Sayer, R; Tamer, Gregory G; Chen, Ningning; Tregellas, Jason R; Cornier, Marc-Andre; Kareken, David A; Talavage, Thomas M; McCrory, Megan A; Campbell, Wayne W

    2016-10-01

    The brain's reward system influences ingestive behavior and subsequently obesity risk. Functional magnetic resonance imaging (fMRI) is a common method for investigating brain reward function. This study sought to assess the reproducibility of fasting-state brain responses to visual food stimuli using BOLD fMRI. A priori brain regions of interest included bilateral insula, amygdala, orbitofrontal cortex, caudate, and putamen. Fasting-state fMRI and appetite assessments were completed on two days by 28 adults with overweight or obesity (16 women, 12 men). Reproducibility was assessed by comparing mean fasting-state brain responses and measuring test-retest reliability of these responses on the two testing days. Mean fasting-state brain responses on day 2 were reduced compared with day 1 in the left insula and right amygdala, but mean day 1 and day 2 responses were not different in the other regions of interest. With the exception of the left orbitofrontal cortex response (fair reliability), test-retest reliabilities of brain responses were poor or unreliable. fMRI-measured responses to visual food cues in adults with overweight or obesity show relatively good mean-level reproducibility but considerable within-subject variability. Poor test-retest reliability reduces the likelihood of observing true correlations and increases the necessary sample sizes for studies. © 2016 The Obesity Society.
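
    Test-retest reliability in two-session designs like this is typically indexed with an intraclass correlation; the sketch below implements a consistency-type ICC(3,1) from a subjects-by-sessions array and uses made-up numbers, so it is an illustration rather than the authors' pipeline.

      import numpy as np

      def icc_3_1(scores):
          """Consistency ICC(3,1) for an (n_subjects, n_sessions) array."""
          scores = np.asarray(scores, dtype=float)
          n, k = scores.shape
          grand = scores.mean()
          row_means = scores.mean(axis=1)
          col_means = scores.mean(axis=0)
          ss_total = ((scores - grand) ** 2).sum()
          ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
          ss_cols = n * ((col_means - grand) ** 2).sum()   # between sessions
          ss_error = ss_total - ss_rows - ss_cols
          ms_rows = ss_rows / (n - 1)
          ms_error = ss_error / ((n - 1) * (k - 1))
          return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

      # Hypothetical ROI responses (arbitrary units) for 6 participants, 2 visits.
      rng = np.random.default_rng(0)
      day1 = rng.normal(1.0, 0.5, size=6)
      day2 = 0.6 * day1 + rng.normal(0.0, 0.4, size=6)   # imperfect test-retest
      print("ICC(3,1) =", round(icc_3_1(np.column_stack([day1, day2])), 2))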

  20. Are females more responsive to emotional stimuli? A neurophysiological study across arousal and valence dimensions.

    Science.gov (United States)

    Lithari, C; Frantzidis, C A; Papadelis, C; Vivas, Ana B; Klados, M A; Kourtidou-Papadeli, C; Pappas, C; Ioannides, A A; Bamidis, P D

    2010-03-01

    Men and women seem to process emotions and react to them differently. Yet, few neurophysiological studies have systematically investigated gender differences in emotional processing. Here, we studied gender differences using Event Related Potentials (ERPs) and Skin Conductance Responses (SCR) recorded from participants who passively viewed emotional pictures selected from the International Affective Picture System (IAPS). The arousal and valence dimensions of the stimuli were manipulated orthogonally. The peak amplitude and peak latency of ERP components and SCR were analyzed separately, and the scalp topographies of significant ERP differences were documented. Females responded with enhanced negative components (N100 and N200) in comparison to males, especially to unpleasant visual stimuli, whereas both genders responded faster to high-arousing or unpleasant stimuli. Scalp topographies revealed more pronounced gender differences over central and left-hemisphere areas. Our results suggest a difference in the way emotional stimuli are processed by the genders: unpleasant and high-arousing stimuli evoke greater ERP amplitudes in women relative to men. It also seems that unpleasant or high-arousing stimuli are temporally prioritized during visual processing by both genders.
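
    Peak amplitude and latency measures such as these are usually extracted from a component-specific time window of the averaged waveform; in the sketch below the window limits, polarity handling, and synthetic data are assumptions.

      import numpy as np

      def peak_in_window(erp, times_ms, window, polarity=-1):
          """Return (peak amplitude, peak latency in ms) within a time window.

          erp      : 1-D averaged waveform for one channel (microvolts)
          times_ms : matching time axis in milliseconds
          polarity : -1 for negative components (N100/N200), +1 for positive ones
          """
          mask = (times_ms >= window[0]) & (times_ms <= window[1])
          segment = polarity * erp[mask]
          idx = np.argmax(segment)           # most extreme point of the right sign
          return polarity * segment[idx], times_ms[mask][idx]

      # Synthetic ERP: a negative deflection near 100 ms on a 1 kHz time axis.
      times = np.arange(-100, 500)           # ms, one sample per ms
      erp = (-4.0 * np.exp(-((times - 105) ** 2) / (2 * 15.0 ** 2))
             + 0.3 * np.random.randn(times.size))
      print(peak_in_window(erp, times, window=(80, 150), polarity=-1))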

  1. Attentional bias for positive emotional stimuli: A meta-analytic investigation.

    Science.gov (United States)

    Pool, Eva; Brosch, Tobias; Delplanque, Sylvain; Sander, David

    2016-01-01

    Despite an initial focus on negative threatening stimuli, researchers have more recently expanded the investigation of attentional biases toward positive rewarding stimuli. The present meta-analysis systematically compared attentional bias for positive compared with neutral visual stimuli across 243 studies (N = 9,120 healthy participants) that used different types of attentional paradigms and positive stimuli. Factors were tested that, as postulated by several attentional models derived from theories of emotion, might modulate this bias. Overall, results showed a significant, albeit modest (Hedges' g = .258), attentional bias for positive as compared with neutral stimuli. Moderator analyses revealed that the magnitude of this attentional bias varied as a function of arousal and that this bias was significantly larger when the emotional stimulus was relevant to specific concerns (e.g., hunger) of the participants compared with other positive stimuli that were less relevant to the participants' concerns. Moreover, the moderator analyses showed that attentional bias for positive stimuli was larger in paradigms that measure early, rather than late, attentional processing, suggesting that attentional bias for positive stimuli occurs rapidly and involuntarily. Implications for theories of emotion and attention are discussed. (c) 2015 APA, all rights reserved.
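
    Hedges' g, the effect size reported above, is a small-sample-corrected standardized mean difference; the sketch below shows the standard computation for a single two-group comparison with made-up numbers.

      import math

      def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
          """Bias-corrected standardized mean difference (Hedges' g)."""
          pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                                / (n1 + n2 - 2))
          d = (mean1 - mean2) / pooled_sd                   # Cohen's d
          correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)  # small-sample factor J
          return d * correction

      # Hypothetical dwell times (ms) on positive vs. neutral stimuli.
      print(round(hedges_g(540.0, 110.0, 30, 512.0, 105.0, 30), 3))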

  2. Eye structure, activity rhythms and visually-driven behavior are tuned to visual niche in ants

    Directory of Open Access Journals (Sweden)

    Ayse eYilmaz

    2014-06-01

    Full Text Available Insects have evolved physiological adaptations and behavioural strategies that allow them to cope with a broad spectrum of environmental challenges and contribute to their evolutionary success. Visual performance plays a key role in this success. Correlates between life style and eye organization have been reported in various insect species. Yet, if and how visual ecology translates effectively into different visual discrimination and learning capabilities has been less explored. Here we report results from optical and behavioural analyses performed in two sympatric ant species, Formica cunicularia and Camponotus aethiops. We show that the former are diurnal while the latter are cathemeral. Accordingly, F. cunicularia workers present compound eyes with higher resolution, while C. aethiops workers exhibit eyes with lower resolution but higher sensitivity. The discrimination and learning of visual stimuli differs significantly between these species in controlled dual-choice experiments: discrimination learning of small-field visual stimuli is achieved by F. cunicularia but not by C. aethiops, while both species master the discrimination of large-field visual stimuli. Our work thus provides a paradigmatic example about how timing of foraging activities and visual environment match the organization of compound eyes and visually-driven behaviour. This correspondence underlines the relevance of an ecological/evolutionary framework for analyses in behavioural neuroscience.

  3. Beyond arousal and valence: the importance of the biological versus social relevance of emotional stimuli.

    Science.gov (United States)

    Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara

    2012-03-01

    The present study addressed the hypothesis that emotional stimuli relevant to survival or reproduction (biologically emotional stimuli) automatically affect cognitive processing (e.g., attention, memory), while those relevant to social life (socially emotional stimuli) require elaborative processing to modulate attention and memory. Results of our behavioral studies showed that (1) biologically emotional images hold attention more strongly than do socially emotional images, (2) memory for biologically emotional images was enhanced even with limited cognitive resources, but (3) memory for socially emotional images was enhanced only when people had sufficient cognitive resources at encoding. Neither images' subjective arousal nor their valence modulated these patterns. A subsequent functional magnetic resonance imaging study revealed that biologically emotional images induced stronger activity in the visual cortex and greater functional connectivity between the amygdala and visual cortex than did socially emotional images. These results suggest that the interconnection between the amygdala and visual cortex supports enhanced attention allocation to biological stimuli. In contrast, socially emotional images evoked greater activity in the medial prefrontal cortex (MPFC) and yielded stronger functional connectivity between the amygdala and MPFC than did biological images. Thus, it appears that emotional processing of social stimuli involves elaborative processing requiring frontal lobe activity.

  4. Modulation of visually evoked postural responses by contextual visual, haptic and auditory information: a 'virtual reality check'.

    Directory of Open Access Journals (Sweden)

    Georg F Meyer

    Full Text Available Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR.

  5. Visual memory errors in Parkinson's disease patients with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    Visual hallucinations seem to be more prevalent in low light, and hallucinators tend to be more prone to false-positive errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants, and if so, whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task where color and black and white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black and white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, but also that those who present with visual hallucinations appear to be more heavily reliant on bottom-up sensory input and more impaired in spatial ability.

  6. Crosslinked ionic polysaccharides for stimuli-sensitive drug delivery.

    Science.gov (United States)

    Alvarez-Lorenzo, Carmen; Blanco-Fernandez, Barbara; Puga, Ana M; Concheiro, Angel

    2013-08-01

    Polysaccharides are gaining increasing attention as components of stimuli-responsive drug delivery systems, particularly since they can be obtained in a well characterized and reproducible way from the natural sources. Ionic polysaccharides can be readily crosslinked to render hydrogel networks sensitive to a variety of internal and external variables, and thus suitable for switching drug release on-off through diverse mechanisms. Hybrids, composites and grafted polymers can reinforce the responsiveness and widen the range of stimuli to which polysaccharide-based systems can respond. This review analyzes the state of the art of crosslinked ionic polysaccharides as components of delivery systems that can regulate drug release as a function of changes in pH, ion nature and concentration, electric and magnetic field intensity, light wavelength, temperature, redox potential, and certain molecules (enzymes, illness markers, and so on). Examples of specific applications are provided. The information compiled demonstrates that crosslinked networks of ionic polysaccharides are suitable building blocks for developing advanced externally activated and feed-back modulated drug delivery systems. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Negative emotional stimuli reduce contextual cueing but not response times in inefficient search

    OpenAIRE

    Kunar, Melina A.; Watson, Derrick G.; Cole, Louise (Researcher in Psychology); Cox, Angeline

    2014-01-01

    In visual search, previous work has shown that negative stimuli narrow the focus of attention and speed reaction times (RTs). This paper investigates these two effects by first asking whether negative emotional stimuli narrow the focus of attention to reduce the learning of a display context in a contextual cueing task and, second, whether exposure to negative stimuli also reduces RTs in inefficient search tasks. In Experiment 1, participants viewed either negative or neutral images (faces or...

  8. Music Influences Ratings of the Affect of Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Waldie E Hanser

    2013-09-01

    Full Text Available This review provides an overview of recent studies that have examined how music influences the judgment of emotional stimuli, including affective pictures and film clips. The relevant findings are incorporated within a broader theory of music and emotion, and suggestions for future research are offered. Music is important in our daily lives, and one of its primary uses by listeners is the active regulation of one's mood. Despite this widespread use as a regulator of mood and its general pervasiveness in our society, the number of studies investigating whether, and how, music affects mood and emotional behaviour is nevertheless limited. Experiments investigating the effects of music have generally focused on how the emotional valence of background music impacts how affective pictures and/or film clips are evaluated. These studies have demonstrated strong effects of music on the emotional judgment of such stimuli. Most studies have reported that concurrent background music enhances the emotional valence of pictures when music and pictures are emotionally congruent. When music and pictures are emotionally incongruent, on the other hand, ratings of the affect of the pictures increase or decrease depending on the emotional valence of the background music. These results appear to be consistent across studies investigating the effects of (background) music.

  9. Human visual system automatically represents large-scale sequential regularities.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-03-04

    Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show that, we adopted an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523 to the visual domain by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli being of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparing the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in human visual sensory system, revealed that visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity being present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.
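
    The sequence manipulation at the heart of this paradigm can be made concrete. The following Python sketch is purely illustrative (it is not the authors' stimulus code); only the 20% deviant rate and the fixed SSSSD cycle are taken from the abstract, and all names are hypothetical.

        import random

        def fixed_sequence(n_cycles):
            """Fixed condition: a deviant (D) always follows four standards (SSSSDSSSSD...)."""
            return ["S", "S", "S", "S", "D"] * n_cycles

        def randomized_sequence(n_trials, deviant_prob=0.2, rng=None):
            """Randomized condition: 20% luminance deviants at unpredictable positions."""
            rng = rng or random.Random()
            return ["D" if rng.random() < deviant_prob else "S" for _ in range(n_trials)]

        print("".join(fixed_sequence(3)))                              # SSSSDSSSSDSSSSD
        print("".join(randomized_sequence(15, rng=random.Random(1))))  # e.g. an irregular S/D stream

    In the fixed stream the position of every deviant is fully predictable from the preceding four standards, which is exactly the regularity that the reduced visual MMN is taken to reflect.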

  10. Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.

    Science.gov (United States)

    Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H

    2013-07-01

    Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.

  11. Attending to and remembering tactile stimuli: a review of brain imaging data and single-neuron responses.

    Science.gov (United States)

    Burton, H; Sinclair, R J

    2000-11-01

    Clinical and neuroimaging observations of the cortical network implicated in tactile attention have identified foci in parietal somatosensory, posterior parietal, and superior frontal locations. Tasks involving intentional hand-arm movements activate similar or nearby parietal and frontal foci. Visual spatial attention tasks and deliberate visuomotor behavior also activate overlapping posterior parietal and frontal foci. Studies in the visual and somatosensory systems thus support a proposal that attention to the spatial location of an object engages cortical regions responsible for the same coordinate referents used for guiding purposeful motor behavior. Tactile attention also biases processing in the somatosensory cortex through amplification of responses to relevant features of selected stimuli. Psychophysical studies demonstrate retention gradients for tactile stimuli like those reported for visual and auditory stimuli, and suggest analogous neural mechanisms for working memory across modalities. Neuroimaging studies in humans using memory tasks, and anatomic studies in monkeys, support the idea that tactile information relayed from the somatosensory cortex is directed ventrally through the insula to the frontal cortex for short-term retention and to structures of the medial temporal lobe for long-term encoding. At the level of single neurons, tactile short-term memory, like visual and auditory short-term memory, appears as a persistent response during delay intervals between sampled stimuli.

  12. Teaching children with autism spectrum disorder to tact olfactory stimuli.

    Science.gov (United States)

    Dass, Tina K; Kisamore, April N; Vladescu, Jason C; Reeve, Kenneth F; Reeve, Sharon A; Taylor-Santa, Catherine

    2018-05-28

    Research on tact acquisition by children with autism spectrum disorder (ASD) has often focused on teaching participants to tact visual stimuli. It is important to evaluate procedures for teaching tacts of nonvisual stimuli (e.g., olfactory, tactile). The purpose of the current study was to extend the literature on secondary target instruction and tact training by evaluating the effects of a discrete-trial instruction procedure involving (a) echoic prompts, a constant prompt delay, and error correction for primary targets; (b) inclusion of secondary target stimuli in the consequent portion of learning trials; and (c) multiple exemplar training on the acquisition of item tacts of olfactory stimuli, emergence of category tacts of olfactory stimuli, generalization of category tacts, and emergence of category matching, with three children diagnosed with ASD. Results showed that all participants learned the item and category tacts following teaching, participants demonstrated generalization across category tacts, and category matching emerged for all participants. © 2018 Society for the Experimental Analysis of Behavior.

  13. Increasing Valid Profiles in Phallometric Assessment of Sex Offenders with Child Victims: Combining the Strengths of Audio Stimuli and Synthetic Characters.

    Science.gov (United States)

    Marschall-Lévesque, Shawn; Rouleau, Joanne-Lucine; Renaud, Patrice

    2018-02-01

    Penile plethysmography (PPG) is a measure of sexual interests that relies heavily on the stimuli it uses to generate valid results. Ethical considerations surrounding the use of real images in PPG have further limited the content admissible for these stimuli. To palliate this limitation, the current study aimed to combine audio and visual stimuli by incorporating computer-generated characters to create new stimuli capable of accurately classifying sex offenders with child victims, while also increasing the number of valid profiles. Three modalities (audio, visual, and audiovisual) were compared using two groups (15 sex offenders with child victims and 15 non-offenders). Both the new visual and audiovisual stimuli resulted in a 13% increase in the number of valid profiles at 2.5 mm, when compared to the standard audio stimuli. Furthermore, the new audiovisual stimuli generated a 34% increase in penile responses. All three modalities were able to discriminate between the two groups by their responses to the adult and child stimuli. Lastly, sexual interest indices for all three modalities could accurately classify participants in their appropriate groups, as demonstrated by ROC curve analysis (i.e., audio AUC = .81, 95% CI [.60, 1.00]; visual AUC = .84, 95% CI [.66, 1.00], and audiovisual AUC = .83, 95% CI [.63, 1.00]). Results suggest that computer-generated characters allow accurate discrimination of sex offenders with child victims and can be added to already validated stimuli to increase the number of valid profiles. The implications of audiovisual stimuli using computer-generated characters and their possible use in PPG evaluations are also discussed.
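
    For readers unfamiliar with how such classification accuracy is summarized, the sketch below shows one common way to compute an AUC with a bootstrap 95% confidence interval in Python. It is a hypothetical illustration: the group sizes mirror the study (15 per group), but the sexual-interest indices are randomly generated placeholders, and the percentile bootstrap is only one of several ways to obtain such intervals.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        # Hypothetical data: 1 = sex offenders with child victims, 0 = non-offenders,
        # plus one sexual-interest index per participant for a single stimulus modality.
        labels = np.array([1] * 15 + [0] * 15)
        index = np.random.default_rng(0).normal(loc=labels, scale=1.0)  # placeholder scores

        auc = roc_auc_score(labels, index)

        # Percentile bootstrap for an approximate 95% confidence interval.
        rng = np.random.default_rng(1)
        boot = []
        for _ in range(2000):
            resample = rng.integers(0, len(labels), len(labels))
            if len(set(labels[resample])) == 2:  # keep resamples containing both groups
                boot.append(roc_auc_score(labels[resample], index[resample]))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"AUC = {auc:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")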

  14. Rhythmic synchronization tapping to an audio–visual metronome in budgerigars

    Science.gov (United States)

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio–visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans. PMID:22355637

  15. Rhythmic synchronization tapping to an audio-visual metronome in budgerigars.

    Science.gov (United States)

    Hasegawa, Ai; Okanoya, Kazuo; Hasegawa, Toshikazu; Seki, Yoshimasa

    2011-01-01

    In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio-visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.

  16. Visual attention

    NARCIS (Netherlands)

    Evans, K.K.; Horowitz, T.S.; Howe, P.; Pedersini, R.; Reijnen, E.; Pinto, Y.; Wolfe, J.M.

    2011-01-01

    A typical visual scene we encounter in everyday life is complex and filled with a huge amount of perceptual information. The term ‘visual attention’ describes a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon.

  17. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty on its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independent of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.
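
    The separation between objective sensitivity and subjective criterion rests on standard equal-variance signal detection theory. The sketch below is a generic textbook implementation rather than the authors' model-fitting procedure, and the example counts are invented.

        from scipy.stats import norm

        def dprime_criterion(hits, misses, false_alarms, correct_rejections):
            """Equal-variance Gaussian SDT: sensitivity d' and criterion c from
            response counts, with a 0.5 correction so rates never reach 0 or 1."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            d_prime = z_hit - z_fa
            criterion = -0.5 * (z_hit + z_fa)  # more negative = more liberal reporting
            return d_prime, criterion

        # Example: a condition with many "seen" reports shifts c even if d' is unchanged.
        print(dprime_criterion(hits=70, misses=30, false_alarms=20, correct_rejections=80))

    A cue that leaves d' unchanged while making c more liberal corresponds to the pattern described above for the noise stimuli: awareness increases without a gain in discrimination sensitivity.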

  18. Neural Mechanisms of Selective Visual Attention.

    Science.gov (United States)

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  19. The Interplay Among Children's Negative Family Representations, Visual Processing of Negative Emotions, and Externalizing Symptoms.

    Science.gov (United States)

    Davies, Patrick T; Coe, Jesse L; Hentges, Rochelle F; Sturge-Apple, Melissa L; van der Kloet, Erika

    2018-03-01

    This study examined the transactional interplay among children's negative family representations, visual processing of negative emotions, and externalizing symptoms in a sample of 243 preschool children (M age = 4.60 years). Children participated in three annual measurement occasions. Cross-lagged autoregressive models were conducted with multimethod, multi-informant data to identify mediational pathways. Consistent with schema-based top-down models, negative family representations were associated with children's attention to negative faces in an eye-tracking task and with their externalizing symptoms. Children's negative representations of family relationships specifically predicted decreases in their attention to negative emotions, which, in turn, was associated with subsequent increases in their externalizing symptoms. Follow-up analyses indicated that the mediational role of diminished attention to negative emotions was particularly pronounced for angry faces. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  20. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    Science.gov (United States)

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
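
    Two quantities mentioned here can be written down compactly: the coupling index n = %ΔCBF/%ΔCMRO2 and the Davis model linking the BOLD signal change to fractional changes in CBF and CMRO2. The Python sketch below uses commonly cited parameter values for M, α and β, not the values estimated in this study, and is only a schematic of the standard analysis.

        def coupling_ratio(pct_cbf, pct_cmro2):
            """Flow-metabolism coupling index n = %dCBF / %dCMRO2."""
            return pct_cbf / pct_cmro2

        def davis_bold(f, m, M=0.08, alpha=0.38, beta=1.5):
            """Classic Davis model: fractional BOLD change from f = CBF/CBF0 and
            m = CMRO2/CMRO2_0, with illustrative (not study-specific) M, alpha, beta."""
            return M * (1.0 - f ** (alpha - beta) * m ** beta)

        print(coupling_ratio(40.0, 20.0))  # a 40% CBF rise with a 20% CMRO2 rise gives n = 2
        print(davis_bold(f=1.4, m=1.2))    # corresponding predicted fractional BOLD change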

  1. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    Science.gov (United States)

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  2. High-intensity Erotic Visual Stimuli De-activate the Primary Visual Cortex in Women

    NARCIS (Netherlands)

    Huynh, Hieu K.; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Schultz, Willibrord Weijmar; Holstege, Gert

    Introduction. The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the

  3. High-intensity Erotic Visual Stimuli De-activate the Primary Visual Cortex in Women

    NARCIS (Netherlands)

    Huynh, Hieu K.; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-01-01

    Introduction. The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the

  4. Observer's Mood Manipulates Level of Visual Processing: Evidence from Face and Nonface Stimuli

    Directory of Open Access Journals (Sweden)

    Setareh Mokhtari

    2011-05-01

    Full Text Available To investigate the effect of observers' mood on the level of processing of visual stimuli, a happy or sad mood was induced in two groups of participants by asking them to dwell on one of their sad or happy memories while listening to a congruent piece of music. This was followed by a computer-based task that required counting features (arcs or lines) of emotional schematic faces (with either sad or happy expressions) for group 1, and counting the same features of meaningless combined shapes for group 2. Reaction time analysis indicated a significant difference in RTs after listening to the sad music compared with the happy music for group 1; participants in a sad mood were significantly slower when they worked on local levels of schematic faces with sad expressions. Happy moods showed no specific effect on the reaction times of participants working on local details of emotionally expressive faces. Neither sad nor happy moods had a significant effect on reaction times when working on parts of the meaningless shapes. It seems that a sad mood, as a contextual factor, elevates the ability of a sad expression to grab attention and block fast access to the local parts of holistic, meaningful shapes.

  5. Amygdala activity related to enhanced memory for pleasant and aversive stimuli.

    Science.gov (United States)

    Hamann, S B; Ely, T D; Grafton, S T; Kilts, C D

    1999-03-01

    Pleasant or aversive events are better remembered than neutral events. Emotional enhancement of episodic memory has been linked to the amygdala in animal and neuropsychological studies. Using positron emission tomography, we show that bilateral amygdala activity during memory encoding is correlated with enhanced episodic recognition memory for both pleasant and aversive visual stimuli relative to neutral stimuli, and that this relationship is specific to emotional stimuli. Furthermore, data suggest that the amygdala enhances episodic memory in part through modulation of hippocampal activity. The human amygdala seems to modulate the strength of conscious memory for events according to emotional importance, regardless of whether the emotion is pleasant or aversive.

  6. Enhanced early visual processing in response to snake and trypophobic stimuli

    NARCIS (Netherlands)

    J.W. van Strien (Jan); Van der Peijl, M.K. (Manja K.)

    2018-01-01

    Background: Trypophobia refers to aversion to clusters of holes. We investigated whether trypophobic stimuli evoke augmented early posterior negativity (EPN). Methods: Twenty-four participants filled out a trypophobia questionnaire and watched the random rapid serial presentation of 450

  7. Interpretative bias in spider phobia: Perception and information processing of ambiguous schematic stimuli.

    Science.gov (United States)

    Haberkamp, Anke; Schmidt, Filipp

    2015-09-01

    This study investigates the interpretative bias in spider phobia with respect to rapid visuomotor processing. We compared perception, evaluation, and visuomotor processing of ambiguous schematic stimuli between spider-fearful and control participants. Stimuli were produced by gradually morphing schematic flowers into spiders. Participants rated these stimuli related to their perceptual appearance and to their feelings of valence, disgust, and arousal. Also, they responded to the same stimuli within a response priming paradigm that measures rapid motor activation. Spider-fearful individuals showed an interpretative bias (i.e., ambiguous stimuli were perceived as more similar to spiders) and rated spider-like stimuli as more unpleasant, disgusting, and arousing. However, we observed no differences between spider-fearful and control participants in priming effects for ambiguous stimuli. For non-ambiguous stimuli, we observed a similar enhancement for phobic pictures as has been reported previously for natural images. We discuss our findings with respect to the visual representation of morphed stimuli and to perceptual learning processes. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder: A Meta-Analysis.

    Science.gov (United States)

    Kaiser, Deborah; Jacob, Gitta A; Domes, Gregor; Arntz, Arnoud

    2016-01-01

    In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component in disorder pathogenesis and maintenance. Eleven emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs) and 95 clinical controls, and four visual dot-probe task (VDPT) studies with 151 BPD patients or subjects with BPD features and 62 NPs were included. We conducted two separate meta-analyses for AB in BPD. One meta-analysis focused on the EST for generally negative and BPD-specific/personally relevant negative words. The other meta-analysis concentrated on the VDPT for negative and positive facial stimuli. There is evidence for an AB towards generally negative emotional words compared to NPs (standardized mean difference, SMD = 0.311) and to other psychiatric disorders (SMD = 0.374) in the EST studies. Regarding BPD-specific/personally relevant negative words, BPD patients reveal an even stronger AB than NPs (SMD = 0.454). The VDPT studies indicate a tendency towards an AB to positive facial stimuli but not negative stimuli in BPD patients compared to NPs. The findings reflect an AB in BPD towards generally negative and BPD-specific/personally relevant negative words rather than towards facial stimuli, and/or a biased allocation of covert attentional resources to negative emotional stimuli rather than a bias in the focus of visual attention. Further research regarding the role of childhood traumatization and comorbid anxiety disorders may improve the understanding of these underlying processes. © 2016 The Author(s) Published by S. Karger AG, Basel.
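
    The SMD values reported above are standardized mean differences of the kind computed in most meta-analyses. As a reference point, the sketch below implements a pooled-SD Cohen's d with the Hedges small-sample correction; the interference scores in the example are hypothetical and are not data from the included studies.

        import math

        def smd(mean_1, sd_1, n_1, mean_2, sd_2, n_2, hedges=True):
            """Standardized mean difference between two groups: pooled-SD Cohen's d,
            optionally with the Hedges' g small-sample correction."""
            pooled_sd = math.sqrt(((n_1 - 1) * sd_1 ** 2 + (n_2 - 1) * sd_2 ** 2)
                                  / (n_1 + n_2 - 2))
            d = (mean_1 - mean_2) / pooled_sd
            if hedges:
                d *= 1.0 - 3.0 / (4.0 * (n_1 + n_2) - 9.0)
            return d

        # Hypothetical Stroop interference scores (ms): a BPD group vs. nonpatients.
        print(round(smd(55.0, 40.0, 30, 42.0, 38.0, 32), 3))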

  9. [Intermodal timing cues for audio-visual speech recognition].

    Science.gov (United States)

    Hashimoto, Masahiro; Kumashiro, Masaharu

    2004-06-01

    The purpose of this study was to investigate the limitations of lip-reading advantages for Japanese young adults by desynchronizing visual and auditory information in speech. In the experiment, audio-visual speech stimuli were presented under six test conditions: audio-alone, and audio-visually with either 0, 60, 120, 240 or 480 ms of audio delay. The stimuli were video recordings of the face of a female Japanese speaker producing long and short Japanese sentences. The intelligibility of the audio-visual stimuli was measured as a function of audio delay in sixteen untrained young subjects. Speech intelligibility under the audio-delay conditions of less than 120 ms was significantly better than that under the audio-alone condition. The delay of 120 ms, moreover, corresponded to the mean mora duration measured for the audio stimuli. The results implied that audio delays of up to 120 ms would not disrupt the lip-reading advantage, because visual and auditory information in speech seemed to be integrated on a syllabic time scale. Potential applications of this research include noisy workplaces in which a worker must extract relevant speech from competing noise.

  10. Visual fatigue while watching 3D stimuli from different positions

    Directory of Open Access Journals (Sweden)

    J. Antonio Aznar-Casanova

    2017-07-01

    Conclusion: These results support a mixed model, combining a model based on visual angle (related to viewing distance) with another based on oculomotor imbalance (related to visual direction). This mixed model could help to predict the distribution of seats in a cinema, ranging from those that produce greater visual comfort to those that produce more visual discomfort. It could also be a first step towards the pre-diagnosis of binocular vision disorders.

  11. Subliminal and supraliminal processing of reward-related stimuli in anorexia nervosa.

    Science.gov (United States)

    Boehm, I; King, J A; Bernardoni, F; Geisler, D; Seidel, M; Ritschel, F; Goschke, T; Haynes, J-D; Roessner, V; Ehrlich, S

    2018-04-01

    Previous studies have highlighted the role of the brain reward and cognitive control systems in the etiology of anorexia nervosa (AN). In an attempt to disentangle the relative contribution of these systems to the disorder, we used functional magnetic resonance imaging (fMRI) to investigate hemodynamic responses to reward-related stimuli presented both subliminally and supraliminally in acutely underweight AN patients and age-matched healthy controls (HC). fMRI data were collected from a total of 35 AN patients and 35 HC, while they passively viewed subliminally and supraliminally presented streams of food, positive social, and neutral stimuli. Activation patterns of the group × stimulation condition × stimulus type interaction were interrogated to investigate potential group differences in processing different stimulus types under the two stimulation conditions. Moreover, changes in functional connectivity were investigated using generalized psychophysiological interaction analysis. AN patients showed a generally increased response to supraliminally presented stimuli in the inferior frontal junction (IFJ), but no alterations within the reward system. Increased activation during supraliminal stimulation with food stimuli was observed in the AN group in visual regions including superior occipital gyrus and the fusiform gyrus/parahippocampal gyrus. No group difference was found with respect to the subliminal stimulation condition and functional connectivity. Increased IFJ activation in AN during supraliminal stimulation may indicate hyperactive cognitive control, which resonates with clinical presentation of excessive self-control in AN patients. Increased activation to food stimuli in visual regions may be interpreted in light of an attentional food bias in AN.

  12. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  13. Novel mathematical neural models for visual attention

    DEFF Research Database (Denmark)

    Li, Kang

    for the visual attention theories and spiking neuron models for single spike trains. Statistical inference and model selection are performed and various numerical methods are explored. The designed methods also give a framework for neural coding under visual attention theories. We conduct both analysis on real… Visual attention has been extensively studied in psychology, but some fundamental questions remain controversial. We focus on two questions in this study. First, we investigate how a neuron in visual cortex responds to multiple stimuli inside the receptive field, described by either a response… system, supported by simulation study. Finally, we present the decoding of multiple temporal stimuli under these visual attention theories, also in a realistic biophysical situation with simulations.

  14. Infant Attention to Dynamic Audiovisual Stimuli: Look Duration from 3 to 9 Months of Age

    Science.gov (United States)

    Reynolds, Greg D.; Zhang, Dantong; Guy, Maggie W.

    2013-01-01

    The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3-, 6-, and 9-month-old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous…

  15. Visual plasticity : Blindsight bridges anatomy and function in the visual system

    NARCIS (Netherlands)

    Tamietto, M.; Morrone, M.C.

    2016-01-01

    Some people who are blind due to damage to their primary visual cortex, V1, can discriminate stimuli presented within their blind visual field. This residual function has been recently linked to a pathway that bypasses V1, and connects the thalamic lateral geniculate nucleus directly with the

  16. Lateralized visual behavior in bottlenose dolphins (Tursiops truncatus) performing audio-visual tasks: the right visual field advantage.

    Science.gov (United States)

    Delfour, F; Marten, K

    2006-01-10

    Analyzing cerebral asymmetries in various species helps in understanding brain organization. The left and right sides of the brain (lateralization) are involved in different cognitive and sensory functions. This study focuses on dolphin visual lateralization as expressed by spontaneous eye preference when performing a complex cognitive task; we examine lateralization when processing different visual stimuli displayed on an underwater touch-screen (two-dimensional figures, three-dimensional figures and dolphin/human video sequences). Three female bottlenose dolphins (Tursiops truncatus) were submitted to a 2-, 3-, or 4-choice visual/auditory discrimination problem, without any food reward: the subjects had to correctly match visual and acoustic stimuli together. In order to visualize and to touch the underwater target, the dolphins had to come close to the touch-screen and to position themselves using monocular vision (left or right eye) and/or binocular naso-ventral vision. The results showed an ability to associate simple visual forms and auditory information using an underwater touch-screen. Moreover, the subjects showed a spontaneous tendency to use monocular vision. Contrary to previous findings, our results did not clearly demonstrate right eye preference in spontaneous choice. However, the individuals' scores of correct answers were correlated with right eye vision, demonstrating the advantage of this visual field in visual information processing and suggesting a left hemispheric dominance. We also demonstrated that the nature of the presented visual stimulus does not seem to have any influence on the animals' monocular vision choice.

  17. Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Michael L Waterston

    Full Text Available BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. METHODOLOGY/PRINCIPAL FINDINGS: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. CONCLUSIONS/SIGNIFICANCE: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.

  18. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    Full Text Available We have recently demonstrated that alternating left-right sound sources induce motion perception to static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in the situation where auditory positional information would have little influence on the perceived position of visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  19. Attentional capture by social stimuli in young infants

    Directory of Open Access Journals (Sweden)

    Maxie Gluckman

    2013-08-01

    Full Text Available We investigated the possibility that a range of social stimuli capture the attention of 6-month-old infants when in competition with other non-face objects. Infants viewed a series of six-item arrays in which one target item was a face, body part, or animal as their eye movements were recorded. Stimulus arrays were also processed for relative salience of each item in terms of color, luminance, and amount of contour. Targets were rarely the most visually salient items in the arrays, yet infants’ first looks toward all three target types were above chance, and dwell times for targets exceeded other stimulus types. Girls looked longer at faces than did boys, but there were no sex differences for other stimuli. These results are interpreted in a context of learning to discriminate between different classes of animate stimuli, perhaps in line with affordances for social interaction, and origins of sex differences in social attention.

  20. A case of epilepsy induced by eating or by visual stimuli of food made of minced meat.

    Science.gov (United States)

    Mimura, Naoya; Inoue, Takeshi; Shimotake, Akihiro; Matsumoto, Riki; Ikeda, Akio; Takahashi, Ryosuke

    2017-08-31

    We report a 34-year-old woman with eating epilepsy induced not only by eating but also by seeing foods made of minced meat. In her early 20s, she started having simple partial seizures (SPS), experienced as flashbacks and epigastric discomfort, induced by particular foods. When she was 33 years old, she developed SPS followed by a secondarily generalized tonic-clonic seizure (sGTCS) provoked by eating a hot dog and, 6 months later, by merely watching a video of dumplings. We performed video electroencephalogram (EEG) monitoring while she watched the video of soup dumplings that had most likely caused the sGTCS. Ictal EEG showed rhythmic theta activity in the left frontal to mid-temporal area, followed by a generalized seizure pattern. In this patient, seizures were provoked not only by eating particular foods but also by seeing them. This suggests a form of epilepsy involving visual stimuli.

  1. Visual discomfort and depth-of-field

    NARCIS (Netherlands)

    O'Hare, L.; Zhang, T.; Nefs, H.T.; Hibbard, P.B.

    2013-01-01

    Visual discomfort has been reported for certain visual stimuli and under particular viewing conditions, such as stereoscopic viewing. In stereoscopic viewing, visual discomfort can be caused by a conflict between accommodation and convergence cues that may specify different distances in depth.

  2. The threshold for conscious report: Signal loss and response bias in visual and frontal cortex.

    Science.gov (United States)

    van Vugt, Bram; Dagnino, Bruno; Vartak, Devavrat; Safaai, Houman; Panzeri, Stefano; Dehaene, Stanislas; Roelfsema, Pieter R

    2018-05-04

    Why are some visual stimuli consciously detected, whereas others remain subliminal? We investigated the fate of weak visual stimuli in the visual and frontal cortex of awake monkeys trained to report stimulus presence. Reported stimuli were associated with strong sustained activity in the frontal cortex, and frontal activity was weaker and quickly decayed for unreported stimuli. Information about weak stimuli could be lost at successive stages en route from the visual to the frontal cortex, and these propagation failures were confirmed through microstimulation of area V1. Fluctuations in response bias and sensitivity during perception of identical stimuli were traced back to prestimulus brain-state markers. A model in which stimuli become consciously reportable when they elicit a nonlinear ignition process in higher cortical areas explained our results. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  3. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    Science.gov (United States)

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  4. Reversal Negativity and Bistable Stimuli: Attention, Awareness, or Something Else?

    Science.gov (United States)

    Intaite, Monika; Koivisto, Mika; Ruksenas, Osvaldas; Revonsuo, Antti

    2010-01-01

    Ambiguous (or bistable) figures are visual stimuli that have two mutually exclusive perceptual interpretations that spontaneously alternate with each other. Perceptual reversals, as compared with non-reversals, typically elicit a negative difference called reversal negativity (RN), peaking around 250 ms from stimulus onset. The cognitive…

  5. Audiovisual semantic interactions between linguistic and nonlinguistic stimuli: The time-courses and categorical specificity.

    Science.gov (United States)

    Chen, Yi-Chuan; Spence, Charles

    2018-04-30

    We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Emotional conditioning to masked stimuli and modulation of visuospatial attention.

    Science.gov (United States)

    Beaver, John D; Mogg, Karin; Bradley, Brendan P

    2005-03-01

    Two studies investigated the effects of conditioning to masked stimuli on visuospatial attention. During the conditioning phase, masked snakes and spiders were paired with a burst of white noise, or paired with an innocuous tone, in the conditioned stimulus (CS)+ and CS- conditions, respectively. Attentional allocation to the CSs was then assessed with a visual probe task, in which the CSs were presented unmasked (Experiment 1) or both unmasked and masked (Experiment 2), together with fear-irrelevant control stimuli (flowers and mushrooms). In Experiment 1, participants preferentially allocated attention to CS+ relative to control stimuli. Experiment 2 suggested that this attentional bias depended on the perceived aversiveness of the unconditioned stimulus and did not require conscious recognition of the CSs during both acquisition and expression. Copyright 2005 APA, all rights reserved.

  7. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    Science.gov (United States)

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual

  8. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study

    Directory of Open Access Journals (Sweden)

    Nocchi Federico

    2012-07-01

    Full Text Available Abstract Background The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. Methods A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. Results The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. Conclusions This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain’s ability to assimilate abstract object movements with human motor gestures. In both conditions

  9. Exposure to Virtual Social Stimuli Modulates Subjective Pain Reports

    Directory of Open Access Journals (Sweden)

    Jacob M Vigil

    2014-01-01

    Full Text Available BACKGROUND: Contextual factors, including the gender of researchers, influence experimental and patient pain reports. It is currently not known how social stimuli influence pain percepts, nor which types of sensory modalities of communication, such as auditory, visual or olfactory cues associated with person perception and gender processing, produce these effects.

  10. Peripheral visual response time and visual display layout

    Science.gov (United States)

    Haines, R. F.

    1974-01-01

    Experiments were performed on a group of 42 subjects in a study of their peripheral visual response time to visual signals under positive acceleration, during prolonged bedrest, at passive 70 deg head-up body lift, under exposures to high air temperatures and high luminance levels, and under normal stress-free laboratory conditions. Diagrams are plotted for mean response times to white, red, yellow, green, and blue stimuli under different conditions.

  11. The Effects of Visual Cues and Learners' Field Dependence in Multiple External Representations Environment for Novice Program Comprehension

    Science.gov (United States)

    Wei, Liew Tze; Sazilah, Salam

    2012-01-01

    This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…

  12. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease.

    Science.gov (United States)

    Ren, Yanna; Suzuki, Keisuke; Yang, Weiping; Ren, Yanling; Wu, Fengxia; Yang, Jiajia; Takahashi, Satoshi; Ejima, Yoshimichi; Wu, Jinglong; Hirata, Koichi

    2018-01-01

    The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and the race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC, that the response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD, and that audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that the abnormal audiovisual integration might be a potential early manifestation of PD.
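
    The race model analysis mentioned in this record can be illustrated with a short sketch. The code below is not from the study; it only shows, under assumed reaction-time arrays, how Miller's race model inequality is commonly evaluated by comparing the audiovisual RT distribution against the bound given by the sum of the unimodal distributions.

```python
# Minimal sketch of a race-model (Miller) inequality test on reaction times,
# assuming three arrays of RTs in milliseconds. Variable names and the
# percentile grid are illustrative, not taken from the study.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of RTs evaluated at times t."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_audio, rt_visual, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    # Evaluate the CDFs on a common time grid spanning all conditions.
    t = np.quantile(np.concatenate([rt_audio, rt_visual, rt_av]), quantiles)
    f_a, f_v, f_av = ecdf(rt_audio, t), ecdf(rt_visual, t), ecdf(rt_av, t)
    race_bound = np.minimum(f_a + f_v, 1.0)   # Miller's upper bound
    violation = f_av - race_bound             # > 0 indicates multisensory facilitation
    return t, violation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.normal(420, 60, 200)    # simulated auditory RTs
    v = rng.normal(400, 60, 200)    # simulated visual RTs
    av = rng.normal(350, 55, 200)   # simulated audiovisual RTs
    t, viol = race_model_violation(a, v, av)
    print("max violation:", viol.max())  # positive -> evidence of integration
```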

  13. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Science.gov (United States)

    Gannon, Matthew A; Knapp, Ashley A; Adams, Thomas G; Long, Stephanie M; Parks, Nathan A

    2016-01-01

    Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.
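
    As a rough illustration of how an ERP component such as the N1 is typically quantified, the following sketch computes a mean amplitude in a fixed post-stimulus window from baseline-corrected epochs. The window bounds, sampling rate, and variable names are assumptions for illustration and are not taken from this study.

```python
# Minimal sketch of mean-amplitude ERP component scoring, assuming epoched,
# baseline-corrected data with stimulus onset at a known sample index.
import numpy as np

def mean_component_amplitude(epochs, fs, t_start=0.14, t_end=0.20, t0=0.1):
    """
    epochs : (n_trials, n_samples) array with the stimulus onset located at
             sample index int(t0 * fs).
    Returns the mean amplitude in the [t_start, t_end] window after onset,
    averaged across trials and samples.
    """
    onset = int(t0 * fs)
    i0 = onset + int(t_start * fs)
    i1 = onset + int(t_end * fs)
    return epochs[:, i0:i1].mean()

# Usage: compare the value across attentional-load or task-size conditions,
# e.g. amp_small = mean_component_amplitude(epochs_small, fs=500).
```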

  14. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  15. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    Science.gov (United States)

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually-induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segment had a fixed duration of 3 s, whereas the resting segment was chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited, and each was requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flicker sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
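
    The onset-locked averaging step described above can be sketched as follows. The sketch assumes a single continuous NIRS channel and lists of flicker-onset sample indices per stimulus; the real system used multi-channel responses, so the names and the peak-to-peak scoring rule here are illustrative only.

```python
# Hedged sketch of onset-locked averaging to identify the gazed stimulus,
# assuming `signal` is one NIRS channel sampled at `fs` Hz and
# `onsets_per_stimulus` holds flicker-onset sample indices for each stimulus.
import numpy as np

def onset_locked_average(signal, onsets, fs, window_s=15.0):
    win = int(window_s * fs)
    epochs = [signal[o:o + win] for o in onsets if o + win <= len(signal)]
    return np.mean(epochs, axis=0)

def detect_gazed_target(signal, onsets_per_stimulus, fs):
    # The gazed stimulus should yield the largest onset-locked hemodynamic
    # deflection once unrelated activity averages out (onsets of different
    # stimuli are mutually independent by design).
    scores = []
    for onsets in onsets_per_stimulus:
        avg = onset_locked_average(signal, onsets, fs)
        scores.append(np.ptp(avg))   # peak-to-peak amplitude as a simple score
    return int(np.argmax(scores)), scores
```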

  16. [The P300-based brain-computer interface: presentation of the complex "flash + movement" stimuli].

    Science.gov (United States)

    Ganin, I P; Kaplan, A Ia

    2014-01-01

    The P300-based brain-computer interface requires the detection of the P300 wave of brain event-related potentials. Most of its users learn BCI control within several minutes, and after short classifier training they can type text on the computer screen or assemble an image from separate fragments in simple BCI-based video games. Nevertheless, insufficient attractiveness for users and a conservative stimulus organization may restrict the integration of this BCI into the control of real information processes. At the same time, the initial movement of an object (motion-onset stimuli) may be an independent factor that induces the P300 wave. In the current work we tested the hypothesis that complex "flash + movement" stimuli, together with a drastic and compact stimulus arrangement on the computer screen, may be much more attractive for the user operating a P300 BCI. In a study of 20 subjects we showed the effectiveness of our interface. Both accuracy and P300 amplitude were higher for flashing stimuli and complex "flash + movement" stimuli than for motion-onset stimuli. N200 amplitude was maximal for flashing stimuli, while for "flash + movement" stimuli and motion-onset stimuli it was only about half as large. Similar BCIs with complex stimuli may be embedded into compact control systems that require a high level of user attention under negative external effects obstructing BCI control.

  17. Visual Aversive Learning Compromises Sensory Discrimination.

    Science.gov (United States)

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural

  18. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  19. Altered processing of visual emotional stimuli in posttraumatic stress disorder: an event-related potential study.

    Science.gov (United States)

    Saar-Ashkenazy, Rotem; Shalev, Hadar; Kanthak, Magdalena K; Guez, Jonathan; Friedman, Alon; Cohen, Jonathan E

    2015-08-30

    Patients with posttraumatic stress disorder (PTSD) display abnormal emotional processing and bias towards emotional content. Most neurophysiological studies in PTSD found higher amplitudes of event-related potentials (ERPs) in response to trauma-related visual content. Here we aimed to characterize brain electrical activity in PTSD subjects in response to non-trauma-related emotion-laden pictures (positive, neutral and negative). A combined behavioral-ERP study was conducted in 14 severe PTSD patients and 14 controls. Response time in PTSD patients was slower compared with that in controls, irrespective of emotional valence. In both PTSD and controls, response time to negative pictures was slower compared with that to neutral or positive pictures. Upon ranking, both control and PTSD subjects similarly discriminated between pictures with different emotional valences. ERP analysis revealed three distinctive components (at ~300, ~600 and ~1000 ms post-stimulus onset) for emotional valence in control subjects. In contrast, PTSD patients displayed a similar brain response across all emotional categories, resembling the response of controls to negative stimuli. We interpret these findings as a brain-circuit response tendency towards negative overgeneralization in PTSD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Timing the impact of literacy on visual processing

    Science.gov (United States)

    Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas

    2014-01-01

    Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460

  1. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility of brain patterns and the between-class discriminability of brain patterns, and facilitated neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
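
    The two quantities discussed in this record can be illustrated with a simple sketch: a within-category reproducibility index, approximated here as the mean pairwise correlation of single-trial patterns, and a between-category decoding accuracy from a leave-one-out nearest-centroid classifier. The study's exact definitions and classifier may differ; this only shows the logic.

```python
# Hedged sketch of a within-class "reproducibility index" and a simple
# between-class decoding accuracy for fMRI patterns. Definitions here are
# illustrative stand-ins, not the paper's exact formulas.
import numpy as np

def reproducibility_index(patterns):
    """patterns: (n_trials, n_voxels) array from one semantic category."""
    c = np.corrcoef(patterns)                 # trial-by-trial correlation matrix
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()                       # mean pairwise correlation

def nearest_centroid_loo_accuracy(x_a, x_b):
    """Leave-one-out decoding of category A vs. category B patterns."""
    x = np.vstack([x_a, x_b])
    y = np.array([0] * len(x_a) + [1] * len(x_b))
    correct = 0
    for i in range(len(x)):
        mask = np.arange(len(x)) != i
        c0 = x[mask & (y == 0)].mean(axis=0)
        c1 = x[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(x[i] - c0) < np.linalg.norm(x[i] - c1) else 1
        correct += int(pred == y[i])
    return correct / len(x)
```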

  2. Action video game players' visual search advantage extends to biologically relevant stimuli.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-07-01

    Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Magnetic stimulation of visual cortex impairs perceptual learning.

    Science.gov (United States)

    Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio

    2016-12-01

    The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differently modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we employed transcranial magnetic stimulation over both visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to the pIPS and sham control conditions. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in the control of perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Visual attention modulates brain activation to angry voices.

    Science.gov (United States)

    Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas

    2011-06-29

    In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.

  5. Radial frequency stimuli and sine-wave gratings seem to be processed by distinct contrast brain mechanisms

    Directory of Open Access Journals (Sweden)

    M.L.B. Simas

    2005-03-01

    Full Text Available An assumption commonly made in the study of visual perception is that the lower the contrast threshold for a given stimulus, the more sensitive and selective will be the mechanism that processes it. On the basis of this consideration, we investigated contrast thresholds for two classes of stimuli: sine-wave gratings and radial frequency stimuli (i.e., j0 targets or stimuli modulated by spherical Bessel functions). Employing a suprathreshold summation method, we measured the selectivity of spatial and radial frequency filters using either sine-wave gratings or j0 target contrast profiles at either 1 or 4 cycles per degree of visual angle (cpd), as the test frequencies. Thus, in a forced-choice trial, observers chose between a background spatial (or radial) frequency alone and the given background stimulus plus the test frequency (1 or 4 cpd sine-wave grating or radial frequency). Contrary to our expectations, the results showed elevated thresholds (i.e., inhibition) for sine-wave gratings and decreased thresholds (i.e., summation) for radial frequencies when background and test frequencies were identical. This was true for both 1- and 4-cpd test frequencies. This finding suggests that sine-wave gratings and radial frequency stimuli are processed by different quasi-linear systems, one working at low luminance and contrast level (sine-wave gratings) and the other at high luminance and contrast levels (radial frequency stimuli). We think that this interpretation is consistent with distinct foveal-only and foveal-parafoveal mechanisms involving striate and/or other higher visual areas (i.e., V2 and V4).
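
    For readers unfamiliar with forced-choice threshold measurement, the sketch below simulates a generic two-down/one-up staircase that converges near the 70.7%-correct contrast level. The study itself used a suprathreshold summation procedure, so the psychometric function and staircase parameters here are purely illustrative.

```python
# Hedged illustration of 2AFC contrast-threshold tracking with a
# two-down/one-up staircase and a simulated observer. All parameters are
# assumptions for illustration, not the study's procedure.
import numpy as np

def simulate_observer(contrast, threshold=0.02, slope=3.0, rng=None):
    """Probability correct follows a Weibull-like psychometric function."""
    rng = rng or np.random.default_rng()
    p = 1.0 - 0.5 * np.exp(-(contrast / threshold) ** slope)
    return rng.random() < p

def staircase_threshold(start=0.1, step=0.7071, n_reversals=10, seed=1):
    rng = np.random.default_rng(seed)
    contrast, correct_in_row, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulate_observer(contrast, rng=rng):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> harder
                correct_in_row = 0
                if direction == +1:
                    reversals.append(contrast)
                direction = -1
                contrast *= step
        else:
            correct_in_row = 0
            if direction == -1:
                reversals.append(contrast)
            direction = +1
            contrast /= step                   # one error -> easier
    # Geometric mean of the later reversal points estimates the threshold.
    return np.exp(np.mean(np.log(reversals[2:])))

if __name__ == "__main__":
    print("estimated contrast threshold:", staircase_threshold())
```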

  6. Understanding Consumers' In-store Visual Perception

    DEFF Research Database (Denmark)

    Clement, Jesper; Kristensen, Tore; Grønhaug, Kjell

    2013-01-01

    It is widely accepted that the human brain has limited capacity for perceptual stimuli and consumers' visual attention, when searching for a particular product or brand in a grocery store, should then be limited by the boundaries of their own perceptual capacity. In this exploratory study, we examine the relationship between abundant in-store stimuli and limited human perceptual capacity. Specifically, we test the influence of package design features on visual attention. Data was collected through two eye-tracking experiments, one in a grocery store using wireless eye-tracking equipment, and another in a lab setting. Findings show that consumers have fragmented visual attention during grocery shopping, and that their visual attention is simultaneously influenced and disrupted by the shelf display. Physical design features such as shape and contrast dominate the initial phase of searching...

  7. Cortical oscillations modulated by congruent and incongruent audiovisual stimuli.

    Science.gov (United States)

    Herdman, A T; Fujioka, T; Chau, W; Ross, B; Pantev, C; Picton, T W

    2004-11-30

    Congruent or incongruent grapheme-phoneme stimuli are easily perceived as one or two linguistic objects. The main objective of this study was to investigate the changes in cortical oscillations that reflect the processing of congruent and incongruent audiovisual stimuli. Graphemes were Japanese Hiragana characters for four different vowels (/a/, /o/, /u/, and /i/). They were presented simultaneously with their corresponding phonemes (congruent) or non-corresponding phonemes (incongruent) to native-speaking Japanese participants. Participants' reaction times to the congruent audiovisual stimuli were significantly faster by 57 ms as compared to reaction times to incongruent stimuli. We recorded the brain responses for each condition using a whole-head magnetoencephalograph (MEG). A novel approach to analysing MEG data, called synthetic aperture magnetometry (SAM), was used to identify event-related changes in cortical oscillations involved in audiovisual processing. The SAM contrast between congruent and incongruent responses revealed greater event-related desynchronization (8-16 Hz) bilaterally in the occipital lobes and greater event-related synchronization (4-8 Hz) in the left transverse temporal gyrus. Results from this study further support the concept of interactions between the auditory and visual sensory cortices in multi-sensory processing of audiovisual objects.

  8. Non-target adjacent stimuli classification improves performance of classical ERP-based brain computer interface

    Science.gov (United States)

    Ceballos, G. A.; Hernández, L. F.

    2015-04-01

    Objective. The classical ERP-based speller, or P300 Speller, is one of the most commonly used paradigms in the field of Brain Computer Interfaces (BCI). Several alterations to the visual stimulus presentation system have been developed to avoid unfavorable effects elicited by adjacent stimuli. However, there has been little, if any, regard for the useful information about the spatial location of target symbols that is contained in responses to adjacent stimuli. This paper aims to demonstrate that combining the classification of non-target adjacent stimuli with standard classification (target versus non-target) significantly improves classical ERP-based speller efficiency. Approach. Four SWLDA classifiers were trained and combined with the standard classifier: the lower row, upper row, right column and left column classifiers. This new feature extraction procedure and the classification method were carried out on three open databases: the UAM P300 database (Universidad Autonoma Metropolitana, Mexico), BCI competition II (dataset IIb) and BCI competition III (dataset II). Main results. The inclusion of the classification of non-target adjacent stimuli improves target classification in the classical row/column paradigm. A gain in mean single-trial classification of 9.6% and an overall improvement of 25% in simulated spelling speed were achieved. Significance. We have provided further evidence that the ERPs produced by adjacent stimuli present discriminable features, which could provide additional information about the spatial location of intended symbols. This work encourages the search for information in peripheral stimulation responses to improve the performance of emerging visual ERP-based spellers.
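
    A hedged sketch of the score-combination idea follows: evidence for each candidate row of a speller matrix is accumulated from the standard target/non-target classifier plus classifiers responding to flashes adjacent to the target; column evidence would be combined the same way. The weights, classifier outputs, and neighbour semantics below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative combination of a standard P300 classifier with classifiers
# trained on responses to flashes adjacent to the target row.
import numpy as np

def combine_row_evidence(std_scores, above_scores, below_scores, w=(1.0, 0.5, 0.5)):
    """
    std_scores[r]   : standard classifier score when row r flashes (target present?)
    above_scores[r] : score that the target lies one row above the flashed row r
    below_scores[r] : score that the target lies one row below the flashed row r
    Returns combined evidence for each candidate target row.
    """
    n = len(std_scores)
    evidence = np.zeros(n)
    for r in range(n):
        e = w[0] * std_scores[r]
        if r + 1 < n:          # candidate row r sits above flashed row r + 1
            e += w[1] * above_scores[r + 1]
        if r - 1 >= 0:         # candidate row r sits below flashed row r - 1
            e += w[2] * below_scores[r - 1]
        evidence[r] = e
    return evidence

# Usage: predicted_row = int(np.argmax(combine_row_evidence(s_std, s_up, s_down)))
```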

  9. Secondary hyperalgesia to heat stimuli after burn injury in man

    DEFF Research Database (Denmark)

    Pedersen, J L; Kehlet, H

    1998-01-01

    The aim of the study was to examine the presence of hyperalgesia to heat stimuli within the zone of secondary hyperalgesia to punctate mechanical stimuli. A burn was produced on the medial part of the non-dominant crus in 15 healthy volunteers with a 50 x 25 mm thermode (47 degrees C, 7 min), and assessments were made 70 min and 40 min before, and 0, 1, and 2 h after the burn injury. Hyperalgesia to mechanical and heat stimuli were examined by von Frey hairs and contact thermodes (3.75 and 12.5 cm2), and pain responses were rated with a visual analog scale (0-100). The area of secondary hyperalgesia to punctate stimuli was assessed with a rigid von Frey hair (462 mN). The heat pain responses to 45 degrees C in 5 s (3.75 cm2) were tested in the area just outside the burn, where the subjects developed secondary hyperalgesia, and on the lateral crus where no subject developed secondary hyperalgesia (control

  10. Stress improves selective attention towards emotionally neutral left ear stimuli.

    Science.gov (United States)

    Hoskin, Robert; Hunter, M D; Woodruff, P W R

    2014-09-01

    Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Neural Basis of Visual Attentional Orienting in Childhood Autism Spectrum Disorders.

    Science.gov (United States)

    Murphy, Eric R; Norr, Megan; Strang, John F; Kenworthy, Lauren; Gaillard, William D; Vaidya, Chandan J

    2017-01-01

    We examined spontaneous attention orienting to visual salience in stimuli without social significance using a modified Dot-Probe task during functional magnetic resonance imaging in high-functioning preadolescent children with Autism Spectrum Disorder (ASD) and age- and IQ-matched control children. While the magnitude of attentional bias (faster response to probes in the location of solid color patch) to visually salient stimuli was similar in the groups, activation differences in frontal and temporoparietal regions suggested hyper-sensitivity to visual salience or to sameness in ASD children. Further, activation in a subset of those regions was associated with symptoms of restricted and repetitive behavior. Thus, atypicalities in response to visual properties of stimuli may drive attentional orienting problems associated with ASD.

  12. Visual Stimuli Induce Waves of Electrical Activity in Turtle Cortex

    Science.gov (United States)

    Prechtl, J. C.; Cohen, L. B.; Pesaran, B.; Mitra, P. P.; Kleinfeld, D.

    1997-07-01

    The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations […] scale differences in neuronal timing are present and persistent during visual processing.
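
    The frequency-domain analysis described here can be illustrated with a minimal band-isolation sketch for a single recording channel, using a zero-phase band-pass filter and Welch band power. Band edges, filter order, and function names are illustrative assumptions rather than the authors' pipeline.

```python
# Hedged sketch of isolating band-limited activity from one optical channel
# sampled at fs Hz; parameters are illustrative only.
import numpy as np
from scipy import signal

def bandpass(x, fs, low_hz, high_hz, order=4):
    # Zero-phase band-pass filtering with second-order sections.
    sos = signal.butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, x)

def band_power(x, fs, low_hz, high_hz):
    # Integrated Welch power spectral density within the chosen band.
    f, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 1024))
    band = (f >= low_hz) & (f <= high_hz)
    return np.trapz(pxx[band], f[band])
```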

  13. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Response to various periods of mechanical stimuli in Physarum plasmodium

    International Nuclear Information System (INIS)

    Umedachi, Takuya; Ito, Kentaro; Kobayashi, Ryo; Ishiguro, Akio; Nakagaki, Toshiyuki

    2017-01-01

    Response to mechanical stimuli is a fundamental and critical ability for living cells to survive in hazardous conditions or to form adaptive and functional structures against force(s) from the environment. Although this ability has been extensively studied by molecular biology strategies, it is also important to investigate the ability from the viewpoint of biological rhythm phenomena so as to reveal the mechanisms that underlie these phenomena. Here, we use the plasmodium of the true slime mold Physarum polycephalum as the experimental system for investigating this ability. The plasmodium was repetitively stretched for various periods during which its locomotion speed was observed. Since the plasmodium has inherent oscillation cycles of protoplasmic streaming and thickness variation, how the plasmodium responds to various periods of external stretching stimuli can shed light on the other biological rhythm phenomena. The experimental results show that the plasmodium exhibits response to periodic mechanical stimulation and changes its locomotion speed depending on the period of the stretching stimuli. (paper)

  15. Working memory biasing of visual perception without awareness.

    Science.gov (United States)

    Pan, Yi; Lin, Bingyuan; Zhao, Yajun; Soto, David

    2014-10-01

    Previous research has demonstrated that the contents of visual working memory can bias visual processing in favor of matching stimuli in the scene. However, the extent to which such top-down, memory-driven biasing of visual perception is contingent on conscious awareness remains unknown. Here we showed that conscious awareness of critical visual cues is dispensable for working memory to bias perceptual selection mechanisms. Using the procedure of continuous flash suppression, we demonstrated that "unseen" visual stimuli during interocular suppression can gain preferential access to awareness if they match the contents of visual working memory. Strikingly, the very same effect occurred even when the visual cue to be held in memory was rendered nonconscious by masking. Control experiments ruled out the alternative accounts of repetition priming and different detection criteria. We conclude that working memory biases of visual perception can operate in the absence of conscious awareness.

  16. Computer-animated stimuli to measure motion sensitivity: constraints on signal design in the Jacky dragon.

    Science.gov (United States)

    Woo, Kevin L; Rieucau, Guillaume; Burke, Darren

    2017-02-01

    Identifying perceptual thresholds is critical for understanding the mechanisms that underlie signal evolution. Using computer-animated stimuli, we examined visual speed sensitivity in the Jacky dragon Amphibolurus muricatus, a species that makes extensive use of rapid motor patterns in social communication. First, focal lizards were tested in discrimination trials using random-dot kinematograms displaying combinations of speed, coherence, and direction. Second, we measured subject lizards' ability to predict the appearance of a secondary reinforcer (1 of 3 different computer-generated animations of invertebrates: cricket, spider, and mite) based on the direction of movement of a field of drifting dots by following a set of behavioural responses (e.g., orienting response, latency to respond) to our virtual stimuli. We found an effect of both speed and coherence, as well as an interaction between these 2 factors on the perception of moving stimuli. Overall, our results showed that Jacky dragons have acute sensitivity to high speeds. We then employed an optic flow analysis to match the performance to ecologically relevant motion. Our results suggest that the Jacky dragon visual system may have been shaped to detect fast motion. This pre-existing sensitivity may have constrained the evolution of conspecific displays. In contrast, Jacky dragons may have difficulty in detecting the movement of ambush predators, such as snakes and of some invertebrate prey. Our study also demonstrates the potential of the computer-animated stimuli technique for conducting nonintrusive tests to explore motion range and sensitivity in a visually mediated species.
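
    The random-dot kinematograms used as stimuli can be sketched in a few lines: on each frame a proportion of dots (the coherence) steps in the signal direction at the given speed while the rest move in random directions. All parameters below are illustrative and not those used in the study.

```python
# Hedged sketch of generating random-dot kinematogram frames with a given
# speed (pixels per frame) and motion coherence.
import numpy as np

def rdk_frames(n_dots=100, n_frames=60, coherence=0.5, speed=2.0,
               direction_deg=0.0, field=200.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, field, size=(n_dots, 2))
    coherent = rng.random(n_dots) < coherence        # dots carrying the signal direction
    theta = np.deg2rad(direction_deg)
    frames = []
    for _ in range(n_frames):
        angles = np.where(coherent, theta, rng.uniform(0, 2 * np.pi, n_dots))
        step = np.stack([np.cos(angles), np.sin(angles)], axis=1) * speed
        pos = (pos + step) % field                   # wrap dots around the field
        frames.append(pos.copy())
    return frames
```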

  17. A working memory bias for alcohol-related stimuli depends on drinking score.

    Science.gov (United States)

    Kessler, Klaus; Pajak, Katarzyna Malgorzata; Harkin, Ben; Jones, Barry

    2013-03-01

    We tested 44 participants with respect to their working memory (WM) performance on alcohol-related versus neutral visual stimuli. Previously an alcohol attentional bias (AAB) had been reported using these stimuli, where the attention of frequent drinkers was automatically drawn toward alcohol-related items (e.g., beer bottle). The present study set out to provide evidence for an alcohol memory bias (AMB) that would persist over longer time-scales than the AAB. The WM task we used required memorizing 4 stimuli in their correct locations and a visual interference task was administered during a 4-sec delay interval. A subsequent probe required participants to indicate whether a stimulus was shown in the correct or incorrect location. For each participant we calculated a drinking score based on 3 items derived from the Alcohol Use Questionnaire, and we observed that higher scorers better remembered alcohol-related images compared with lower scorers, particularly when these were presented in their correct locations upon recall. This provides first evidence for an AMB. It is important to highlight that this effect persisted over a 4-sec delay period including a visual interference task that erased iconic memories and diverted attention away from the encoded items, thus the AMB cannot be reduced to the previously reported AAB. Our finding calls for further investigation of alcohol-related cognitive biases in WM, and we propose a preliminary model that may guide future research. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  18. Sensory modality specificity of neural activity related to memory in visual cortex.

    Science.gov (United States)

    Gibson, J R; Maunsell, J H

    1997-09-01

    Previous studies have shown that when monkeys perform a delayed match-to-sample (DMS) task, some neurons in inferotemporal visual cortex are activated selectively during the delay period when the animal must remember particular visual stimuli. This selective delay activity may be involved in short-term memory. It does not depend on visual stimulation: both auditory and tactile stimuli can trigger selective delay activity in inferotemporal cortex when animals expect to respond to visual stimuli in a DMS task. We have examined the overall modality specificity of delay period activity using a variety of auditory/visual cross-modal and unimodal DMS tasks. The cross-modal DMS tasks involved making specific long-term memory associations between visual and auditory stimuli, whereas the unimodal DMS tasks were standard identity matching tasks. Delay activity existed in auditory/visual cross-modal DMS tasks whether the animal anticipated responding to visual or auditory stimuli. No evidence of selective delay period activation was seen in a purely auditory DMS task. Delay-selective cells were relatively common in one animal, where they constituted up to 53% of neurons tested with a given task. This was only the case for up to 9% of cells in a second animal. In the first animal, a specific long-term memory representation for learned cross-modal associations was observed in delay activity, indicating that this type of representation need not be purely visual. Furthermore, in this same animal, delay activity in one cross-modal task, an auditory-to-visual task, predicted correct and incorrect responses. These results suggest that neurons in inferotemporal cortex contribute to abstract memory representations that can be activated by input from other sensory modalities, but these representations are specific to visual behaviors.

  19. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long eChen

    2016-01-01

    Full Text Available Abstract Background: Some studies have proven that a conventional visual brain computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention had been proposed and was called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes had been presented and obtained high performance. However, some facial expressions were so similar that users couldn't tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of such BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy face stimuli in BCI systems was whether dummy face stimuli could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21-26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the grey dummy face pattern and the colored ball pattern. Online results showed

  20. Do Tonic Itch and Pain Stimuli Draw Attention towards Their Location?

    Directory of Open Access Journals (Sweden)

    Antoinette I. M. van Laarhoven

    2017-01-01

    Full Text Available Background. Although itch and pain are distinct experiences, both are unpleasant, may demand attention, and interfere with daily activities. Research investigating the role of attention in tonic itch and pain stimuli, particularly whether attention is drawn to the stimulus location, is scarce. Methods. In the somatosensory attention task, fifty-three healthy participants were exposed to 35-second electrical itch or pain stimuli on either the left or right wrist. Participants responded as quickly as possible to visual targets appearing at the stimulated location (ipsilateral trials) or the arm without stimulation (contralateral trials). During control blocks, participants performed the visual task without stimulation. Attention allocation at the itch and pain location is inferred when responses are faster ipsilaterally than contralaterally. Results. Results did not indicate that attention was directed towards or away from the itch and pain location. Notwithstanding, participants were slower during itch and pain than during control blocks. Conclusions. In contrast with our hypotheses, no indications were found for spatial attention allocation towards the somatosensory stimuli. This may relate to dynamic shifts in attention over the time course of the tonic sensations. Our secondary finding that itch and pain interfere with task performance is in line with attention theories of bodily perception.
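
    The inference rule stated above (attention at the stimulated location implies faster ipsilateral than contralateral responses) reduces to a simple bias index per participant plus a group-level test, as sketched below with assumed reaction-time arrays.

```python
# Hedged sketch of the ipsilateral-vs-contralateral attention bias analysis;
# RT arrays (in ms) per participant are assumed inputs.
import numpy as np
from scipy import stats

def attention_bias_index(rt_ipsi, rt_contra):
    """Positive values indicate faster responses at the stimulated location."""
    return np.mean(rt_contra) - np.mean(rt_ipsi)

def group_test(bias_per_participant):
    # One-sample t-test of the bias indices against zero across participants.
    return stats.ttest_1samp(np.asarray(bias_per_participant), 0.0)
```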

  1. Dorsal hippocampus is necessary for visual categorization in rats.

    Science.gov (United States)

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for

  2. Visual cortex in dementia with Lewy bodies: magnetic resonance imaging study

    Science.gov (United States)

    Taylor, John-Paul; Firbank, Michael J.; He, Jiabao; Barnett, Nicola; Pearce, Sarah; Livingstone, Anthea; Vuong, Quoc; McKeith, Ian G.; O’Brien, John T.

    2012-01-01

    Background Visual hallucinations and visuoperceptual deficits are common in dementia with Lewy bodies, suggesting that cortical visual function may be abnormal. Aims To investigate: (1) cortical visual function using functional magnetic resonance imaging (fMRI); and (2) the nature and severity of perfusion deficits in visual areas using arterial spin labelling (ASL)-MRI. Method In total, 17 participants with dementia with Lewy bodies (DLB group) and 19 similarly aged controls were presented with simple visual stimuli (checkerboard, moving dots, and objects) during fMRI and subsequently underwent ASL-MRI (DLB group n = 15, control group n = 19). Results Functional activations were evident in visual areas in both the DLB and control groups in response to checkerboard and objects stimuli but reduced visual area V5/MT (middle temporal) activation occurred in the DLB group in response to motion stimuli. Posterior cortical perfusion deficits occurred in the DLB group, particularly in higher visual areas. Conclusions Higher visual areas, particularly occipito-parietal, appear abnormal in dementia with Lewy bodies, while there is a preservation of function in lower visual areas (V1 and V2/3). PMID:22500014

  3. Balancing Attended and Global Stimuli in Perceived Video Quality Assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2011-01-01

    The visual attention mechanism plays a key role in the human perception system and it has a significant impact on our assessment of perceived video quality. In spite of receiving less attention from the viewers, unattended stimuli can still contribute to the understanding of the visual content. This paper proposes a quality model based on the late attention selection theory, assuming that the video quality is perceived via two mechanisms: global and local quality assessment. First we model several visual features influencing the visual attention in quality assessment scenarios to derive an attention map using appropriate fusion techniques. The global quality assessment, based on the assumption that viewers allocate their attention equally to the entire visual scene, is modeled by four carefully designed quality features. By employing these same quality features, the local quality model...

  4. Auditory short-term memory behaves like visual short-term memory.

    Directory of Open Access Journals (Sweden)

    Kristina M Visscher

    2007-03-01

    Full Text Available Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.

  5. Auditory short-term memory behaves like visual short-term memory.

    Science.gov (United States)

    Visscher, Kristina M; Kaplan, Elina; Kahana, Michael J; Sekuler, Robert

    2007-03-01

    Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
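
    The model described in this record combines summed similarity and inter-item homogeneity. The sketch below illustrates that idea with an exponential-decay similarity function and a logistic link; these specific functional forms and parameter values are common modelling choices assumed for illustration, not the authors' exact equations.

```python
# Hedged sketch of a summed-similarity recognition model with an
# inter-item homogeneity term; functional forms and parameters are assumed.
import numpy as np

def similarity(x, y, tau=1.0):
    # Exponential decay with distance in the stimulus feature space.
    return np.exp(-tau * np.linalg.norm(np.asarray(x) - np.asarray(y)))

def summed_similarity(probe, list_items, tau=1.0):
    return sum(similarity(probe, item, tau) for item in list_items)

def inter_item_homogeneity(list_items, tau=1.0):
    items = list(list_items)
    pairs = [(i, j) for i in range(len(items)) for j in range(i + 1, len(items))]
    return np.mean([similarity(items[i], items[j], tau) for i, j in pairs])

def p_yes(probe, list_items, a=1.5, b=1.0, c=-2.0, tau=1.0):
    # Logistic link: probability of an "old" (yes) response.
    drive = (a * summed_similarity(probe, list_items, tau)
             + b * inter_item_homogeneity(list_items, tau) + c)
    return 1.0 / (1.0 + np.exp(-drive))
```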

  6. Spatial Scaling of the Profile of Selective Attention in the Visual Field.

    Directory of Open Access Journals (Sweden)

    Matthew A Gannon

    Full Text Available Neural mechanisms of selective attention must be capable of adapting to variation in the absolute size of an attended stimulus in the ever-changing visual environment. To date, little is known regarding how attentional selection interacts with fluctuations in the spatial expanse of an attended object. Here, we use event-related potentials (ERPs) to investigate the scaling of attentional enhancement and suppression across the visual field. We measured ERPs while participants performed a task at fixation that varied in its attentional demands (attentional load) and visual angle (1.0° or 2.5°). Observers were presented with a stream of task-relevant stimuli while foveal, parafoveal, and peripheral visual locations were probed by irrelevant distractor stimuli. We found two important effects in the N1 component of visual ERPs. First, N1 modulations to task-relevant stimuli indexed attentional selection of stimuli during the load task and further correlated with task performance. Second, with increased task size, attentional modulation of the N1 to distractor stimuli showed a differential pattern that was consistent with a scaling of attentional selection. Together, these results demonstrate that the size of an attended stimulus scales the profile of attentional selection across the visual field and provides insights into the attentional mechanisms associated with such spatial scaling.

  7. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, onset movement, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS condition. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  8. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Directory of Open Access Journals (Sweden)

    Miguel Fernandez-Del-Olmo

    Full Text Available Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, onset movement, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS condition. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training.

  9. Visual Categorization of Natural Movies by Rats

    Science.gov (United States)

    Vinken, Kasper; Vermaercke, Ben

    2014-01-01

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598

  10. Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Yanna Ren

    2018-01-01

    Full Text Available The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson’s disease (PD). This study aimed to investigate the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated measures ANOVA and a race model. The results showed that the response to all stimuli was significantly delayed for PD compared to NC (all p < 0.05). The current results showed that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances and further suggested that the abnormal audiovisual integration might be a potential early manifestation of PD.
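
    Because the analysis above relies on a race-model comparison, the following is a hedged sketch of one common form of that test: checking the audiovisual response-time distribution against Miller's race-model bound built from the two unimodal distributions. The response-time samples and the time grid are synthetic placeholders, not the study's data or exact procedure.

      # Sketch of a race-model (Miller) inequality check on response times (ms);
      # data and grid are placeholders.
      import numpy as np

      def cdf(rts, t_grid):
          # Empirical cumulative distribution of response times on a common grid.
          rts = np.sort(np.asarray(rts))
          return np.searchsorted(rts, t_grid, side="right") / rts.size

      def race_model_violation(rt_a, rt_v, rt_av, t_grid):
          # Miller's bound: P(RT <= t | AV) should not exceed
          # P(RT <= t | A) + P(RT <= t | V) if no integration occurs.
          bound = np.clip(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 0, 1)
          return cdf(rt_av, t_grid) - bound  # positive values indicate violation

      rng = np.random.default_rng(0)
      t = np.linspace(200, 800, 25)
      a, v, av = (rng.normal(m, 60, 100) for m in (450, 470, 420))
      print(np.max(race_model_violation(a, v, av, t)))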

  11. Heightened eating drive and visual food stimuli attenuate central nociceptive processing.

    Science.gov (United States)

    Wright, Hazel; Li, Xiaoyun; Fallon, Nicholas B; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A; Halford, Jason C G; Stancak, Andrej

    2015-03-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation underlying the hunger-pain competition were explored with 128-channel EEG recordings and source dipole analysis of laser-evoked potentials (LEPs). We found that initial pain ratings were temporarily reduced when participants were hungry compared with fed. Source activity in parahippocampal gyrus was weaker when participants were hungry, and activations of operculo-insular cortex, anterior cingulate cortex, parahippocampal gyrus, and cerebellum were smaller in the context of appetitive food photographs than in that of inedible object photographs. Cortical processing of noxious stimuli in pain-related brain structures is reduced and pain temporarily attenuated when people are hungry or passively viewing food photographs, suggesting a possible interaction between the opposing motivational forces of the eating drive and pain. Copyright © 2015 the American Physiological Society.

  12. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    Science.gov (United States)

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  13. Prevailing theories of consciousness are challenged by novel cross-modal associations acquired between subliminal stimuli.

    Science.gov (United States)

    Scott, Ryan B; Samaha, Jason; Chrisley, Ron; Dienes, Zoltan

    2018-06-01

    While theories of consciousness differ substantially, the 'conscious access hypothesis', which aligns consciousness with the global accessibility of information across cortical regions, is present in many of the prevailing frameworks. This account holds that consciousness is necessary to integrate information arising from independent functions such as the specialist processing required by different senses. We directly tested this account by evaluating the potential for associative learning between novel pairs of subliminal stimuli presented in different sensory modalities. First, pairs of subliminal stimuli were presented and then their association assessed by examining the ability of the first stimulus to prime classification of the second. In Experiments 1-4 the stimuli were word-pairs consisting of a male name preceding either a creative or uncreative profession. Participants were subliminally exposed to two name-profession pairs where one name was paired with a creative profession and the other an uncreative profession. A supraliminal task followed requiring the timed classification of one of those two professions. The target profession was preceded by either the name with which it had been subliminally paired (concordant) or the alternate name (discordant). Experiment 1 presented stimuli auditorily, Experiment 2 visually, and Experiment 3 presented names auditorily and professions visually. All three experiments revealed the same inverse priming effect with concordant test pairs associated with significantly slower classification judgements. Experiment 4 sought to establish if learning would be more efficient with supraliminal stimuli and found evidence that a different strategy is adopted when stimuli are consciously perceived. Finally, Experiment 5 replicated the unconscious cross-modal association achieved in Experiment 3 utilising non-linguistic stimuli. The results demonstrate the acquisition of novel cross-modal associations between stimuli which are not

  14. Agnosia for mirror stimuli: a new case report with a small parietal lesion.

    Science.gov (United States)

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Lebas, Axel; Gerardin, Emmanuel; Hannequin, Didier

    2014-11-01

    Only seven cases of agnosia for mirror stimuli have been reported, always with an extensive lesion. We report a new case of agnosia for mirror stimuli due to a circumscribed lesion. An extensive battery of neuropsychological tests and a new experimental procedure assessing visual object mirror and orientation discrimination were administered 10 days after the onset of clinical symptoms, and again 5 years later. The performances of our patient were compared with those of four healthy control subjects matched for age. This test revealed an agnosia for mirror stimuli. Brain imaging showed a small right occipitoparietal hematoma, encompassing the extrastriate cortex adjoining the inferior parietal lobe. This new case suggests that: (i) agnosia for mirror stimuli can persist for 5 years after onset and (ii) the posterior part of the right intraparietal sulcus could be critical in the cognitive process of mirror stimuli discrimination. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. The Visual Shock of Francis Bacon: An essay in neuroesthetics

    Directory of Open Access Journals (Sweden)

    Semir eZeki

    2013-12-01

    Full Text Available In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a visual shock. We explore what this means in terms of brain activity and what insights into the brain’s visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs and cars. We show that viewing face and house stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his visual shock because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.

  16. The "Visual Shock" of Francis Bacon: an essay in neuroesthetics.

    Science.gov (United States)

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

    In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a "visual shock." We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his "visual shock" because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts.

  17. The left visual-field advantage in rapid visual presentation is amplified rather than reduced by posterior-parietal rTMS

    DEFF Research Database (Denmark)

    Verleger, Rolf; Möller, Friderike; Kuniecki, Michal

    2010-01-01

    In the present task, series of visual stimuli are rapidly presented left and right, containing two target stimuli, T1 and T2. In previous studies, T2 was better identified in the left than in the right visual field. This advantage of the left visual field might reflect dominance exerted by the right over the left hemisphere. If so, then repetitive transcranial magnetic stimulation (rTMS) to the right parietal cortex might release the left hemisphere from right-hemispheric control, thereby improving T2 identification in the right visual field. Alternatively or additionally, the asymmetry in T2... either as effective or as sham stimulation. In two experiments, either one of these two factors, hemisphere and effectiveness of rTMS, was varied within or between participants. Again, T2 was much better identified in the left than in the right visual field. This advantage of the left visual field...

  18. The effects of overfeeding on the neuronal response to visual food cues in thin and reduced-obese individuals.

    Directory of Open Access Journals (Sweden)

    Marc-Andre Cornier

    2009-07-01

    Full Text Available The regulation of energy intake is a complex process involving the integration of homeostatic signals and both internal and external sensory inputs. The objective of this study was to examine the effects of short-term overfeeding on the neuronal response to food-related visual stimuli in individuals prone and resistant to weight gain. 22 thin and 19 reduced-obese (RO) individuals were studied. Functional magnetic resonance imaging (fMRI) was performed in the fasted state after two days of eucaloric energy intake and after two days of 30% overfeeding in a counterbalanced design. fMRI was performed while subjects viewed images of foods of high hedonic value and neutral non-food objects. In the eucaloric state, food as compared to non-food images elicited significantly greater activation of insula and inferior visual cortex in thin as compared to RO individuals. Two days of overfeeding led to significant attenuation of not only insula and visual cortex responses but also of hypothalamus response in thin as compared to RO individuals. These findings emphasize the important role of food-related visual cues in ingestive behavior and suggest that there are important phenotypic differences in the interactions between external visual sensory inputs, energy balance status, and brain regions involved in the regulation of energy intake. Furthermore, alterations in the neuronal response to food cues may relate to the propensity to gain weight.

  19. Gaze-independent ERP-BCIs: augmenting performance through location-congruent bimodal stimuli

    Science.gov (United States)

    Thurlings, Marieke E.; Brouwer, Anne-Marie; Van Erp, Jan B. F.; Werkhoven, Peter

    2014-01-01

    Gaze-independent event-related potential (ERP) based brain-computer interfaces (BCIs) yield relatively low BCI performance and traditionally employ unimodal stimuli. Bimodal ERP-BCIs may increase BCI performance due to multisensory integration or summation in the brain. An additional advantage of bimodal BCIs may be that the user can choose which modality or modalities to attend to. We studied bimodal, visual-tactile, gaze-independent BCIs and investigated whether or not ERP components’ tAUCs and subsequent classification accuracies are increased for (1) bimodal vs. unimodal stimuli; (2) location-congruent vs. location-incongruent bimodal stimuli; and (3) attending to both modalities vs. to either one modality. We observed an enhanced bimodal (compared to unimodal) P300 tAUC, which appeared to be positively affected by location-congruency (p = 0.056) and resulted in higher classification accuracies. Attending either to one or to both modalities of the bimodal location-congruent stimuli resulted in differences between ERP components, but not in classification performance. We conclude that location-congruent bimodal stimuli improve ERP-BCIs, and offer the user the possibility to switch the attended modality without losing performance. PMID:25249947

  20. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Science.gov (United States)

    Pascucci, David; Mastropasqua, Tommaso; Turatto, Massimo

    2015-01-01

    Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can improve also for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by the correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  1. Decoding complex flow-field patterns in visual working memory.

    Science.gov (United States)

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
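
    As a hedged illustration of the stimulus construction described above, the sketch below builds a one-dimensional counterphasing grating as the sum of rightward- and leftward-drifting components whose contrasts can be set independently to bias net motion energy. The spatial and temporal frequencies are arbitrary example values, and the Gaussian (Gabor) envelope is omitted for brevity.

      # One spatial frame of a grating whose net motion energy is set by the
      # contrasts of its two drifting components; parameter values are illustrative.
      import numpy as np

      def grating_frame(x, t, sf=2.0, tf=4.0, c_right=0.6, c_left=0.4):
          # x in degrees, t in seconds; sf in cycles/deg, tf in Hz.
          rightward = c_right * np.sin(2 * np.pi * (sf * x - tf * t))
          leftward = c_left * np.sin(2 * np.pi * (sf * x + tf * t))
          # Equal contrasts give a pure counterphasing grating (zero net motion).
          return rightward + leftward

      x = np.linspace(-2, 2, 256)
      frame = grating_frame(x, t=0.1)
      print(frame.shape, float(frame.max()))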

  3. Appetitive and aversive visual learning in freely moving Drosophila

    Directory of Open Access Journals (Sweden)

    Christopher Schnaitmann

    2010-03-01

    Full Text Available To compare appetitive and aversive visual memories of the fruit fly Drosophila melanogaster, we developed a new paradigm for classical conditioning. Adult flies are trained en masse to differentially associate one of two visual conditioned stimuli (blue and green light as conditioned stimuli or CS) with an appetitive or aversive chemical substance (unconditioned stimulus or US). In a test phase, flies are given a choice between the paired and the unpaired visual stimuli. Associative memory is measured based on altered visual preference in the test. If a group of flies has, for example, received a sugar reward with green light, they show a significantly higher preference for the green stimulus during the test than another group of flies having received the same reward with blue light. We demonstrate critical parameters for the formation of visual appetitive memory, such as training repetition, order of reinforcement, starvation, and individual conditioning. Furthermore, we show that formic acid can act as an aversive chemical reinforcer, yielding weak, yet significant, aversive memory. These results provide a basis for future investigations into the cellular and molecular mechanisms underlying visual memory and perception in Drosophila.

  4. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Full Text Available Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources when compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made from distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener’s perspective at each distance in the impulse response measurement setup presented on a large HDTV monitor. Listeners were asked to estimate egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  5. Attracting the attention of a fly

    OpenAIRE

    Sareen, Preeti; Wolf, Reinhard; Heisenberg, Martin

    2011-01-01

    Organisms with complex visual systems rarely respond to just the sum of all visual stimuli impinging on their eyes. Often, they restrict their responses to stimuli in a temporarily selected region of the visual field (selective visual attention). Here, we investigate visual attention in the fly Drosophila during tethered flight at a torque meter. Flies can actively shift their attention; however, their attention can be guided to a certain location by external cues. Using visual cues, we can d...

  6. Heightened eating drive and visual food stimuli attenuate central nociceptive processing

    OpenAIRE

    Wright, Hazel; Li, Xiaoyun; Fallon, Nicholas B.; Giesbrecht, Timo; Thomas, Anna; Harrold, Joanne A.; Halford, Jason C. G.; Stancak, Andrej

    2014-01-01

    Hunger and pain are basic drives that compete for a behavioral response when experienced together. To investigate the cortical processes underlying hunger-pain interactions, we manipulated participants' hunger and presented photographs of appetizing food or inedible objects in combination with painful laser stimuli. Fourteen healthy participants completed two EEG sessions: one after an overnight fast, the other following a large breakfast. Spatio-temporal patterns of cortical activation under...

  7. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, an existence of sequential regularity itself does not guarantee that the sequential regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence where infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...) was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  8. Attentional Bias for Emotional Stimuli in Borderline Personality Disorder : A Meta-Analysis

    NARCIS (Netherlands)

    Kaiser, D.; Jacob, G.A.; Domes, G.; Arntz, A.

    2016-01-01

    Background: In borderline personality disorder (BPD), attentional bias (AB) to emotional stimuli may be a core component in disorder pathogenesis and maintenance. Sampling: 11 emotional Stroop task (EST) studies with 244 BPD patients, 255 nonpatients (NPs) and 95 clinical controls and 4 visual

  9. Adaptation to Variance of Stimuli in Drosophila Larva Navigation

    Science.gov (United States)

    Wolk, Jason; Gepner, Ruben; Gershow, Marc

    In order to respond to stimuli that vary over orders of magnitude while also being capable of sensing very small changes, neural systems must be capable of rapidly adapting to the variance of stimuli. We study this adaptation in Drosophila larvae responding to varying visual signals and optogenetically induced fictitious odors using an infrared illuminated arena and custom computer vision software. Larval navigational decisions (when to turn) are modeled as the output of a linear-nonlinear Poisson process. The development of the nonlinear turn rate in response to changes in variance is tracked using an adaptive point process filter determining the rate of adaptation to different stimulus profiles. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.
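
    For readers unfamiliar with that model class, here is a minimal sketch of a linear-nonlinear-Poisson (LNP) description of turn decisions, assuming an exponential linear filter and an exponential static nonlinearity purely for illustration; none of the numbers correspond to the study's fitted parameters.

      # Linear-nonlinear-Poisson sketch of turn decisions driven by a sensory signal.
      import numpy as np

      rng = np.random.default_rng(1)
      dt = 0.1                                   # seconds per time bin
      stimulus = rng.normal(0, 1, 2000)          # e.g., fluctuating light intensity

      # Linear stage: causal exponential filter applied to the stimulus history.
      kernel = np.exp(-np.arange(0, 2, dt) / 0.5)
      drive = np.convolve(stimulus, kernel, mode="full")[:stimulus.size]

      def nonlinearity(u, r0=0.1, gain=0.3):
          # Static nonlinearity mapping filter output to a turn rate (turns/s).
          return r0 * np.exp(gain * u)

      # Poisson stage: draw turn counts in each bin from the instantaneous rate.
      rate = nonlinearity(drive)
      turns = rng.poisson(rate * dt)
      print(turns.sum(), "turns in", stimulus.size * dt, "seconds")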

  10. Assessment of sexual orientation using the hemodynamic brain response to visual sexual stimuli

    DEFF Research Database (Denmark)

    Ponseti, Jorge; Granert, Oliver; Jansen, Olav

    2009-01-01

    in a nonclinical sample of 12 heterosexual men and 14 homosexual men. During fMRI, participants were briefly exposed to pictures of same-sex and opposite-sex genitals. Data analysis involved four steps: (i) differences in the BOLD response to female and male sexual stimuli were calculated for each subject; (ii) these contrast images were entered into a group analysis to calculate whole-brain difference maps between homosexual and heterosexual participants; (iii) a single expression value was computed for each subject expressing its correspondence to the group result; and (iv) based on these expression values, Fisher... response patterns of the brain to sexual stimuli contained sufficient information to predict individual sexual orientation with high accuracy. These results suggest that fMRI-based classification methods hold promise for the diagnosis of paraphilic disorders (e.g., pedophilia)....
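
    Step (iv) is truncated in this record; as a hedged sketch of how such a step is commonly implemented, the code below classifies per-subject expression values with Fisher's linear discriminant under leave-one-out cross-validation, using synthetic placeholder values rather than the study's data or exact settings.

      # Leave-one-out Fisher discriminant classification of scalar expression values;
      # the values below are synthetic placeholders.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(2)
      # Hypothetical expression values for 12 + 14 participants in two groups.
      X = np.concatenate([rng.normal(-1, 1, 12), rng.normal(1, 1, 14)]).reshape(-1, 1)
      y = np.array([0] * 12 + [1] * 14)

      scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
      print("leave-one-out accuracy:", scores.mean())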

  11. Visual categorization of natural movies by rats.

    Science.gov (United States)

    Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P

    2014-08-06

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors 0270-6474/14/3410645-14$15.00/0.

  12. Enhanced early visual processing in response to snake and trypophobic stimuli

    OpenAIRE

    Strien, Jan; Van der Peijl, M.K. (Manja K.)

    2018-01-01

    Background: Trypophobia refers to aversion to clusters of holes. We investigated whether trypophobic stimuli evoke augmented early posterior negativity (EPN). Methods: Twenty-four participants filled out a trypophobia questionnaire and watched the random rapid serial presentation of 450 trypophobic pictures, 450 pictures of poisonous animals, 450 pictures of snakes, and 450 pictures of small birds (1800 pictures in total, at a rate of 3 pictures/s). The EPN was scored as the mean ...

  13. The “Visual Shock” of Francis Bacon: an essay in neuroesthetics

    Science.gov (United States)

    Zeki, Semir; Ishizu, Tomohiro

    2013-01-01

    In this paper we discuss the work of Francis Bacon in the context of his declared aim of giving a “visual shock.” We explore what this means in terms of brain activity and what insights into the brain's visual perceptive system his work gives. We do so especially with reference to the representation of faces and bodies in the human visual brain. We discuss the evidence that shows that both these categories of stimuli have a very privileged status in visual perception, compared to the perception of other stimuli, including man-made artifacts such as houses, chairs, and cars. We show that viewing stimuli that depart significantly from a normal representation of faces and bodies entails a significant difference in the pattern of brain activation. We argue that Bacon succeeded in delivering his “visual shock” because he subverted the normal neural representation of faces and bodies, without at the same time subverting the representation of man-made artifacts. PMID:24339812

  14. Common coding of auditory and visual spatial information in working memory.

    Science.gov (United States)

    Lehnert, Günther; Zimmer, Hubert D

    2008-09-16

    We compared spatial short-term memory for visual and auditory stimuli in an event-related slow potentials study. Subjects encoded object locations of either four or six sequentially presented auditory or visual stimuli and maintained them during a retention period of 6 s. Slow potentials recorded during encoding were modulated by the modality of the stimuli. Stimulus related activity was stronger for auditory items at frontal and for visual items at posterior sites. At frontal electrodes, negative potentials incrementally increased with the sequential presentation of visual items, whereas a strong transient component occurred during encoding of each auditory item without the cumulative increment. During maintenance, frontal slow potentials were affected by modality and memory load according to task difficulty. In contrast, at posterior recording sites, slow potential activity was only modulated by memory load independent of modality. We interpret the frontal effects as correlates of different encoding strategies and the posterior effects as a correlate of common coding of visual and auditory object locations.

  15. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    Science.gov (United States)

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and elicit instrumental tobacco-seeking behaviour.

  16. Monetary reward modulates task-irrelevant perceptual learning for invisible stimuli.

    Directory of Open Access Journals (Sweden)

    David Pascucci

    Full Text Available Task Irrelevant Perceptual Learning (TIPL) shows that the brain's discriminative capacity can improve also for invisible and unattended visual stimuli. It has been hypothesized that this form of "unconscious" neural plasticity is mediated by an endogenous reward mechanism triggered by the correct task performance. Although this result has challenged the mandatory role of attention in perceptual learning, no direct evidence exists of the hypothesized link between target recognition, reward and TIPL. Here, we manipulated the reward value associated with a target to demonstrate the involvement of reinforcement mechanisms in sensory plasticity for invisible inputs. Participants were trained in a central task associated with either high or low monetary incentives, provided only at the end of the experiment, while subliminal stimuli were presented peripherally. Our results showed that high incentive-value targets induced a greater degree of perceptual improvement for the subliminal stimuli, supporting the role of reinforcement mechanisms in TIPL.

  17. Dissociating object-based from egocentric transformations in mental body rotation: effect of stimuli size.

    Science.gov (United States)

    Habacha, Hamdi; Moreau, David; Jarraya, Mohamed; Lejeune-Poutrain, Laure; Molinaro, Corinne

    2018-01-01

    The effect of stimuli size on the mental rotation of abstract objects has been extensively investigated, yet its effect on the mental rotation of bodily stimuli remains largely unexplored. Depending on the experimental design, mentally rotating bodily stimuli can elicit object-based transformations, relying mainly on visual processes, or egocentric transformations, which typically involve embodied motor processes. The present study included two mental body rotation tasks requiring either a same-different or a laterality judgment, designed to elicit object-based or egocentric transformations, respectively. Our findings revealed shorter response times for large-sized stimuli than for small-sized stimuli only for greater angular disparities, suggesting that the more unfamiliar the orientations of the bodily stimuli, the more stimuli size affected mental processing. Importantly, when comparing size transformation times, results revealed different patterns of size transformation times as a function of angular disparity between object-based and egocentric transformations. This indicates that mental size transformation and mental rotation proceed differently depending on the mental rotation strategy used. These findings are discussed with respect to the different spatial manipulations involved during object-based and egocentric transformations.

  18. Testing a Poisson counter model for visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks.

    Science.gov (United States)

    Kyllingsbæk, Søren; Markussen, Bo; Bundesen, Claus

    2012-06-01

    The authors propose and test a simple model of the time course of visual identification of briefly presented, mutually confusable single stimuli in pure accuracy tasks. The model implies that during stimulus analysis, tentative categorizations that stimulus i belongs to category j are made at a constant Poisson rate, v(i, j). The analysis is continued until the stimulus disappears, and the overt response is based on the categorization made the greatest number of times. The model was evaluated by Monte Carlo tests of goodness of fit against observed probability distributions of responses in two extensive experiments and also by quantifications of the information loss of the model compared with the observed data by use of information theoretic measures. The model provided a close fit to individual data on identification of digits and an apparently perfect fit to data on identification of Landolt rings.
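
    Because the model is stated compactly above, a small simulation sketch may help: tentative categorizations arrive at constant Poisson rates during the exposure, and the overt response is the category counted most often. The rate matrix, the exposure duration, and the guessing rule for empty counts below are illustrative assumptions rather than values fitted in the experiments.

      # Monte Carlo sketch of a Poisson counter model of single-stimulus identification.
      import numpy as np

      def simulate_response(rates, exposure, rng):
          # rates: Poisson rates v(i, j) for the presented stimulus i, one per category j.
          counts = rng.poisson(np.asarray(rates) * exposure)
          if counts.sum() == 0:
              return int(rng.integers(len(rates)))   # guess if nothing was categorized
          return int(np.argmax(counts))              # most frequent tentative categorization

      rng = np.random.default_rng(3)
      v = [[8.0, 2.0], [3.0, 7.0]]   # two confusable stimuli, two response categories
      exposure = 0.05                # seconds of effective stimulus analysis
      responses = [simulate_response(v[0], exposure, rng) for _ in range(1000)]
      print("P(correct | stimulus 0) =", responses.count(0) / len(responses))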

  19. Reward-associated stimuli capture the eyes in spite of strategic attentional set

    NARCIS (Netherlands)

    Hickey, C.M.; van Zoest, W.

    2013-01-01

    Theories of reinforcement learning have proposed that the association of reward to visual stimuli may cause these objects to become fundamentally salient and thus attention-drawing. A number of recent studies have investigated the oculomotor correlates of this reward-priming effect, but there is

  20. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval

    Science.gov (United States)

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-01-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of…

  1. Working Memory as Internal Attention: Toward an Integrative Account of Internal and External Selection Processes

    Science.gov (United States)

    Kiyonaga, Anastasia; Egner, Tobias

    2012-01-01

    Working memory (WM) and attention have been studied as separate cognitive constructs, although it has long been acknowledged that attention plays an important role in controlling the activation, maintenance, and manipulation of representations in WM. WM has, conversely, been thought of as a means of maintaining representations to voluntarily guide perceptual selective attention. It has more recently been observed, however, that the contents of WM can capture visual attention, even when such internally maintained representations are irrelevant, and often disruptive, to the immediate external task. Thus the precise relationship between WM and attention remains unclear, but it appears that they may bi-directionally impact one another, whether or not internal representations are consistent with external perceptual goals. This reciprocal relationship seems, further, to be constrained by limited cognitive resources to handle demands in either maintenance or selection. We propose here that the close relationship between WM and attention may be best described as a give-and-take interdependence between attention directed toward actively maintained internal representations (traditionally considered WM) versus external perceptual stimuli (traditionally considered selective attention), underpinned by their shared reliance on a common cognitive resource. Put simply, we argue that WM and attention should no longer be considered as separate systems or concepts, but as competing and impacting one another because they rely on the same limited resource. This framework can offer an explanation for the capture of visual attention by irrelevant WM contents, as well as a straightforward account of the underspecified relationship between WM and attention. PMID:23233157

  2. Working memory as internal attention: toward an integrative account of internal and external selection processes.

    Science.gov (United States)

    Kiyonaga, Anastasia; Egner, Tobias

    2013-04-01

    Working memory (WM) and attention have been studied as separate cognitive constructs, although it has long been acknowledged that attention plays an important role in controlling the activation, maintenance, and manipulation of representations in WM. WM has, conversely, been thought of as a means of maintaining representations to voluntarily guide perceptual selective attention. It has more recently been observed, however, that the contents of WM can capture visual attention, even when such internally maintained representations are irrelevant, and often disruptive, to the immediate external task. Thus, the precise relationship between WM and attention remains unclear, but it appears that they may bidirectionally impact one another, whether or not internal representations are consistent with the external perceptual goals. This reciprocal relationship seems, further, to be constrained by limited cognitive resources to handle demands in either maintenance or selection. We propose here that the close relationship between WM and attention may be best described as a give-and-take interdependence between attention directed toward either actively maintained internal representations (traditionally considered WM) or external perceptual stimuli (traditionally considered selective attention), underpinned by their shared reliance on a common cognitive resource. Put simply, we argue that WM and attention should no longer be considered as separate systems or concepts, but as competing and influencing one another because they rely on the same limited resource. This framework can offer an explanation for the capture of visual attention by irrelevant WM contents, as well as a straightforward account of the underspecified relationship between WM and attention.

  3. [Recognition of visual objects under forward masking. Effects of categorical similarity of test and masking stimuli].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during the recognition of two categories of images (animals and nonliving objects) under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest and response time slowest, with high dispersion of response times, when the target and masking stimuli belonged to the same category. These effects were clearer in the animal-recognition task than in the recognition of nonliving objects. We suppose that these effects are connected with interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  4. Visual Processing Speeds in Children

    Directory of Open Access Journals (Sweden)

    Steve Croker

    2011-01-01

    Full Text Available The aim of this study was to investigate visual processing speeds in children. A rapid serial visual presentation (RSVP) task with schematic faces as stimuli was given to ninety-nine 6–10-year-old children as well as a short form of the WISC-III. Participants were asked to determine whether a happy face stimulus was embedded in a stream of distracter stimuli. Presentation time was gradually reduced from 500 ms per stimulus to 100 ms per stimulus, in 50 ms steps. The data revealed that (i) RSVP speed increases with age, (ii) children aged 8 years and over can discriminate stimuli presented every 100 ms—the speed typically used with RSVP procedures in adult and adolescent populations, and (iii) RSVP speed is significantly correlated with digit span and object assembly. In consequence, the RSVP paradigm presented here is appropriate for use in further investigations of processes of temporal attention within this cohort.

  5. Statistical regularities in art: Relations with visual coding and perception.

    Science.gov (United States)

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  6. Preschool-Age Children and Adults Flexibly Shift Their Preferences for Auditory versus Visual Modalities but Do Not Exhibit Auditory Dominance

    Science.gov (United States)

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    The goal of this study was to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study was motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for…

  7. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.

  8. Visual search in ecological and non-ecological displays: evidence for a non-monotonic effect of complexity on performance.

    Directory of Open Access Journals (Sweden)

    Philippe Chassy

    Full Text Available Considerable research has been carried out on visual search, with single or multiple targets. However, most studies have used artificial stimuli with low ecological validity. In addition, little is known about the effects of target complexity and expertise in visual search. Here, we investigate visual search in three conditions of complexity (detecting a king, detecting a check, and detecting a checkmate) with chess players of two levels of expertise (novices and club players). Results show that the influence of target complexity depends on level of structure of the visual display. Different functional relationships were found between artificial (random chess positions) and ecologically valid (game positions) stimuli: With artificial, but not with ecologically valid stimuli, a "pop out" effect was present when a target was visually more complex than distractors but could be captured by a memory chunk. This suggests that caution should be exercised when generalising from experiments using artificial stimuli with low ecological validity to real-life stimuli.

  9. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    Science.gov (United States)

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
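
    Since the result above leans on imaginary coherence to rule out volume conduction, here is a hedged sketch of how that quantity can be computed from Welch cross- and auto-spectra; the simulated signals, sampling rate, and window length are placeholders, and scipy's Welch estimator stands in for whatever spectral method the authors actually used.

      # Imaginary coherency between two simulated field-potential channels;
      # zero-phase-lag (volume-conducted) coupling contributes no imaginary part.
      import numpy as np
      from scipy.signal import csd, welch

      fs = 1000.0
      rng = np.random.default_rng(4)
      t = np.arange(0, 10, 1 / fs)
      shared = np.sin(2 * np.pi * 8 * t)
      x = shared + rng.normal(0, 1, t.size)                # e.g., an SC channel
      y = np.roll(shared, 25) + rng.normal(0, 1, t.size)   # lagged copy, e.g., an IC channel

      f, sxy = csd(x, y, fs=fs, nperseg=1024)
      _, sxx = welch(x, fs=fs, nperseg=1024)
      _, syy = welch(y, fs=fs, nperseg=1024)
      imcoh = np.imag(sxy) / np.sqrt(sxx * syy)
      print("peak |imaginary coherence|:", float(np.abs(imcoh).max()))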

  10. Oxytocin and vasopressin enhance responsiveness to infant stimuli in adult marmosets.

    Science.gov (United States)

    Taylor, Jack H; French, Jeffrey A

    2015-09-01

    The neuropeptides oxytocin (OT) and arginine-vasopressin (AVP) have been implicated in modulating sex-specific responses to offspring in a variety of uniparental and biparental rodent species. Despite the large body of research in rodents, the effects of these hormones in biparental primates are less understood. Marmoset monkeys (Callithrix jacchus) belong to a clade of primates with a high incidence of biparental care and also synthesize a structurally distinct variant of OT (proline instead of leucine at the 8th amino acid position; Pro(8)-OT). We examined the roles of the OT and AVP systems in the control of responses to infant stimuli in marmoset monkeys. We administered neuropeptide receptor agonists and antagonists to male and female marmosets, and then exposed them to visual and auditory infant-related and control stimuli. Intranasal Pro(8)-OT decreased latencies to respond to infant stimuli in males, and intranasal AVP decreased latencies to respond to infant stimuli in females. Our study is the first to demonstrate that Pro(8)-OT and AVP alter responsiveness to infant stimuli in a biparental New World monkey. Across species, the effects of OT and AVP on parental behavior appear to vary by species-typical caregiving responsibilities in males and females. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. The mere exposure effect for visual image.

    Science.gov (United States)

    Inoue, Kazuya; Yagi, Yoshihiko; Sato, Nobuya

    2018-02-01

    The mere exposure effect refers to a phenomenon in which repeated stimuli are evaluated more positively than novel stimuli. We investigated whether this effect occurs for internally generated visual representations (i.e., visual images). In an exposure phase, a 5 × 5 dot array was presented, and pairs of dots corresponding to neighboring vertices of an invisible polygon were flashed sequentially in red, thereby tracing out the polygon. In Experiments 1, 2, and 4, participants visualized and memorized the shapes of invisible polygons based on different sequences of flashed dots, whereas in Experiment 3, participants only memorized the positions of these dots. In a subsequent rating phase, participants visualized the shape of the invisible polygon from the placement of numerical characters on its vertices, and then rated their preference for the invisible polygons (Experiments 1, 2, and 3). In contrast, in Experiment 4, participants rated their preference for visible polygons. Results showed that the mere exposure effect appeared only when participants visualized the shape of invisible polygons in both the exposure and rating phases (Experiments 1 and 2), suggesting that the mere exposure effect occurred for internalized visual images. This implies that the sensory inputs from repeated stimuli play a minor role in the mere exposure effect. The absence of the mere exposure effect in Experiment 4 suggests that the consistency of processing between the exposure and rating phases plays an important role in the mere exposure effect.

  12. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals.

    Science.gov (United States)

    Ikeda, Akitsu; Miyamoto, Jun J; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, exposure to food stimuli sensitizes the brain's reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has been increasingly implicated as a method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as effectively as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed tasks identical to those in Experiments 1 and 2, but did not perform sham feeding (gum chewing) or actual feeding between tasks and instead took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze

  14. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals

    Science.gov (United States)

    Ikeda, Akitsu; Miyamoto, Jun J.; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    Based on the theory of incentive sensitization, exposure to food stimuli sensitizes the brain’s reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has been increasingly implicated as a method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as effectively as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed tasks identical to those in Experiments 1 and 2, but did not perform sham feeding (gum chewing) or actual feeding between tasks and instead took a rest. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias
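
    The bias measures reported in these two records can be computed in a few lines. The sketch below uses conventional definitions from the visual-probe literature (RT bias as the difference between probes replacing non-food versus food images, gaze-direction bias as the proportion of first fixations on food, gaze-duration bias as the food share of dwell time); the column names and exact formulas are assumptions rather than details taken from the paper.

```python
# Illustrative computation of the three attentional-bias indices; placeholder data.
import pandas as pd

trials = pd.DataFrame({
    "probe_at_food":    [True, False, True, False],   # probe replaced the food image?
    "rt_ms":            [512.0, 545.0, 498.0, 530.0],
    "first_fix_food":   [True, True, False, True],    # first fixation landed on food?
    "dwell_food_ms":    [310.0, 280.0, 150.0, 300.0],
    "dwell_nonfood_ms": [200.0, 240.0, 330.0, 210.0],
})

# Reaction-time bias: slower responses when the probe replaces the non-food image
# imply attention was on the food image (positive value = bias toward food).
rt_bias = (trials.loc[~trials.probe_at_food, "rt_ms"].mean()
           - trials.loc[trials.probe_at_food, "rt_ms"].mean())

# Gaze-direction bias: proportion of trials with the first fixation on the food image.
gaze_direction_bias = trials["first_fix_food"].mean()

# Gaze-duration bias: share of total dwell time spent on the food image.
gaze_duration_bias = (trials["dwell_food_ms"]
                      / (trials["dwell_food_ms"] + trials["dwell_nonfood_ms"])).mean()

print(rt_bias, gaze_direction_bias, gaze_duration_bias)
```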

  15. Stimuli-responsive Smart Liposomes in Cancer Targeting.

    Science.gov (United States)

    Jain, Ankit; Jain, Sanjay K

    2018-02-08

    Liposomes are vesicular carriers that possess an aqueous core entrapped within a lipid bilayer. They are carriers of choice because of their biocompatibility and biodegradability, in addition to the flexibility offered by surface modification and by the lipid composition of the bilayer. Liposomes have been widely reported for cancer treatment using both passive and active targeting approaches; however, the tumor microenvironment remains the biggest hurdle for safe and effective delivery of anticancer agents. To overcome this problem, stimuli-responsive smart liposomes have emerged as promising cargoes tailored to the anomalous tumor milieu, responding to pH, temperature, and enzymes as internal triggers, and to magnetic field, ultrasound, and redox potential as external guides for enhancing drug delivery to tumors. This review focuses on such stimuli-responsive approaches, exploiting the fabrication potential of liposomes in combination with various ligands, linkers, and PEGylation. Scientists engaged in cancer targeting approaches can benefit greatly from this assemblage of advances in liposomal nanovectors. Copyright © Bentham Science Publishers.

  16. Intraindividual variability in vigilance performance: does degrading visual stimuli mimic age-related "neural noise"?

    Science.gov (United States)

    MacDonald, Stuart W S; Hultsch, David F; Bunce, David

    2006-07-01

    Intraindividual performance variability, or inconsistency, has been shown to predict neurological status, physiological functioning, and age differences and declines in cognition. However, potential moderating factors of inconsistency are not well understood. The present investigation examined whether inconsistency in vigilance response latencies varied as a function of time-on-task and task demands by degrading visual stimuli in three separate conditions (10%, 20%, and 30%). Participants were 24 younger women aged 21 to 30 years (M = 24.04, SD = 2.51) and 23 older women aged 61 to 83 years (M = 68.70, SD = 6.38). A measure of within-person inconsistency, the intraindividual standard deviation (ISD), was computed for each individual across reaction time (RT) trials (3 blocks of 45 event trials) for each condition of the vigilance task. Greater inconsistency was observed with increasing stimulus degradation and age, even after controlling for group differences in mean RTs and physical condition. Further, older adults were more inconsistent than younger adults for similar degradation conditions, with ISD scores for younger adults in the 30% condition approximating estimates observed for older adults in the 10% condition. Finally, a measure of perceptual sensitivity shared increasing negative associations with ISDs, with this association further modulated as a function of age but to a lesser degree by degradation condition. Results support current hypotheses suggesting that inconsistency serves as a marker of neurological integrity and are discussed in terms of potential underlying mechanisms.
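
    The intraindividual standard deviation (ISD) used above is simply the trial-to-trial SD of a person's response latencies within a condition, typically computed after some adjustment for mean-level differences. A minimal sketch follows, with a deliberately crude within-person centering step standing in for the study's covariate adjustment.

```python
# Sketch of the ISD measure: SD of each participant's RTs per condition,
# raw and after removing that participant's overall mean RT. Placeholder data.
import pandas as pd

rt = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "condition":   ["10%", "10%", "30%", "30%"] * 2,
    "rt_ms":       [420, 470, 510, 610, 530, 540, 640, 700],
})

# Raw ISD per participant x degradation condition
isd = rt.groupby(["participant", "condition"])["rt_ms"].std().rename("isd")

# ISD after centering RTs within person (simple stand-in for mean-RT control)
rt["rt_centered"] = rt["rt_ms"] - rt.groupby("participant")["rt_ms"].transform("mean")
isd_adj = rt.groupby(["participant", "condition"])["rt_centered"].std().rename("isd_adj")

print(pd.concat([isd, isd_adj], axis=1))
```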

  17. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically...

  18. Effects of Binaural Sensory Aids on the Development of Visual Perceptual Abilities in Visually Handicapped Infants. Final Report, April 15, 1982-November 15, 1982.

    Science.gov (United States)

    Hart, Verna; Ferrell, Kay

    Twenty-four congenitally visually handicapped infants, aged 6-24 months, participated in a study to determine (1) those stimuli best able to elicit visual attention, (2) the stability of visual acuity over time, and (3) the effects of binaural sensory aids on both visual attention and visual acuity. Ss were dichotomized into visually handicapped…

  19. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    Science.gov (United States)

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  20. The influence of response competition on cerebral asymmetries for processing hierarchical stimuli revealed by ERP recordings

    OpenAIRE

    Malinowski, Peter; Hübner, Ronald; Keil, Andreas; Gruber, Thomas

    2002-01-01

    It is widely accepted that the left and right hemispheres differ with respect to the processing of global and local aspects of visual stimuli. Recently, behavioural experiments have shown that this processing asymmetry strongly depends on the response competition between the global and local levels of a stimulus. Here we report electrophysiological data that underline this observation. Hemispheric differences for global/local processing were mainly observed for response-incompatible stimuli an...

  1. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  2. Neural Correlates of Visual Aesthetics - Beauty as the Coalescence of Stimulus and Internal State

    NARCIS (Netherlands)

    Jacobs, Richard H. A. H.; Renken, Remco; Cornelissen, Frans W.

    2012-01-01

    How do external stimuli and our internal state coalesce to create the distinctive aesthetic pleasures that give vibrance to human experience? Neuroaesthetics has so far focused on the neural correlates of observing beautiful stimuli compared to neutral or ugly stimuli, or on neural correlates of

  3. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    Science.gov (United States)

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.
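
    The key dependent measure here is prestimulus alpha amplitude in visual cortex. A common way to obtain such an estimate from a (source-reconstructed) time series is to band-pass the signal in the alpha range and average the Hilbert envelope over the window preceding stimulus onset; the sketch below illustrates this generic approach with assumed filter settings and sampling rate, not the study's source-level time-frequency pipeline.

```python
# Prestimulus alpha-amplitude estimate via band-pass filtering + Hilbert envelope.
# Band, window, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def prestim_alpha_amplitude(signal, fs, onset_s, win_s=0.5, band=(8.0, 12.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    envelope = np.abs(hilbert(filtfilt(b, a, signal)))   # instantaneous alpha amplitude
    start, stop = int((onset_s - win_s) * fs), int(onset_s * fs)
    return envelope[start:stop].mean()                   # mean amplitude before onset

fs = 600.0                                   # assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)                # 2 s epoch, stimulus onset at t = 1.5 s
trace = 0.5 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
print(prestim_alpha_amplitude(trace, fs, onset_s=1.5))
```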

  4. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  5. An Artificial Flexible Visual Memory System Based on an UV-Motivated Memristor.

    Science.gov (United States)

    Chen, Shuai; Lou, Zheng; Chen, Di; Shen, Guozhen

    2018-02-01

    For the mimicry of human visual memory, a prominent challenge is how to detect and store image information with electronic devices; this demands a multifunctional integration that senses light like the eye and memorizes image information like the brain, by transforming optical signals into electrical signals that electronic devices can recognize. Although current image sensors can perceive simple images in real time, the image information fades away when the external image stimuli are removed. The gap between state-of-the-art image sensors and a visual memory system inspires the logical integration of image sensors and memory devices to realize the sensing and memory of light information in a bionic design of human visual memory. Hence, a facile architecture is designed to construct an artificial flexible visual memory system by employing a UV-motivated memristor. The visual memory arrays can realize the detection and memory of a UV light distribution with a patterned image for long-term retention, and the stored image information can be reset by a negative voltage sweep and reprogrammed to the same or another image distribution, which demonstrates effective reusability. These results provide new opportunities for the mimicry of human visual memory and enable the flexible visual memory device to be applied in future wearable electronics, electronic eyes, multifunctional robotics, and auxiliary equipment for the visually handicapped. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Is one enough? The case for non-additive influences of visual features on crossmodal Stroop interference

    Directory of Open Access Journals (Sweden)

    Lawrence Gregory Appelbaum

    2013-10-01

    Full Text Available When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such repetitive sensory signals are pitted against other competing stimuli, such as in a Stroop Task, this redundancy may lead to stronger processing that biases behavior towards reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if these stimuli did not contain redundant sensory features. In the present paper we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or the combination of both. Based on previous work showing enhanced crossmodal integration and visual search gains for redundantly coded stimuli, we had expected that relative to the single features, redundant visual features would have induced both greater visual distracter incongruency effects for attended auditory targets, and been less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets, and the cross-modal interactions were dominated by behavioral facilitation (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post hoc analyses revealed modest and trending evidence for possible increases in behavioral interference for redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by a redundancy effect for behavioral facilitation or for attended visual targets.

  7. Sex differences in adults' relative visual interest in female and male faces, toys, and play styles.

    Science.gov (United States)

    Alexander, Gerianne M; Charles, Nora

    2009-06-01

    An individual's reproductive potential appears to influence response to attractive faces of the opposite sex. Otherwise, relatively little is known about the characteristics of the adult observer that may influence his or her affective evaluation of male and female faces. An untested hypothesis (based on the proposed role of attractive faces in mate selection) is that most women would show greater interest in male faces whereas most men would show greater interest in female faces. Further, evidence from individuals with preferences for same-sex sexual partners suggests that response to attractive male and female faces may be influenced by gender-linked play preferences. To test these hypotheses, visual attention directed to sex-linked stimuli (faces, toys, play styles) was measured in 39 men and 44 women using eye tracking technology. Consistent with our predictions, men directed greater visual attention to all male-typical stimuli and visual attention to male and female faces was associated with visual attention to gender conforming or nonconforming stimuli in a manner consistent with previous research on sexual orientation. In contrast, women showed a visual preference for female-typical toys, but no visual preference for male faces or female-typical play styles. These findings indicate that sex differences in visual processing extend beyond stimuli associated with adult sexual behavior. We speculate that sex differences in visual processing are a component of the expression of gender phenotypes across the lifespan that may reflect sex differences in the motivational properties of gender-linked stimuli.

  8. Multiaccommodative stimuli in VR systems: problems & solutions.

    Science.gov (United States)

    Marran, L; Schor, C

    1997-09-01

    Virtual reality environments can introduce multiple and sometimes conflicting accommodative stimuli. For instance, with the high-powered lenses commonly used in head-mounted displays, small discrepancies in screen lens placement, caused by manufacturer error or user adjustment focus error, can change the focal depths of the image by a couple of diopters. This can introduce a binocular accommodative stimulus or, if the displacement between the two screens is unequal, an unequal (anisometropic) accommodative stimulus for the two eyes. Systems that allow simultaneous viewing of virtual and real images can also introduce a conflict in accommodative stimuli: When real and virtual images are at different focal planes, both cannot be in focus at the same time, though they may appear to be in similar locations in space. In this paper four unique designs are described that minimize the range of accommodative stimuli and maximize the visual system's ability to cope efficiently with the focus conflicts that remain: pinhole optics, monocular lens addition combined with aniso-accommodation, chromatic bifocal, and bifocal lens system. The advantages and disadvantages of each design are described and recommendation for design choice is given after consideration of the end use of the virtual reality system (e.g., low or high end, entertainment, technical, or medical use). The appropriate design modifications should allow greater user comfort and better performance.
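
    A back-of-the-envelope thin-lens calculation (not from the paper) shows why millimetre-scale errors in screen-lens placement translate into "a couple of diopters": with the screen a distance d behind a lens of focal length f (d at or inside the focal plane), the accommodative stimulus is roughly 1/d - 1/f diopters, ignoring the eye-lens separation. For an assumed 20 D eyepiece:

```python
# Thin-lens illustration of the accommodative-stimulus error caused by
# screen displacement behind a high-powered HMD lens. Lens power and
# displacements are assumed values, not from the paper.
def accommodative_stimulus_D(f_m, d_m):
    return 1.0 / d_m - 1.0 / f_m   # diopters; 0 when the screen sits at the focal plane

f = 0.050                           # 20 D lens, typical of HMD eyepieces (assumed)
for displacement_mm in (0.0, 2.0, 5.0):
    d = f - displacement_mm / 1000.0
    print(f"screen {displacement_mm:.0f} mm inside focus -> "
          f"{accommodative_stimulus_D(f, d):.2f} D of accommodative demand")
# 2 mm -> ~0.83 D; 5 mm -> ~2.22 D, i.e., "a couple of diopters"
```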

  9. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    Science.gov (United States)

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the subjects' accuracy and reaction times (RTs). We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  10. Anxiety and autonomic response to social-affective stimuli in individuals with Williams syndrome.

    Science.gov (United States)

    Ng, Rowena; Bellugi, Ursula; Järvinen, Anna

    2016-12-01

    Williams syndrome (WS) is a genetic condition characterized by an unusual "hypersocial" personality juxtaposed by high anxiety. Recent evidence suggests that autonomic reactivity to affective face stimuli is disorganised in WS, which may contribute to emotion dysregulation and/or social disinhibition. Electrodermal activity (EDA) and mean interbeat interval (IBI) of 25 participants with WS (19 - 57 years old) and 16 typically developing (TD; 17-43 years old) adults were measured during a passive presentation of affective face and voice stimuli. The Beck Anxiety Inventory was administered to examine associations between autonomic reactivity to social-affective stimuli and anxiety symptomatology. The WS group was characterized by higher overall anxiety symptomatology, and poorer anger recognition in social visual and aural stimuli relative to the TD group. No between-group differences emerged in autonomic response patterns. Notably, for participants with WS, increased anxiety was uniquely associated with diminished arousal to angry faces and voices. In contrast, for the TD group, no associations emerged between anxiety and physiological responsivity to social-emotional stimuli. The anxiety associated with WS appears to be intimately related to reduced autonomic arousal to angry social stimuli, which may also be linked to the characteristic social disinhibition. Copyright © 2016. Published by Elsevier Ltd.

  11. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. In Situ Cross-Linking of Stimuli-Responsive Hemicellulose Microgels during Spray Drying

    Science.gov (United States)

    2015-01-01

    Chemical cross-linking during spray drying offers the potential for green fabrication of microgels with a rapid stimuli response and good blood compatibility and provides a platform for stimuli-responsive hemicellulose microgels (SRHMGs). The cross-linking reaction occurs rapidly in situ at elevated temperature during spray drying, enabling the production of microgels in a large scale within a few minutes. The SRHMGs with an average size range of ∼1–4 μm contain O-acetyl-galactoglucomannan as a matrix and poly(acrylic acid), aniline pentamer (AP), and iron as functional additives, which are responsive to external changes in pH, electrochemical stimuli, magnetic field, or dual-stimuli. The surface morphologies, chemical compositions, charge, pH, and mechanical properties of these smart microgels were evaluated using scanning electron microscopy, IR, zeta potential measurements, pH evaluation, and quantitative nanomechanical mapping, respectively. Different oxidation states were observed when AP was introduced, as confirmed by UV spectroscopy and cyclic voltammetry. Systematic blood compatibility evaluations revealed that the SRHMGs have good blood compatibility. This bottom-up strategy to synthesize SRHMGs enables a new route to the production of smart microgels for biomedical applications. PMID:25630464

  13. In situ cross-linking of stimuli-responsive hemicellulose microgels during spray drying.

    Science.gov (United States)

    Zhao, Weifeng; Nugroho, Robertus Wahyu N; Odelius, Karin; Edlund, Ulrica; Zhao, Changsheng; Albertsson, Ann-Christine

    2015-02-25

    Chemical cross-linking during spray drying offers the potential for green fabrication of microgels with a rapid stimuli response and good blood compatibility and provides a platform for stimuli-responsive hemicellulose microgels (SRHMGs). The cross-linking reaction occurs rapidly in situ at elevated temperature during spray drying, enabling the production of microgels in a large scale within a few minutes. The SRHMGs with an average size range of ∼ 1-4 μm contain O-acetyl-galactoglucomannan as a matrix and poly(acrylic acid), aniline pentamer (AP), and iron as functional additives, which are responsive to external changes in pH, electrochemical stimuli, magnetic field, or dual-stimuli. The surface morphologies, chemical compositions, charge, pH, and mechanical properties of these smart microgels were evaluated using scanning electron microscopy, IR, zeta potential measurements, pH evaluation, and quantitative nanomechanical mapping, respectively. Different oxidation states were observed when AP was introduced, as confirmed by UV spectroscopy and cyclic voltammetry. Systematic blood compatibility evaluations revealed that the SRHMGs have good blood compatibility. This bottom-up strategy to synthesize SRHMGs enables a new route to the production of smart microgels for biomedical applications.

  14. The effect of spatio-temporal distance between visual stimuli on information processing in children with Specific Language Impairment.

    Science.gov (United States)

    Dispaldro, Marco; Corradi, Nicola

    2015-01-01

    The purpose of this study is to evaluate whether children with Specific Language Impairment (SLI) have a deficit in processing a sequence of two visual stimuli (S1 and S2) presented at different inter-stimulus intervals and in different spatial locations. In particular, the core of this study is to investigate whether S1 identification is disrupted due to a retroactive interference of S2. To this aim, two experiments were planned in which children with SLI and children with typical development (TD), matched by age and non-verbal IQ, were compared (Experiment 1: SLI n=19; TD n=19; Experiment 2: SLI n=16; TD n=16). Results show group differences in the ability to identify a single stimulus surrounded by flankers (Baseline level). Moreover, children with SLI show a stronger negative interference of S2, both for temporal and spatial modulation. These results are discussed in the light of an attentional processing limitation in children with SLI. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Mirrored and rotated stimuli are not the same: A neuropsychological and lesion mapping study.

    Science.gov (United States)

    Martinaud, Olivier; Mirlink, Nicolas; Bioux, Sandrine; Bliaux, Evangéline; Champmartin, Cécile; Pouliquen, Dorothée; Cruypeninck, Yohann; Hannequin, Didier; Gérardin, Emmanuel

    2016-05-01

    Agnosia for mirrored stimuli is a rare clinical deficit. Only eight patients have been reported in the literature so far and little is known about the neural substrates of this agnosia. Using a previously developed experimental test designed to assess this agnosia, namely the Mirror and Orientation Agnosia Test (MOAT), as well as voxel-lesion symptom mapping (VLSM), we tested the hypothesis that focal brain-injured patients with right parietal damage would be impaired in the discrimination between the canonical view of a visual object and its mirrored and rotated images. Thirty-four consecutively recruited patients with a stroke involving the right or left parietal lobe were included: twenty patients (59%) had a deficit on at least one of the six conditions of the MOAT, fourteen patients (41%) had a deficit on the mirror condition, twelve patients (35%) had a deficit on at least one of the four rotated conditions and one had a truly selective agnosia for mirrored stimuli. A lesion analysis showed that discrimination of mirrored stimuli was correlated with the mesial part of the posterior superior temporal gyrus and the lateral part of the inferior parietal lobule, while discrimination of rotated stimuli was correlated with the lateral part of the posterior superior temporal gyrus and the mesial part of the inferior parietal lobule, with only a small overlap between the two. These data suggest that the right visual 'dorsal' pathway is essential for accurate perception of mirrored and rotated stimuli, with a selective cognitive process and anatomical network underlying our ability to discriminate between mirrored images, different from the process of discriminating between rotated images. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. The effect of internal and external fields of view on visually induced motion sickness.

    Science.gov (United States)

    Bos, Jelte E; de Vries, Sjoerd C; van Emmerik, Martijn L; Groen, Eric L

    2010-07-01

    Field of view (FOV) is said to affect visually induced motion sickness. FOV, however, is characterized by an internal setting used by the graphics generator (iFOV) and an external factor determined by screen size and viewing distance (eFOV). We hypothesized that especially the incongruence between iFOV and eFOV would lead to sickness. To that end we used a computer game environment with different iFOV and eFOV settings, and found the opposite effect. We speculate that the relative large differences between iFOV and eFOV used in this experiment caused the discrepancy, as may be explained by assuming an observer model controlling body motion. Copyright 2009 Elsevier Ltd. All rights reserved.

  17. Visual experience and blindsight: A methodological review

    DEFF Research Database (Denmark)

    Overgaard, Morten

    2011-01-01

    Blindsight is classically defined as residual visual capacity, e.g., to detect and identify visual stimuli, in the total absence of perceptual awareness following lesions to V1. However, whereas most experiments have investigated what blindsight patients can and cannot do, the literature contains...

  18. Visual search of illusory contours: Shape and orientation effects

    Directory of Open Access Journals (Sweden)

    Gvozdenović Vasilije

    2008-01-01

    Full Text Available Illusory contours are a specific class of visual stimuli: configurations that are perceived as integral wholes despite being presented as fragmented, incomplete parts. Owing to these specific features, illusory contours gained much attention in the last decade as prototypical stimuli in investigations of the binding problem. On the other hand, investigations of illusory contours are also related to the question of the level at which they are visually processed. Neurophysiological studies show that the processing of illusory contours proceeds relatively early, at the level of V2; most experimental studies, however, claim that illusory contours are perceived through the engagement of visual attention, which binds their elements into a whole percept. This research comprised two experiments in which visual search for illusory contours was based on shape and orientation. The main experimental procedure was based on the task proposed by Bravo and Nakayama, in which, instead of detection, subjects performed identification of one of two possible targets. In the first experiment subjects detected the presence of an illusory square or an illusory triangle, while in the second experiment subjects detected two different orientations of an illusory triangle. The results are interpreted in terms of visual search and feature integration theory. Besides the type of visual search task, the type of search proved to depend on specific features of the illusory shapes, which further complicates theoretical interpretation of the level at which they are perceived.

  19. Hemispheric specialization in dogs for processing different acoustic stimuli.

    Directory of Open Access Journals (Sweden)

    Marcello Siniscalchi

    Full Text Available Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, and so is specialisation of the right hemisphere for intense emotions.

  20. Collinearity Impairs Local Element Visual Search

    Science.gov (United States)

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  1. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    International Nuclear Information System (INIS)

    Nasaruddin, N H; Yusoff, A N; Kaur, S

    2014-01-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus

  2. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    Science.gov (United States)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.
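
    The "differential analyses" in these two records amount to first-level contrasts between the moving and static checkerboard conditions. The study used SPM; purely for illustration, an equivalent contrast can be set up with nilearn as sketched below, where the file name, TR, and block timings are placeholders rather than values from the paper.

```python
# Illustrative first-level contrasts (moving > static and the reverse) for a
# block-design checkerboard experiment; all names and timings are placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

events = pd.DataFrame({
    "onset":      [0, 30, 60, 90],          # seconds (assumed block timing)
    "duration":   [30, 30, 30, 30],
    "trial_type": ["moving", "static", "moving", "static"],
})

model = FirstLevelModel(t_r=2.0, hrf_model="spm", noise_model="ar1")
model = model.fit("sub01_task-checkerboard_bold.nii.gz", events=events)

z_moving_gt_static = model.compute_contrast("moving - static", output_type="z_score")
z_static_gt_moving = model.compute_contrast("static - moving", output_type="z_score")
```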

  3. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    Science.gov (United States)

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  4. Attentional capture by social stimuli in young infants

    OpenAIRE

    Gluckman, Maxie; Johnson, Scott P.

    2013-01-01

    We investigated the possibility that a range of social stimuli capture the attention of 6-month-old infants when in competition with other non-face objects. Infants viewed a series of six-item arrays in which one target item was a face, body part, or animal as their eye movements were recorded. Stimulus arrays were also processed for relative salience of each item in terms of color, luminance, and amount of contour. Targets were rarely the most visually salient items in the arrays, yet inf...

  5. Brain correlates of automatic visual change detection.

    Science.gov (United States)

    Cléry, H; Andersson, F; Fonlupt, P; Gomot, M

    2013-07-15

    A number of studies support the presence of visual automatic detection of change, but little is known about the brain generators involved in such processing and about the modulation of brain activity according to the salience of the stimulus. The study presented here was designed to locate the brain activity elicited by unattended visual deviant and novel stimuli using fMRI. Seventeen adult participants were presented with a passive visual oddball sequence while performing a concurrent visual task. Variations in BOLD signal were observed in the modality-specific sensory cortex, but also in non-specific areas involved in preattentional processing of changing events. A degree-of-deviance effect was observed, since novel stimuli elicited more activity in the sensory occipital regions and at the medial frontal site than small changes. These findings could be compared to those obtained in the auditory modality and might suggest a "general" change detection process operating in several sensory modalities. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the

  7. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
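
    The Bayesian Causal Inference framework referred to in these two records (Körding et al., 2007) computes, for each audiovisual trial, the posterior probability that the visual and auditory measurements share a common cause, and then averages the fused and segregated location estimates by that probability. A compact sketch for a single trial, with illustrative (not fitted) noise and prior parameters, is:

```python
# Bayesian Causal Inference for one audiovisual trial: posterior probability of
# a common cause and model-averaged location estimates. Parameter values are
# illustrative, not those fitted to the study's data.
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def bci_estimates(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=15.0, mu_p=0.0, p_common=0.5):
    var_v, var_a, var_p = sig_v ** 2, sig_a ** 2, sig_p ** 2
    # Likelihood of both measurements under a single common cause (closed form)
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a) ** 2 * var_p
                             + (x_v - mu_p) ** 2 * var_a
                             + (x_a - mu_p) ** 2 * var_v) / denom) / (2 * np.pi * np.sqrt(denom))
    # Likelihood under two independent causes
    like_c2 = gauss(x_v, mu_p, var_v + var_p) * gauss(x_a, mu_p, var_a + var_p)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Precision-weighted location estimates under each causal structure
    s_fused = (x_v / var_v + x_a / var_a + mu_p / var_p) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_v_indep = (x_v / var_v + mu_p / var_p) / (1 / var_v + 1 / var_p)
    s_a_indep = (x_a / var_a + mu_p / var_p) / (1 / var_a + 1 / var_p)
    # Model-averaged visual and auditory estimates
    s_v = post_c1 * s_fused + (1 - post_c1) * s_v_indep
    s_a = post_c1 * s_fused + (1 - post_c1) * s_a_indep
    return post_c1, s_v, s_a

print(bci_estimates(x_v=5.0, x_a=-5.0))
```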

  8. A Bilateral Advantage for Storage in Visual Working Memory

    Science.gov (United States)

    Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward

    2010-01-01

    Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…

  9. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind
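
    The "shape-classification performance" mentioned above refers to multivoxel pattern classification of the probe's shape category from the BOLD response. A generic cross-validated decoding sketch of that kind is shown below with random placeholder data; the study's actual pipeline may differ.

```python
# Cross-validated decoding of shape category from trial-wise voxel patterns.
# Data are random placeholders, so accuracy should hover around chance (0.5).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
patterns = rng.standard_normal((n_trials, n_voxels))   # trial x voxel estimates
shape_labels = rng.integers(0, 2, n_trials)             # 0/1 shape category

clf = LinearSVC(C=1.0, max_iter=10000)
accuracy = cross_val_score(clf, patterns, shape_labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```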

  10. The Role of Inhibition in Avoiding Distraction by Salient Stimuli.

    Science.gov (United States)

    Gaspelin, Nicholas; Luck, Steven J

    2018-01-01

    Researchers have long debated whether salient stimuli can involuntarily 'capture' visual attention. We review here evidence for a recently discovered inhibitory mechanism that may help to resolve this debate. This evidence suggests that salient stimuli naturally attempt to capture attention, but capture can be avoided if the salient stimulus is suppressed before it captures attention. Importantly, the suppression process can be more or less effective as a result of changing task demands or lapses in cognitive control. Converging evidence for the existence of this suppression mechanism comes from multiple sources, including psychophysics, eye-tracking, and event-related potentials (ERPs). We conclude that the evidence for suppression is strong, but future research will need to explore the nature and limits of this mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.

    Science.gov (United States)

    Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D

    2011-10-30

    Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
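
    The record describes PC control of per-LED colour, brightness, and timing, but the command protocol itself is not documented here. The sketch below is purely illustrative: it assumes a hypothetical ASCII command set (SET/ON/OFF) over a serial link and uses pyserial for the connection; the real device's firmware commands, port name, and baud rate will differ.

```python
import time
import serial  # pyserial

# The real device's firmware commands are not documented in this record;
# "SET"/"ON"/"OFF" below are a made-up ASCII protocol used only for illustration.

def set_led(port, led_id, r, g, b, brightness):
    """Send a hypothetical command setting one LED's colour and brightness."""
    port.write(f"SET {led_id} {r} {g} {b} {brightness}\n".encode("ascii"))

def flash_led(port, led_id, duration_s):
    """Switch one LED on for duration_s seconds (software timing only)."""
    port.write(f"ON {led_id}\n".encode("ascii"))
    time.sleep(duration_s)  # real fMRI paradigms would rely on hardware timing
    port.write(f"OFF {led_id}\n".encode("ascii"))

if __name__ == "__main__":
    # Port name and baud rate are placeholders
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
        set_led(port, led_id=3, r=255, g=0, b=0, brightness=128)  # dim red
        flash_led(port, led_id=3, duration_s=0.5)                 # 500 ms flash
```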

  12. The working memory stroop effect: when internal representations clash with external stimuli.

    Science.gov (United States)

    Kiyonaga, Anastasia; Egner, Tobias

    2014-08-01

    Working memory (WM) has recently been described as internally directed attention, which implies that WM content should affect behavior exactly like an externally perceived and attended stimulus. We tested whether holding a color word in WM, rather than attending to it in the external environment, can produce interference in a color-discrimination task, which would mimic the classic Stroop effect. Over three experiments, the WM Stroop effect recapitulated core properties of the classic attentional Stroop effect, displaying equivalent congruency effects, additive contributions from stimulus- and response-level congruency, and susceptibility to modulation by the percentage of congruent and incongruent trials. Moreover, WM maintenance was inversely related to attentional demands during the WM delay between stimulus presentation and recall, with poorer memory performance following incongruent than congruent trials. Together, these results suggest that WM and attention rely on the same resources and operate over the same representations. © The Author(s) 2014.

  13. False memories to emotional stimuli are not equally affected in right- and left-brain-damaged stroke patients.

    Science.gov (United States)

    Buratto, Luciano Grüdtner; Zimmermann, Nicolle; Ferré, Perrine; Joanette, Yves; Fonseca, Rochele Paz; Stein, Lilian Milnitsky

    2014-10-01

    Previous research has attributed to the right hemisphere (RH) a key role in eliciting false memories to visual emotional stimuli. These results have been explained in terms of two right-hemisphere properties: (i) that emotional stimuli are preferentially processed in the RH and (ii) that visual stimuli are represented more coarsely in the RH. According to this account, false emotional memories are preferentially produced in the RH because emotional stimuli are both more strongly and more diffusely activated during encoding, leaving a memory trace that can be erroneously reactivated by similar but unstudied emotional items at test. If this right-hemisphere hypothesis is correct, then RH damage should result in a reduction in false memories to emotional stimuli relative to left-hemisphere lesions. To investigate this possibility, groups of right-brain-damaged (RBD, N=15), left-brain-damaged (LBD, N=15) and healthy (HC, N=30) participants took part in a recognition memory experiment with emotional (negative and positive) and non-emotional pictures. False memories were operationalized as incorrect responses to unstudied pictures that were similar to studied ones. Both RBD and LBD participants showed similar reductions in false memories for negative pictures relative to controls. For positive pictures, however, false memories were reduced only in RBD patients. The results provide only partial support for the right-hemisphere hypothesis and suggest that inter-hemispheric cooperation models may be necessary to fully account for false emotional memories. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Elevated audiovisual temporal interaction in patients with migraine without aura

    Science.gov (United States)

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls, and audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
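
    The cumulative-distribution-function analysis of response times mentioned in the Methods is commonly implemented as a race-model test (Miller's inequality), in which the CDF of redundant audiovisual responses is compared against the summed unisensory CDFs. The sketch below shows that generic computation on made-up response times; the published analysis may differ in detail.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of response times evaluated on t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Hypothetical response times (ms) for one participant and hemispace
rt_visual = np.array([420, 450, 480, 510, 530, 560])
rt_audio = np.array([400, 430, 470, 500, 540, 580])
rt_audiovisual = np.array([360, 380, 400, 430, 450, 470])

t = np.arange(300, 700, 10)
f_av = ecdf(rt_audiovisual, t)
# Race-model bound (Miller's inequality): F_V(t) + F_A(t), capped at 1
race_bound = np.minimum(ecdf(rt_visual, t) + ecdf(rt_audio, t), 1.0)

# Positive values mark violations of the race model, taken as evidence of integration
violation = f_av - race_bound
print("maximum race-model violation:", violation.max())
```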

  15. Visually Evoked Spiking Evolves While Spontaneous Ongoing Dynamics Persist

    Science.gov (United States)

    Huys, Raoul; Jirsa, Viktor K.; Darokhan, Ziauddin; Valentiniene, Sonata; Roland, Per E.

    2016-01-01

    Neurons in the primary visual cortex spontaneously spike even when there are no visual stimuli. It is unknown whether the spiking evoked by visual stimuli is just a modification of the spontaneous ongoing cortical spiking dynamics or whether the spontaneous spiking state disappears and is replaced by evoked spiking. This study of laminar recordings of spontaneous spiking and visually evoked spiking of neurons in the ferret primary visual cortex shows that the spiking dynamics does not change: the spontaneous spiking as well as evoked spiking is controlled by a stable and persisting fixed point attractor. Its existence guarantees that evoked spiking returns to the spontaneous state. However, the spontaneous ongoing spiking state and the visually evoked spiking states are qualitatively different and are separated by a threshold (separatrix). The functional advantage of this organization is that it avoids the need for a system reorganization following visual stimulation, and impedes the transition of spontaneous spiking to evoked spiking and the propagation of spontaneous spiking from layer 4 to layers 2–3. PMID:26778982

  16. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    Science.gov (United States)

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated

  17. Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance

    Science.gov (United States)

    Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2017-01-01

    The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836

  18. TypingSuite: Integrated Software for Presenting Stimuli, and Collecting and Analyzing Typing Data

    Science.gov (United States)

    Mazerolle, Erin L.; Marchand, Yannick

    2015-01-01

    Research into typing patterns has broad applications in both psycholinguistics and biometrics (i.e., improving security of computer access via each user's unique typing patterns). We present a new software package, TypingSuite, which can be used for presenting visual and auditory stimuli, collecting typing data, and summarizing and analyzing the…

  19. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging.

    Science.gov (United States)

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-08-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. Copyright © 2013 Elsevier Ltd. All rights reserved.
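
    Post-stimulus alpha suppression of the kind analyzed here is usually quantified as the change in 8-12 Hz power after a distractor or interrupter relative to a pre-stimulus baseline. The sketch below shows one generic way to compute such an index with SciPy; window lengths, channel selection, and the exact normalization used in the study may differ.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(segment, fs):
    """Mean power spectral density in the 8-12 Hz band of a single-channel segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), 256))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def alpha_suppression_db(trial, fs, onset_sample):
    """Post-stimulus alpha power relative to the pre-stimulus baseline, in dB.

    Negative values indicate alpha suppression. The 1 s windows are arbitrary choices.
    """
    baseline = trial[:onset_sample]
    post = trial[onset_sample:onset_sample + fs]
    return 10 * np.log10(alpha_power(post, fs) / alpha_power(baseline, fs))

# Hypothetical single-channel EEG trial: 1 s baseline + 2 s after an interrupting stimulus
fs = 250
trial = np.random.default_rng(0).standard_normal(3 * fs)
print(alpha_suppression_db(trial, fs, onset_sample=fs))
```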

  20. A Steady-State Visual Evoked Potential Brain-Computer Interface System Evaluation as an In-Vehicle Warning Device

    Science.gov (United States)

    Riyahi, Pouria

    This thesis is part of current research at the Center for Intelligence Systems Research (CISR) at The George Washington University for developing new in-vehicle warning systems via Brain-Computer Interfaces (BCIs). The purpose of conducting this research is to help bridge the current gap between BCI and in-vehicle safety studies. It is based on the premise that accurate and timely monitoring of the human (driver) brain's signals in response to external stimuli could significantly aid in the detection of a driver's intentions and the development of effective warning systems. The thesis starts by introducing the concept of BCI and its development history and provides a literature review on the nature of brain signals. The current advancement and increasing demand for commercial and non-medical BCI products are described. In addition, recent research attempts in transportation safety to study drivers' behavior or responses through brain signals are reviewed. Safety studies that focus on employing a reliable and practical BCI system as an in-vehicle assistive device are also introduced. A major focus of this thesis research has been the evaluation and development of signal processing algorithms which can effectively filter and process brain signals when the human subject is exposed to visual LED (light-emitting diode) stimuli at different frequencies. The stimulated brain generates a voltage potential, referred to as the Steady-State Visual Evoked Potential (SSVEP). Therefore, a newly modified analysis algorithm for detecting these visual brain signals is proposed. These algorithms are designed to reach a satisfactory accuracy rate without preliminary training, thereby eliminating the need for lengthy training of human subjects. Another important concern is the ability of the algorithms to find correlations between brain signals and external visual stimuli in real time. The developed analysis models are based on algorithms which are capable of generating results
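
    The thesis abstract refers to a training-free algorithm for detecting SSVEP responses to LED flicker. A widely used baseline of this kind (not necessarily the modified algorithm developed in the thesis) is canonical correlation analysis between the multichannel EEG and sine/cosine reference signals at each candidate flicker frequency, with the detected frequency being the one yielding the largest canonical correlation. The sketch below illustrates this on simulated data; the flicker frequencies and recording parameters are hypothetical.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_score(eeg, fs, freq, n_harmonics=2):
    """Largest canonical correlation between the EEG (samples x channels) and
    sine/cosine reference signals at `freq` and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    u, v = CCA(n_components=1).fit_transform(eeg, Y)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

# Simulated 4-channel, 4-second recording containing a 12 Hz SSVEP plus noise
fs = 250
t = np.arange(4 * fs) / fs
rng = np.random.default_rng(1)
eeg = 0.5 * np.sin(2 * np.pi * 12 * t)[:, None] + rng.standard_normal((t.size, 4))

candidate_freqs = [8, 10, 12, 15]  # hypothetical LED flicker frequencies
scores = {f: cca_ssvep_score(eeg, fs, f) for f in candidate_freqs}
print("detected frequency:", max(scores, key=scores.get), "Hz")
```

    Because the reference signals are constructed analytically, this kind of detector needs no subject-specific calibration, which is the property the thesis emphasizes.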

  1. Stimuli-Responsive Block Copolymer-Based Assemblies for Cargo Delivery and Theranostic Applications

    Directory of Open Access Journals (Sweden)

    Jun Yin

    2016-07-01

    Full Text Available Although many tactics for the fabrication and biomedical exploration of stimuli-responsive polymeric assemblies, responsive and adaptive to various factors, have appeared, the controlled preparation of assemblies with well-defined physicochemical properties and tailor-made functions remains a challenge. These responsive polymeric assemblies exhibit reversible or irreversible changes in chemical structure and physical properties when triggered by stimuli. However, simple drug/polymer nanocomplexes cannot deliver or release drugs into diseased sites and cells on demand because of inevitable biological barriers. Hence, stimuli-responsive block copolymer assemblies loaded with therapeutic or imaging agents, which respond to the internal tumor microenvironment (pH, redox, enzymes, temperature, etc.) or to external stimuli (light, electromagnetic fields, etc.), have emerged as an important means of improving therapeutic efficacy and imaging sensitivity through rational design and self-assembly approaches. In this review, we summarize a portion of recent progress on tumor and intracellular microenvironment-responsive block copolymer assemblies and their applications in anticancer drug delivery, triggered release, and enhanced imaging sensitivity. The outlook for future developments is also discussed. We hope that this review will stimulate revolutionary ideas and novel concepts and be of significant interest to diverse readers.

  2. Face processing is gated by visual spatial attention

    Directory of Open Access Journals (Sweden)

    Roy E Crist

    2008-03-01

    Full Text Available Human perception of faces is widely believed to rely on automatic processing by a domain-specific, modular component of the visual system. Scalp-recorded event-related potential (ERP) recordings indicate that faces receive special stimulus processing at around 170 ms poststimulus onset, in that faces evoke an enhanced occipital negative wave, known as the N170, relative to the activity elicited by other visual objects. As predicted by modular accounts of face processing, this early face-specific N170 enhancement has been reported to be largely immune to the influence of endogenous processes such as task strategy or attention. However, most studies examining the influence of attention on face processing have focused on non-spatial attention, such as object-based attention, which tends to have longer-latency effects. In contrast, numerous studies have demonstrated that visual spatial attention can modulate the processing of visual stimuli as early as 80 ms poststimulus – substantially earlier than the N170. These temporal characteristics raise the question of whether this initial face-specific processing is immune to the influence of spatial attention. This question was addressed in a dual-visual-stream ERP study in which the influence of spatial attention on the face-specific N170 could be directly examined. As expected, early visual sensory responses to all stimuli presented in an attended location were larger than responses evoked by those same stimuli when presented in an unattended location. More importantly, a significant face-specific N170 effect was elicited by faces that appeared in an attended location, but not in an unattended one. In summary, early face-specific processing is not automatic but rather, like the processing of other objects, strongly depends on endogenous factors such as the allocation of spatial attention. Moreover, these findings underscore the extensive influence that top-down attention exercises over the processing of

  3. Audio visual interaction in the context of multi-media applications

    NARCIS (Netherlands)

    Kohlrausch, A.G.; Par, van de S.L.J.D.E.; Blauert, J.

    2005-01-01

    In our natural environment, we simultaneously receive information through various sensory modalities. The properties of these stimuli are coupled by physical laws, so that, e. g., auditory and visual stimuli caused by the same event have a specific temporal, spatial and contextual relation when

  4. First- and second-order contrast sensitivity functions reveal disrupted visual processing following mild traumatic brain injury.

    Science.gov (United States)

    Spiegel, Daniel P; Reynaud, Alexandre; Ruiz, Tatiana; Laguë-Beauvais, Maude; Hess, Robert; Farivar, Reza

    2016-05-01

    Vision is disrupted by traumatic brain injury (TBI), with vision-related complaints being amongst the most common in this population. Based on the neural responses of early visual cortical areas, injury to the visual cortex would be predicted to affect both 1st-order and 2nd-order contrast sensitivity functions (CSFs); the height and/or the cut-off of the CSF are expected to be affected by TBI. Previous studies have reported disruptions only in 2nd-order contrast sensitivity, but using a narrow range of parameters and divergent methodologies; no study has characterized the effect of TBI on the full CSF for both 1st- and 2nd-order stimuli. Such information is needed to properly understand the effect of TBI on contrast perception, which underlies all visual processing. Using a unified framework based on the quick contrast sensitivity function, we measured full CSFs for static and dynamic 1st- and 2nd-order stimuli. Our results provide a unique dataset showing alterations in sensitivity for both 1st- and 2nd-order visual stimuli. In particular, we show that TBI patients have increased sensitivity for 1st-order motion stimuli and decreased sensitivity to orientation-defined and contrast-defined 2nd-order stimuli. In addition, our data suggest that TBI patients' sensitivity for both 1st-order stimuli and 2nd-order contrast-defined stimuli is shifted towards higher spatial frequencies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
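
    The quick contrast sensitivity function framework mentioned here models the CSF as a truncated log-parabola with four parameters (peak gain, peak spatial frequency, bandwidth, low-frequency truncation). As a worked illustration of that functional form (the parameter values below are invented and implementation details may differ from the authors'):

```python
import numpy as np

def truncated_log_parabola_csf(freq, gain_max, f_peak, bandwidth, truncation):
    """Truncated log-parabola CSF of the kind used by the quick CSF method.

    freq: spatial frequency in cycles/deg; bandwidth is the full width at half
    maximum in log10 units; truncation flattens the low-frequency limb.
    Returns contrast sensitivity. Parameter values used below are invented.
    """
    log_sens = (np.log10(gain_max)
                - 4 * np.log10(2) * ((np.log10(freq) - np.log10(f_peak)) / bandwidth) ** 2)
    # Below the peak, sensitivity is not allowed to fall more than `truncation` log units
    floor = np.log10(gain_max) - truncation
    log_sens = np.where(freq < f_peak, np.maximum(log_sens, floor), log_sens)
    return 10 ** log_sens

freqs = np.array([0.5, 1, 2, 4, 8, 16])
print(truncated_log_parabola_csf(freqs, gain_max=100, f_peak=3, bandwidth=1.5, truncation=0.5))
```

    Estimating these four parameters per observer and stimulus type is what allows the full 1st- and 2nd-order CSFs to be compared between TBI patients and controls.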

  5. Implicit and Explicit Associations with Erotic Stimuli in Women with and Without Sexual Problems.

    Science.gov (United States)

    van Lankveld, Jacques J D M; Bandell, Myrthe; Bastin-Hurek, Eva; van Beurden, Myra; Araz, Suzan

    2018-02-20

    Conceptual models of sexual functioning have suggested a major role for implicit cognitive processing in sexual functioning. The present study aimed to investigate implicit and explicit cognition in sexual functioning in women. Gynecological patients with (N = 38) and without self-reported sexual problems (N = 41) were compared. Participants performed two Single-Target Implicit Association Tests (ST-IAT), measuring the implicit association of visual erotic stimuli with attributes representing, respectively, valence and motivation. Participants also rated the erotic pictures that were shown in the ST-IATs on the dimensions of valence, attractiveness, and sexual excitement, to assess their explicit associations with these erotic stimuli. Participants completed the Female Sexual Functioning Index and the Female Sexual Distress Scale for continuous measures of sexual functioning, and the Hospital Anxiety and Depression Scale to assess depressive symptoms. Compared to nonsymptomatic women, women with sexual problems were found to show more negative implicit associations of erotic stimuli with wanting (implicit sexual motivation). Across both groups, stronger implicit associations of erotic stimuli with wanting predicted higher level of sexual functioning. More positive explicit ratings of erotic stimuli predicted lower level of sexual distress across both groups.

  6. Linking crowding, visual span, and reading.

    Science.gov (United States)

    He, Yingchen; Legge, Gordon E

    2017-09-01

    The visual span is hypothesized to be a sensory bottleneck on reading speed with crowding thought to be the major sensory factor limiting the size of the visual span. This proposed linkage between crowding, visual span, and reading speed is challenged by the finding that training to read crowded letters reduced crowding but did not improve reading speed (Chung, 2007). Here, we examined two properties of letter-recognition training that may influence the transfer to improved reading: the spatial arrangement of training stimuli and the presence of flankers. Three groups of nine young adults were trained with different configurations of letter stimuli at 10° in the lower visual field: a flanked-local group (flanked letters localized at one position), a flanked-distributed group (flanked letters distributed across different horizontal locations), and an isolated-distributed group (isolated and distributed letters). We found that distributed training, but not the presence of flankers, appears to be necessary for the training benefit to transfer to increased reading speed. Localized training may have biased attention to one specific, small area in the visual field, thereby failing to improve reading. We conclude that the visual span represents a sensory bottleneck on reading, but there may also be an attentional bottleneck. Reducing the impact of crowding can enlarge the visual span and can potentially facilitate reading, but not when adverse attentional bias is present. Our results clarify the association between crowding, visual span, and reading.

  7. The flanker compatibility effect as a function of visual angle, attentional focus, visual transients, and perceptual load: a search for boundary conditions.

    Science.gov (United States)

    Miller, J

    1991-03-01

    When subjects must respond to a relevant center letter and ignore irrelevant flanking letters, the identities of the flankers produce a response compatibility effect, indicating that they are processed semantically at least to some extent. Because this effect decreases as the separation between target and flankers increases, the effect appears to result from imperfect early selection (attenuation). In the present experiments, several features of the focused attention paradigm were examined, in order to determine whether they might produce the flanker compatibility effect by interfering with the operation of an early selective mechanism. Specifically, the effect might be produced because the paradigm requires subjects to (1) attend exclusively to stimuli within a very small visual angle, (2) maintain a long-term attentional focus on a constant display location, (3) focus attention on an empty display location, (4) exclude onset-transient flankers from semantic processing, or (5) ignore some of the few stimuli in an impoverished visual field. The results indicate that none of these task features is required for semantic processing of unattended stimuli to occur. In fact, visual angle is the only one of the task features that clearly has a strong influence on the size of the flanker compatibility effect. The invariance of the flanker compatibility effect across these conditions suggests that the mechanism for early selection rarely, if ever, completely excludes unattended stimuli from semantic analysis. In addition, it shows that selective mechanisms are relatively insensitive to several factors that might be expected to influence them, thereby supporting the view that spatial separation has a special status for visual selective attention.

  8. Integration of motion energy from overlapping random background noise increases perceived speed of coherently moving stimuli.

    Science.gov (United States)

    Chuang, Jason; Ausloos, Emily C; Schwebach, Courtney A; Huang, Xin

    2016-12-01

    The perception of visual motion can be profoundly influenced by visual context. To gain insight into how the visual system represents motion speed, we investigated how a background stimulus that did not move in a net direction influenced the perceived speed of a center stimulus. Visual stimuli were two overlapping random-dot patterns. The center stimulus moved coherently in a fixed direction, whereas the background stimulus moved randomly. We found that human subjects perceived the speed of the center stimulus to be significantly faster than its veridical speed when the background contained motion noise. Interestingly, the perceived speed was tuned to the noise level of the background. When the speed of the center stimulus was low, the highest perceived speed was reached when the background had a low level of motion noise. As the center speed increased, the peak perceived speed was reached at a progressively higher background noise level. The effect of speed overestimation required the center stimulus to overlap with the background. Increasing the background size within a certain range enhanced the effect, suggesting spatial integration. The speed overestimation was significantly reduced or abolished when the center stimulus and the background stimulus had different colors, or when they were placed at different depths. When the center- and background-stimuli were perceptually separable, speed overestimation was correlated with perceptual similarity between the center- and background-stimuli. These results suggest that integration of motion energy from random motion noise has a significant impact on speed perception. Our findings put new constraints on models regarding the neural basis of speed perception. Copyright © 2016 the American Physiological Society.

  9. Evidence for unlimited capacity processing of simple features in visual cortex.

    Science.gov (United States)

    White, Alex L; Runeson, Erik; Palmer, John; Ernst, Zachary R; Boynton, Geoffrey M

    2017-06-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level-dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity.

  10. Audio-visual identification of place of articulation and voicing in white and babble noise.

    Science.gov (United States)

    Alm, Magnus; Behne, Dawn M; Wang, Yue; Eg, Ragnhild

    2009-07-01

    Research shows that noise and phonetic attributes influence the degree to which auditory and visual modalities are used in audio-visual speech perception (AVSP). Research has, however, mainly focused on white noise and single phonetic attributes, thus neglecting the more common babble noise and possible interactions between phonetic attributes. This study explores whether white and babble noise differentially influence AVSP and whether these differences depend on phonetic attributes. White and babble noise of 0 and -12 dB signal-to-noise ratio were added to congruent and incongruent audio-visual stop consonant-vowel stimuli. The audio (A) and video (V) of incongruent stimuli differed either in place of articulation (POA) or voicing. Responses from 15 young adults show that, compared to white noise, babble resulted in more audio responses for POA stimuli, and fewer for voicing stimuli. Voiced syllables received more audio responses than voiceless syllables. Results can be attributed to discrepancies in the acoustic spectra of both the noise and speech target. Voiced consonants may be more auditorily salient than voiceless consonants which are more spectrally similar to white noise. Visual cues contribute to identification of voicing, but only if the POA is visually salient and auditorily susceptible to the noise type.

  11. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    Science.gov (United States)

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184
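
    The contrast drawn here between mean response strength and population response heterogeneity can be made concrete with a simple per-trial computation. The sketch below uses the mean absolute pairwise difference of z-scored responses as a heterogeneity index; the published measure may be defined differently, and the data are simulated.

```python
import numpy as np

def mean_strength_and_heterogeneity(responses):
    """responses: trials x neurons matrix of stimulus-evoked Ca2+ response amplitudes.

    Returns, per trial, the mean response strength across neurons and a simple
    heterogeneity index: the mean absolute pairwise difference of z-scored responses.
    (The published measure may be defined differently.)
    """
    z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
    mean_strength = responses.mean(axis=1)
    pair_diffs = np.abs(z[:, :, None] - z[:, None, :])  # trials x neurons x neurons
    n = z.shape[1]
    heterogeneity = pair_diffs.sum(axis=(1, 2)) / (n * (n - 1))
    return mean_strength, heterogeneity

# Simulated data: 50 trials x 20 neurons
rng = np.random.default_rng(2)
responses = rng.gamma(shape=2.0, scale=1.0, size=(50, 20))
strength, heterogeneity = mean_strength_and_heterogeneity(responses)
print(strength[:3], heterogeneity[:3])
```

    Comparing these two per-trial quantities between hit and miss trials is the kind of analysis that distinguishes a bulk-signal account from one based on differentiated activity within the population.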

  12. Automatic processing of unattended lexical information in visual oddball presentation: neurophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yury eShtyrov

    2013-08-01

    Full Text Available Previous electrophysiological studies of automatic language processing revealed early (100-200 ms reflections of access to lexical characteristics of speech signal using the so-called mismatch negativity (MMN, a negative ERP deflection elicited by infrequent irregularities in unattended repetitive auditory stimulation. In those studies, lexical processing of spoken stimuli became manifest as an enhanced ERP in response to unattended real words as opposed to phonologically matched but meaningless pseudoword stimuli. This lexical ERP enhancement was explained by automatic activation of word memory traces realised as distributed strongly intra-connected neuronal circuits, whose robustness guarantees memory trace activation even in the absence of attention on spoken input. Such an account would predict the automatic activation of these memory traces upon any presentation of linguistic information, irrespective of the presentation modality. As previous lexical MMN studies exclusively used auditory stimulation, we here adapted the lexical MMN paradigm to investigate early automatic lexical effects in the visual modality. In a visual oddball sequence, matched short word and pseudoword stimuli were presented tachistoscopically in perifoveal area outside the visual focus of attention, as the subjects’ attention was concentrated on a concurrent non-linguistic visual dual task in the centre of the screen. Using EEG, we found a visual analogue of the lexical ERP enhancement effect, with unattended written words producing larger brain response amplitudes than matched pseudowords, starting at ~100 ms. Furthermore, we also found significant visual MMN, reported here for the first time for unattended lexical stimuli presented perifoveally. The data suggest early automatic lexical processing of visually presented language outside the focus of attention.

  13. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Effects of auditory and visual modalities in recall of words.

    Science.gov (United States)

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  15. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  16. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel, and that there is a ‘race’ for selection and representation in visual short term memory (VSTM). In the basic TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of the letters (e.g., the red letters; partial report). Fitting the model … modelled using a log-logistic function than an exponential function. A more challenging difference is that in the partial report task, there is more target-distractor confusion for auditory than visual stimuli. This failure of object-formation (prior to attentional object-selection) is not yet effectively …
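
    In TVA, whole report is modelled as an exponential race: each display element x is encoded into VSTM before the effective exposure ends with probability 1 - exp(-v_x(τ - t0)), subject to the capacity limit K. The sketch below simulates that baseline model; the auditory variant discussed above would replace the exponential processing times with, for example, log-logistic ones. All parameter values are hypothetical.

```python
import numpy as np

def tva_whole_report_score(rates, exposure, t0, K, n_sim=20000, seed=0):
    """Expected number of letters reported in a TVA-style whole-report trial.

    rates: processing rate v_x for each display element (items/s); exposure and t0 in s;
    K is the VSTM capacity. Encoding times are exponential, as in standard TVA;
    an auditory variant might use log-logistic times instead. Values are hypothetical.
    """
    rng = np.random.default_rng(seed)
    effective = max(exposure - t0, 0.0)
    # Exponential encoding time for every element on every simulated trial
    times = rng.exponential(1.0 / np.asarray(rates), size=(n_sim, len(rates)))
    encoded = (times <= effective).sum(axis=1)
    # Only K items fit into visual short term memory
    return np.minimum(encoded, K).mean()

# Six letters processed at 8 items/s, 100 ms exposure, 20 ms threshold, capacity 4
print(tva_whole_report_score(rates=[8, 8, 8, 8, 8, 8], exposure=0.1, t0=0.02, K=4))
```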

  17. Picture book exposure elicits positive visual preferences in toddlers.

    Science.gov (United States)

    Houston-Price, Carmel; Burton, Eliza; Hickinson, Rachel; Inett, Jade; Moore, Emma; Salmon, Katherine; Shiba, Paula

    2009-09-01

    Although the relationship between "mere exposure" and attitude enhancement is well established in the adult domain, there has been little similar work with children. This article examines whether toddlers' visual attention toward pictures of foods can be enhanced by repeated visual exposure to pictures of foods in a parent-administered picture book. We describe three studies that explored the number and nature of exposures required to elicit positive visual preferences for stimuli and the extent to which induced preferences generalize to other similar items. Results show that positive preferences for stimuli are easily and reliably induced in children and, importantly, that this effect of exposure is not restricted to the exposed stimulus per se but also applies to new representations of the exposed item.

  18. Audiovisual Capture with Ambiguous Audiovisual Stimuli

    Directory of Open Access Journals (Sweden)

    Jean-Michel Hupé

    2011-10-01

    Full Text Available Audiovisual capture happens when information across modalities gets fused into a coherent percept. Ambiguous multi-modal stimuli have the potential to be powerful tools to observe such effects. We used such stimuli made of temporally synchronized and spatially co-localized visual flashes and auditory tones. The flashes produced bistable apparent motion and the tones produced ambiguous streaming. We measured strong interferences between perceptual decisions in each modality, a case of audiovisual capture. However, does this mean that audiovisual capture occurs before the bistable decision? We argue that this is not the case, as the interference had slow temporal dynamics and was modulated by audiovisual congruence, suggestive of high-level factors such as attention or intention. We propose a framework to integrate bistability and audiovisual capture, which distinguishes between “what” competes and “how” it competes (Hupé et al., 2008). The audiovisual interactions may be the result of contextual influences on neural representations (“what” competes), quite independent from the causal mechanisms of perceptual switches (“how” it competes). This framework predicts that audiovisual capture can bias bistability, especially if the modalities are congruent (Sato et al., 2007), but that it is fundamentally distinct in nature from the bistable competition mechanism.

  19. Neurocognitive correlates of processing food-related stimuli in a Go/No-go paradigm.

    Science.gov (United States)

    Watson, Todd D; Garvey, Katherine T

    2013-12-01

    We examined the neurocognitive correlates of processing food-related stimuli in healthy young adults. Event-related potential (ERP) data were collected while 48 participants completed a computerized Go/No-go task consisting of food and nonfood images. Separately, we assessed participants' self-reported levels of external, restrained, and emotional eating behaviors as well as trait impulsivity, behavioral activation/inhibition, and performance on the Stroop Color-Word Test. We found that across participants, food images elicited significantly enhanced P3(00) and slow-wave ERP components. The difference in slow-wave components elicited by food and nonfood images was correlated with Stroop interference scores. Food images also elicited significantly enhanced N2(00) components, but only in female participants. The difference between N2 components elicited by food and nonfood images was related to body mass index and scores of external eating in females. Overall, these data suggest that processing food-related stimuli recruits distinct patterns of cortical activity, that the magnitude of these effects is related to behavioral and cognitive variables, and that the neurocognitive correlates of processing food-cues may be at least partly dissociable between males and females. Copyright © 2013 Elsevier Ltd. All rights reserved.
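
    The reported relation between the food-minus-nonfood slow-wave difference and Stroop interference is, computationally, a difference-wave amplitude correlated across participants with a behavioural score. A generic sketch of that step follows; the electrode selection, time window, and data are all hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def slow_wave_amplitude(erp, times, window=(0.6, 1.0)):
    """Mean amplitude (participants x timepoints) in a late time window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return erp[:, mask].mean(axis=1)

# Hypothetical group data: 48 participants x 500 timepoints per condition
rng = np.random.default_rng(3)
times = np.linspace(-0.2, 1.2, 500)
erp_food, erp_nonfood = rng.standard_normal((2, 48, 500))
stroop_interference = rng.standard_normal(48)  # one interference score per participant

# Food-minus-nonfood slow-wave difference, correlated with Stroop interference
diff = slow_wave_amplitude(erp_food, times) - slow_wave_amplitude(erp_nonfood, times)
r, p = pearsonr(diff, stroop_interference)
print(f"r = {r:.2f}, p = {p:.3f}")
```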

  20. Visual search and contextual cueing: differential effects in 10-year-old children and adults.

    Science.gov (United States)

    Couperus, Jane W; Hunt, Ruskin H; Nelson, Charles A; Thomas, Kathleen M

    2011-02-01

    The development of contextual cueing specifically in relation to attention was examined in two experiments. Adult and 10-year-old participants completed a context cueing visual search task (Jiang & Chun, The Quarterly Journal of Experimental Psychology, 54A(4), 1105-1124, 2001) containing stimuli presented in an attended (e.g., red) and unattended (e.g., green) color. When the spatial configuration of stimuli in the attended and unattended color was invariant and consistently paired with the target location, adult reaction times improved, demonstrating learning. Learning also occurred if only the attended stimuli's configuration remained fixed. In contrast, while 10 year olds, like adults, showed incrementally slower reaction times as the number of attended stimuli increased, they did not show learning in the standard paradigm. However, they did show learning when the ratio of attended to unattended stimuli was high, irrespective of the total number of attended stimuli. Findings suggest children show efficient attentional guidance by color in visual search but differences in contextual cueing.

  1. Auditory-visual integration of emotional signals in a virtual environment for cynophobia.

    Science.gov (United States)

    Taffou, Marine; Chapoulie, Emmanuelle; David, Adrien; Guerchouche, Rachid; Drettakis, George; Viaud-Delmon, Isabelle

    2012-01-01

    Cynophobia (dog phobia) has both visual and auditory relevant components. In order to investigate the efficacy of virtual reality (VR) exposure-based treatment for cynophobia, we studied the efficiency of auditory-visual environments in generating presence and emotion. We conducted an evaluation test with healthy participants sensitive to cynophobia in order to assess the capacity of auditory-visual virtual environments (VE) to generate fear reactions. Our application involves both high fidelity visual stimulation displayed in an immersive space and 3D sound. This specificity enables us to present and spatially manipulate fearful stimuli in the auditory modality, the visual modality and both. Our specific presentation of animated dog stimuli creates an environment that is highly arousing, suggesting that VR is a promising tool for cynophobia treatment and that manipulating auditory-visual integration might provide a way to modulate affect.

  2. Facilitation of listening comprehension by visual information under noisy listening condition

    Science.gov (United States)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity (-10 dB and -15 dB pink-noise conditions). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (= 132 ms) or less, that the image was not helpful for comprehension when the delay was 8 frames (= 264 ms) or more, and that in some cases at the largest delay (32 frames) the video image interfered with comprehension.

  3. Manipulation of the extrastriate frontal loop can resolve visual disability in blindsight patients.

    Science.gov (United States)

    Badgaiyan, Rajendra D

    2012-12-01

    Patients with blindsight are not consciously aware of visual stimuli in the affected field of vision but retain nonconscious perception. This disability can be resolved if nonconsciously perceived information can be brought to their conscious awareness. It can be accomplished by manipulating neural network of visual awareness. To understand this network, we studied the pattern of cortical activity elicited during processing of visual stimuli with or without conscious awareness. The analysis indicated that a re-entrant signaling loop between the area V3A (located in the extrastriate cortex) and the frontal cortex is critical for processing conscious awareness. The loop is activated by visual signals relayed in the primary visual cortex, which is damaged in blindsight patients. Because of the damage, V3A-frontal loop is not activated and the signals are not processed for conscious awareness. These patients however continue to receive visual signals through the lateral geniculate nucleus. Since these signals do not activate the V3A-frontal loop, the stimuli are not consciously perceived. If visual input from the lateral geniculate nucleus is appropriately manipulated and made to activate the V3A-frontal loop, blindsight patients can regain conscious vision. Published by Elsevier Ltd.

  4. XD Metrics on Demand Value Analytics: Visualizing the Impact of Internal Information Technology Investments on External Funding, Publications, and Collaboration Networks

    Directory of Open Access Journals (Sweden)

    Olga Scrivner

    2018-01-01

    Full Text Available Many universities invest substantial resources in the design, deployment, and maintenance of campus-based cyberinfrastructure (CI). To justify the expense, it is important that university administrators and others understand and communicate the value of these internal investments in terms of scholarly impact. This paper introduces two visualizations and their usage in the Value Analytics (VA) module for Open XD Metrics on Demand (XDMoD), which enable analysis of external grant funding income, scholarly publications, and collaboration networks. The VA module was developed by Indiana University’s (IU) Research Technologies division, Pervasive Technology Institute, and the CI for Network Science Center (CNS), in conjunction with the University at Buffalo’s Center for Computational Research. It provides diverse visualizations of measures of information technology (IT) usage, external funding, and publications in support of IT strategic decision-making. This paper details the data, analysis workflows, and visual mappings used in two VA visualizations that aim to communicate the value of different IT usage in terms of NSF and NIH funding, resulting publications, and associated research collaborations. To illustrate the feasibility of measuring the value of IT for research, we measured its financial and academic impact for IU over the period from 2012 to 2017. The financial return on investment (ROI) is measured in terms of IU funding, totaling $339,013,365 for 885 NIH and NSF projects associated with IT usage, and the academic ROI constitutes 968 publications associated with 83 of these NSF and NIH awards. In addition, the results show that Medical Specialties, Brain Research, and Infectious Diseases are the top three scientific disciplines ranked by the number of publications during the given time period.

  5. Attention modulates the responses of simple cells in monkey primary visual cortex.

    Science.gov (United States)

    McAdams, Carrie J; Reid, R Clay

    2005-11-23

    Spatial attention has long been postulated to act as a spotlight that increases the salience of visual stimuli at the attended location. We examined the effects of attention on the receptive fields of simple cells in primary visual cortex (V1) by training macaque monkeys to perform a task with two modes. In the attended mode, the stimuli relevant to the animal's task overlay the receptive field of the neuron being recorded. In the unattended mode, the animal was cued to attend to stimuli outside the receptive field of that neuron. The relevant stimulus, a colored pixel, was briefly presented within a white-noise stimulus, a flickering grid of black and white pixels. The receptive fields of the neurons were mapped by correlating spikes with the white-noise stimulus in both attended and unattended modes. We found that attention could cause significant modulation of the visually evoked response despite an absence of significant effects on the overall firing rates. On further examination of the relationship between the strength of the visual stimulation and the firing rate, we found that attention appears to cause multiplicative scaling of the visually evoked responses of simple cells, demonstrating that attention reaches back to the initial stages of visual cortical processing.
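
    Mapping receptive fields "by correlating spikes with the white-noise stimulus" is reverse correlation (a spike-triggered average), and the reported attention effect is a multiplicative scaling of the resulting maps. The sketch below simulates a toy version: an STA is computed in attended and unattended modes and a single gain factor relating the two maps is estimated. All data and numbers are simulated, not the study's.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes):
    """Reverse correlation: average white-noise frames weighted by the spike count
    they evoked. stimulus: frames x pixels (+1/-1); spikes: spike count per frame."""
    return (spikes[:, None] * stimulus).sum(axis=0) / spikes.sum()

rng = np.random.default_rng(4)
stim = rng.choice([-1.0, 1.0], size=(20000, 64))   # 20,000 frames of an 8x8 noise grid
rf = np.zeros(64)
rf[27] = 1.0                                       # toy receptive field: one pixel
drive = stim @ rf                                  # stimulus drive to the model cell

# Poisson spiking with a higher response gain in the attended mode (toy numbers)
spikes_att = rng.poisson(1.0 + 0.6 * drive)
spikes_unatt = rng.poisson(1.0 + 0.4 * drive)

sta_att = spike_triggered_average(stim, spikes_att)
sta_unatt = spike_triggered_average(stim, spikes_unatt)

# Multiplicative scaling predicts the two maps differ by a single gain factor
gain = np.dot(sta_att, sta_unatt) / np.dot(sta_unatt, sta_unatt)
print("estimated attentional gain:", round(gain, 2))
```

    A purely multiplicative account predicts that, apart from noise, the attended-mode map is the unattended-mode map scaled by a single factor, which is how a gain change can coexist with little or no change in the overall firing rate.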

  6. Olfactory or auditory stimulation and their hedonic values differentially modulate visual working memory

    Directory of Open Access Journals (Sweden)

    ANA M DONOSO

    2008-12-01

    Full Text Available Working memory (WM) designates the retention of objects or events in conscious awareness when these are not present in the environment. Many studies have focused on the interference properties of distracter stimuli in working memory, but these studies have mainly examined the influence of the intensity of these stimuli. Little is known about how the hedonic content of distracter stimuli modulates memory, as it may also affect WM performance or attentional tasks. In this paper, we studied performance on a visual WM task in which subjects recollect five to eight visually presented objects while simultaneously exposed to an additional, albeit weak, auditory or olfactory distracter stimulus. We found that WM performance decreases as the number of items to remember increases, but this performance was unaltered by any of the distracter stimuli. However, when performance was correlated with the subjects' perceived hedonic values, distracter stimuli classified as negative exhibited higher error rates than positive, neutral or control stimuli. We demonstrate that the hedonic content of otherwise neutral stimuli can strongly modulate memory processes.

  7. Visual input that matches the content of visual working memory requires less (not faster) evidence sampling to reach conscious access

    NARCIS (Netherlands)

    Gayet, S.; van Maanen, L.; Heilbron, M.; Paffen, C.L.E.; Van Der Stigchel, S.

    2016-01-01

    The content of visual working memory (VWM) affects the processing of concurrent visual input. Recently, it has been demonstrated that stimuli are released from interocular suppression faster when they match rather than mismatch a color that is memorized for subsequent recall. In order to investigate

  8. A Neural Theory of Visual Attention: Bridging Cognition and Neurophysiology

    Science.gov (United States)

    Bundesen, Claus; Habekost, Thomas; Kyllingsbaek, Soren

    2005-01-01

    A neural theory of visual attention (NTVA) is presented. NTVA is a neural interpretation of C. Bundesen's (1990) theory of visual attention (TVA). In NTVA, visual processing capacity is distributed across stimuli by dynamic remapping of receptive fields of cortical cells such that more processing resources (cells) are devoted to behaviorally…

  9. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    Science.gov (United States)

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. The sensory components of high-capacity iconic memory and visual working memory.

    Science.gov (United States)

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.
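
    The exponential decline described above can be captured by a standard decay function; the expression below is an illustrative sketch under that assumption, not the authors' reported fit.

      P(t) = P_{\infty} + (P_{0} - P_{\infty}) \, e^{-t/\tau}

    Here P_0 is accuracy immediately after stimulus offset (the iconic-memory level), P_infinity is the asymptote approached at long delays (near chance in this task, since performance reached chance by about 2 s), and tau is the time constant of the decaying sensory trace.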

  11. The sensory components of high-capacity iconic memory and visual working memory

    Directory of Open Access Journals (Sweden)

    Claire eBradley

    2012-09-01

    Full Text Available Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more high-level alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of 3 different visual features (colour, orientation and motion) across a range of durations from 0 to 6 seconds. We found that the amount of information stored in iconic memory is smaller for motion than for colour or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ~2 seconds. Further experiments showed that performance for the 10 items at 1 second was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory and an effortful ‘lower-capacity’ visual working memory.

  12. Color vision in attention-deficit/hyperactivity disorder: a pilot visual evoked potential study.

    Science.gov (United States)

    Kim, Soyeon; Banaschewski, Tobias; Tannock, Rosemary

    2015-01-01

    Individuals with attention-deficit/hyperactivity disorder (ADHD) are reported to manifest visual problems (including ophthalmological problems and altered color perception, particularly for blue-yellow stimuli), but findings are inconsistent. Accordingly, this study investigated visual function and color perception in adolescents with ADHD using color Visual Evoked Potentials (cVEP), which provide an objective measure of color perception. Thirty-one adolescents (aged 13-18), 16 with a confirmed diagnosis of ADHD and 15 healthy peers matched for age, gender, and IQ, participated in the study. All underwent an ophthalmological exam, as well as electrophysiological testing with color Visual Evoked Potentials (cVEP), which measured the latency and amplitude of the neural P1 response to chromatic (blue-yellow, red-green) and achromatic stimuli. No intergroup differences were found in the ophthalmological exam. However, significantly larger P1 amplitude was found for blue and yellow stimuli, but not red/green or achromatic stimuli, in the ADHD group (particularly in the medicated subgroup) compared to controls. The larger P1 amplitude for blue-yellow in the ADHD group compared to controls may account for the lack of difference in color perception tasks. We speculate that the larger amplitude for blue-yellow stimuli in early sensory processing (P1) might reflect a compensatory strategy for underlying problems, including compromised retinal input from S-cones due to hypo-dopaminergic tone. Copyright © 2014 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  13. Distinct electrophysiological indices of maintenance in auditory and visual short-term memory.

    Science.gov (United States)

    Lefebvre, Christine; Vachon, François; Grimault, Stephan; Thibault, Jennifer; Guimond, Synthia; Peretz, Isabelle; Zatorre, Robert J; Jolicœur, Pierre

    2013-11-01

    We compared the electrophysiological correlates of the maintenance of non-musical tone sequences in auditory short-term memory (ASTM) to those of the short-term maintenance of sequences of coloured disks held in visual short-term memory (VSTM). The visual stimuli yielded a sustained posterior contralateral negativity (SPCN), suggesting that the maintenance of sequences of coloured stimuli engaged structures similar to those involved in the maintenance of simultaneous visual displays. On the other hand, maintenance of acoustic sequences produced a sustained negativity at fronto-central sites. This component is named the Sustained Anterior Negativity (SAN). The amplitude of the SAN increased with increasing load in ASTM and predicted individual differences in performance. There was no SAN in a control condition with the same auditory stimuli but no memory task, nor one associated with visual memory. These results suggest that the SAN is an index of brain activity related to the maintenance of representations in ASTM that is distinct from the maintenance of representations in VSTM. © 2013 Elsevier Ltd. All rights reserved.

  14. Sustained Splits of Attention within versus across Visual Hemifields Produce Distinct Spatial Gain Profiles.

    Science.gov (United States)

    Walter, Sabrina; Keitel, Christian; Müller, Matthias M

    2016-01-01

    Visual attention can be focused concurrently on two stimuli at noncontiguous locations while intermediate stimuli remain ignored. Nevertheless, behavioral performance in multifocal attention tasks falters when attended stimuli fall within one visual hemifield as opposed to when they are distributed across left and right hemifields. This "different-hemifield advantage" has been ascribed to largely independent processing capacities of each cerebral hemisphere in early visual cortices. Here, we investigated how this advantage influences the sustained division of spatial attention. We presented six isoeccentric light-emitting diodes (LEDs) in the lower visual field, each flickering at a different frequency. Participants attended to two LEDs that were spatially separated by an intermediate LED and responded to synchronous events at to-be-attended LEDs. Task-relevant pairs of LEDs were either located in the same hemifield ("within-hemifield" conditions) or separated by the vertical meridian ("across-hemifield" conditions). Flicker-driven brain oscillations, steady-state visual evoked potentials (SSVEPs), indexed the allocation of attention to individual LEDs. Both behavioral performance and SSVEPs indicated enhanced processing of attended LED pairs during "across-hemifield" relative to "within-hemifield" conditions. Moreover, SSVEPs demonstrated effective filtering of intermediate stimuli in the "across-hemifield" conditions only. Thus, despite identical physical distances between LEDs of attended pairs, the spatial profiles of gain effects differed profoundly between "across-hemifield" and "within-hemifield" conditions. These findings corroborate that early cortical visual processing stages rely on hemisphere-specific processing capacities and highlight their limiting role in the concurrent allocation of visual attention to multiple locations.

  15. Putative inhibitory training of a stimulus makes it a facilitator: a within-subject comparison of visual and auditory stimuli in autoshaping.

    Science.gov (United States)

    Nakajima, S

    2000-03-14

    Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
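
    For reference, the Rescorla-Wagner elemental model mentioned above updates the associative strength of every stimulus element present on a trial according to the standard rule below (a textbook statement of the model, not an equation from this paper).

      \Delta V_X = \alpha_X \, \beta \, \left( \lambda - \sum_{j \in \mathrm{trial}} V_j \right)

    Here lambda is the asymptote supported by the outcome (positive on food trials, zero on no-food trials), the sum runs over all stimuli present on the trial, and alpha_X and beta are stimulus- and outcome-specific learning-rate parameters. Under this elemental account, responding to a compound is predicted from the summed strengths of its elements, whereas the Pearce configural model treats each compound as a unitary configuration whose strength generalizes to other compounds in proportion to their similarity; designs such as the one above, in which a putative inhibitor later acts as a facilitator, are used to contrast these predictions.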

  16. Extinction of Conditioned Responses to Methamphetamine-Associated Stimuli in Healthy Humans.

    Science.gov (United States)

    Cavallo, Joel S; Ruiz, Nicholas A; de Wit, Harriet

    2016-07-01

    Contextual stimuli present during drug experiences become associated with the drug through Pavlovian conditioning and are thought to sustain drug-seeking behavior. Thus, extinction of conditioned responses is an important target for treatment. To date, acquisition and extinction to drug-paired cues have been studied in animal models or drug-dependent individuals, but rarely in non-drug users. We have recently developed a procedure to study acquisition of conditioned responses after single doses of methamphetamine (MA) in healthy volunteers. Here, we examined extinction of these responses and their persistence after conditioning. Healthy adults (18-35 years; N = 20) received two pairings of audio-visual stimuli with MA (20 mg oral) or placebo. Responses to stimuli were assessed before and after conditioning, using three tasks: behavioral preference, attentional bias, and subjective "liking." Subjects exhibited behavioral preference for the drug-paired stimuli at the first post-conditioning test, but this declined rapidly on subsequent extinction tests. They also exhibited a bias to initially look towards the drug-paired stimuli at the first post-test session, but not thereafter. Subjects who experienced more positive subjective drug effects during conditioning exhibited a smaller decline in preference during the extinction phase. Further, longer inter-session intervals during the extinction phase were associated with less extinction of the behavioral preference measure. Conditioned responses after two pairings with MA extinguish quickly, and are influenced by both subjective drug effects and the extinction interval. Characterizing and refining this conditioning procedure will aid in understanding the acquisition and extinction processes of drug-related conditioned responses in humans.

  17. Intersensory Function in Newborns: Effect of Sound on Visual Preferences.

    Science.gov (United States)

    Lawson, Katharine Rieke; Turkewitz, Gerald

    1980-01-01

    Newborn infants' fixation of a graduated series of visual stimuli significantly differed in the absence and presence of white-noise bursts. Relative to the no-sound condition, sound resulted in the infants' tendency to look more at the low-intensity visual stimulus and less at the high-intensity visual stimulus. (Author/DB)

  18. Long-term memory of color stimuli in the jungle crow (Corvus macrorhynchos).

    Science.gov (United States)

    Bogale, Bezawork Afework; Sugawara, Satoshi; Sakano, Katsuhisa; Tsuda, Sonoko; Sugita, Shoei

    2012-03-01

    Wild-caught jungle crows (n = 20) were trained to discriminate between color stimuli in a two-alternative discrimination task. Next, crows were tested for long-term memory after 1-, 2-, 3-, 6-, and 10-month retention intervals. This preliminary study showed that jungle crows learn the task and reach a discrimination criterion (80% or more correct choices in two consecutive sessions of ten trials) in a few trials, and some even in a single session. Most, if not all, crows successfully remembered the visual stimulus that had been constantly reinforced during training after all retention intervals. These results suggest that jungle crows have a high retention capacity for learned information, lasting at least a 10-month retention interval, and make no or very few errors. This study is the first to show long-term memory for color stimuli in corvids following such brief training, suggesting that memory rather than rehearsal underlay performance. Memory of visual color information is vital for the exploitation of biological resources in crows. We suspect that jungle crows could remember the learned color discrimination task even after a much longer retention interval.

  19. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    Science.gov (United States)

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.

  20. Preprocessing of emotional visual information in the human piriform cortex.

    Science.gov (United States)

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study fills the gap relating to the question whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  1. Sex differences in visual attention to erotic and non-erotic stimuli.

    Science.gov (United States)

    Lykins, Amy D; Meana, Marta; Strauss, Gregory P

    2008-04-01

    It has been suggested that sex differences in the processing of erotic material (e.g., memory, genital arousal, brain activation patterns) may also be reflected by differential attention to visual cues in erotic material. To test this hypothesis, we presented 20 heterosexual men and 20 heterosexual women with erotic and non-erotic images of heterosexual couples and tracked their eye movements during scene presentation. Results supported previous findings that erotic and non-erotic information was visually processed in a different manner by both men and women. Men looked at opposite sex figures significantly longer than did women, and women looked at same sex figures significantly longer than did men. Within-sex analyses suggested that men had a strong visual attention preference for opposite sex figures as compared to same sex figures, whereas women appeared to disperse their attention evenly between opposite and same sex figures. These differences, however, were not limited to erotic images but were evident in non-erotic images as well. No significant sex differences were found for attention to the contextual region of the scenes. Results were interpreted as potentially supportive of recent studies showing a greater non-specificity of sexual arousal in women. This interpretation assumes there is an erotic valence to images of the sex to which one orients, even when the image is not explicitly erotic. It also assumes a relationship between visual attention and erotic valence.

  2. Steady-state signatures of visual perceptual load, multimodal distractor filtering, and neural competition.

    Science.gov (United States)

    Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M

    2011-05-01

    The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
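
    The frequency-tagging analysis described above amounts to reading out spectral amplitude at each stimulus's modulation frequency. The Python sketch below illustrates that step under stated assumptions; the windowing and nearest-bin lookup are illustrative choices rather than the authors' exact pipeline, and epoch_low/epoch_high and the 512 Hz sampling rate are hypothetical.

      import numpy as np

      def tagged_amplitudes(eeg, fs, tag_freqs=(2.5, 8.5, 40.0)):
          """Spectral amplitude at each tagging frequency for a single-channel epoch.

          eeg : 1-D array (e.g., trial-averaged EEG for one condition)
          fs  : sampling rate in Hz
          """
          n = len(eeg)
          spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n   # amplitude spectrum
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          # nearest FFT bin to each tagged stream (RSVP task, visual distractor, auditory distractor)
          return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

      # Example: compare the visual-distractor signal (8.5 Hz) across load conditions
      # amps_low = tagged_amplitudes(epoch_low, fs=512)
      # amps_high = tagged_amplitudes(epoch_high, fs=512)
      # distractor_suppression = amps_low[8.5] - amps_high[8.5]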

  3. Medial temporal lobe damage impairs representation of simple stimuli

    Directory of Open Access Journals (Sweden)

    David E Warren

    2010-05-01

    Full Text Available Medial temporal lobe damage in humans is typically thought to produce a circumscribed impairment in the acquisition of new enduring memories, but recent reports have documented deficits even in short-term maintenance. We examined possible maintenance deficits in a population of medial temporal lobe amnesics, with the goal of characterizing their impairments as either representational drift or outright loss of representation over time. Patients and healthy comparisons performed a visual search task in which the similarity of various lures to a target was varied parametrically. Stimuli were simple shapes varying along one of several visual dimensions. The task was performed in two conditions, one presenting a sample target simultaneously with the search array and the other imposing a delay between sample and array. Eye-movement data collected during search revealed that the duration of fixations to items varied with lure-target similarity for all participants, i.e., fixations were longer for items more similar to the target. In the simultaneous condition, patients and comparisons exhibited an equivalent effect of similarity on fixation durations. However, imposing a delay modulated the effect differently for the two groups: in comparisons, fixation duration to similar items was exaggerated; in patients, the original effect was diminished. These findings indicate that medial temporal lobe lesions subtly impair short-term maintenance of even simple stimuli, with performance reflecting not the complete loss of the maintained representation but rather a degradation or progressive drift of the representation over time.

  4. Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses.

    Science.gov (United States)

    Molloy, Katharine; Griffiths, Timothy D; Chait, Maria; Lavie, Nilli

    2015-12-09

    Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory

  5. Bio-inspired fabrication of stimuli-responsive photonic crystals with hierarchical structures and their applications

    International Nuclear Information System (INIS)

    Lu, Tao; Peng, Wenhong; Zhu, Shenmin; Zhang, Di

    2016-01-01

    When the constitutive materials of photonic crystals (PCs) are stimuli-responsive, the resultant PCs exhibit optical properties that can be tuned by the stimuli. This can be exploited for promising applications in colour displays, biological and chemical sensors, inks and paints, and many optically active components. However, the preparation of the required photonic structures is the first issue to be solved. In the past two decades, approaches such as microfabrication and self-assembly have been developed to incorporate stimuli-responsive materials into existing periodic structures for the fabrication of PCs, either as the initial building blocks or as the surrounding matrix. Generally, the materials that respond to thermal, pH, chemical, optical, electrical, or magnetic stimuli are either soft or aggregate, which is why the manufacture of three-dimensional hierarchical photonic structures with responsive properties is a great challenge. Recently, inspired by biological PCs in nature which exhibit both flexible and responsive properties, researchers have developed various methods to synthesize metals and metal oxides with hierarchical structures by using a biological PC as the template. This review will focus on the recent developments in this field. In particular, PCs with biological hierarchical structures that can be tuned by external stimuli have recently been successfully fabricated. These findings offer innovative insights into the design of responsive PCs and should be of great importance for future applications of these materials. (topical review)

  6. Relating Attentional Biases for Stimuli Associated with Social Reward and Punishment to Autistic Traits

    Directory of Open Access Journals (Sweden)

    Brian A. Anderson

    2018-04-01

    Full Text Available Evidence for impaired attention to social stimuli in autism has been mixed. The role of social feedback in shaping attention to other, non-social stimuli that are predictive of such feedback has not been examined in the context of autism. In the present study, participants searched for a color-defined target during a training phase, with the color of the target predicting the emotional reaction of a face that appeared after each trial. Then, participants performed visual search for a shape-defined target while trying to ignore the color of stimuli. On a subset of trials, one of the non-targets was rendered in the color of a former target from training. Autistic traits were measured for each participant using the Autism Quotient (AQ). Our findings replicate robust attentional capture by stimuli learned to predict valenced social feedback. There was no evidence that autistic traits are associated with blunted attention to predictors of social outcomes. Consistent with an emerging body of literature, our findings cast doubt on strong versions of the claim that autistic traits can be explained by a blunted influence of social information on the attention system. We extend these findings to non-social stimuli that predict socially relevant information.

  7. Age-related positivity enhancement is not universal: older Chinese look away from positive stimuli.

    Science.gov (United States)

    Fung, Helene H; Lu, Alice Y; Goren, Deborah; Isaacowitz, Derek M; Wadlinger, Heather A; Wilson, Hugh R

    2008-06-01

    Socioemotional selectivity theory postulates that with age, people are motivated to derive emotional meaning from life, leading them to pay more attention to positive relative to negative/neutral stimuli. The authors argue that cultures that differ in what they consider to be emotionally meaningful may show this preference to different extents. Using eye-tracking techniques, the authors compared visual attention toward emotional (happy, fearful, sad, and angry) and neutral facial expressions among 46 younger and 57 older Hong Kong Chinese. In contrast to prior Western findings, older but not younger Chinese looked away from happy facial expressions, suggesting that they do not show attentional preferences toward positive stimuli.

  8. Generalization of the disruptive effects of alternative stimuli when combined with target stimuli in extinction.

    Science.gov (United States)

    Podlesnik, Christopher A; Miranda-Dukoski, Ludmila; Jonas Chan, C K; Bland, Vikki J; Bai, John Y H

    2017-09-01

    Differential-reinforcement treatments reduce target problem behavior in the short term but at the expense of making it more persistent long term. Basic and translational research based on behavioral momentum theory suggests that combining features of stimuli governing an alternative response with the stimuli governing target responding could make target responding less persistent. However, changes to the alternative stimulus context when combining alternative and target stimuli could diminish the effectiveness of the alternative stimulus in reducing target responding. In an animal model with pigeons, the present study reinforced responding in the presence of target and alternative stimuli. When combining the alternative and target stimuli during extinction, we altered the alternative stimulus through changes in line orientation. We found that (1) combining alternative and target stimuli in extinction more effectively decreased target responding than presenting the target stimulus on its own; (2) combining these stimuli was more effective in decreasing target responding trained with lower reinforcement rates; and (3) changing the alternative stimulus reduced its effectiveness when it was combined with the target stimulus. Therefore, changing alternative stimuli (e.g., therapist, clinical setting) during behavioral treatments that combine alternative and target stimuli could reduce the effectiveness of those treatments in disrupting problem behavior. © 2017 Society for the Experimental Analysis of Behavior.

  9. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  10. The neural correlates of visual imagery: A co-ordinate-based meta-analysis.

    Science.gov (United States)

    Winlove, Crawford I P; Milton, Fraser; Ranson, Jake; Fulford, Jon; MacKisack, Matthew; Macpherson, Fiona; Zeman, Adam

    2018-01-02

    Visual imagery is a form of sensory imagination, involving subjective experiences typically described as similar to perception, but which occur in the absence of corresponding external stimuli. We used the Activation Likelihood Estimation algorithm (ALE) to identify regions consistently activated by visual imagery across 40 neuroimaging studies, the first such meta-analysis. We also employed a recently developed multi-modal parcellation of the human brain to attribute stereotactic co-ordinates to one of 180 anatomical regions, the first time this approach has been combined with the ALE algorithm. We identified a total 634 foci, based on measurements from 464 participants. Our overall comparison identified activation in the superior parietal lobule, particularly in the left hemisphere, consistent with the proposed 'top-down' role for this brain region in imagery. Inferior premotor areas and the inferior frontal sulcus were reliably activated, a finding consistent with the prominent semantic demands made by many visual imagery tasks. We observed bilateral activation in several areas associated with the integration of eye movements and visual information, including the supplementary and cingulate eye fields (SCEFs) and the frontal eye fields (FEFs), suggesting that enactive processes are important in visual imagery. V1 was typically activated during visual imagery, even when participants have their eyes closed, consistent with influential depictive theories of visual imagery. Temporal lobe activation was restricted to area PH and regions of the fusiform gyrus, adjacent to the fusiform face complex (FFC). These results provide a secure foundation for future work to characterise in greater detail the functional contributions of specific areas to visual imagery. Copyright © 2017. Published by Elsevier Ltd.
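
    The ALE computation referred to above can be summarized as follows (a generic sketch of the standard algorithm; smoothing widths and thresholding details vary across implementations and are not taken from this paper). Each reported focus is modeled as a three-dimensional Gaussian probability distribution, the foci of each experiment are combined into a per-experiment "modeled activation" map MA_i(v), and the ALE score at voxel v is the probabilistic union over the N experiments:

      \mathrm{ALE}(v) = 1 - \prod_{i=1}^{N} \bigl( 1 - \mathrm{MA}_i(v) \bigr)

    The resulting map is then compared against a null distribution generated from spatially random foci to identify voxels with above-chance convergence across studies.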

  11. The effect of non-visual working memory load on top-down modulation of visual processing.

    Science.gov (United States)

    Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark

    2009-06-01

    While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-s delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley, A., Cooney, J. W., Rissman, J., & D'Esposito, M. (2005). Top-down suppression deficit underlies working memory impairment in normal aging. Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism.

  12. Attentional effects in the visual pathways

    DEFF Research Database (Denmark)

    Bundesen, Claus; Larsen, Axel; Kyllingsbæk, Søren

    2002-01-01

    nucleus. Frontal activations were found in a region that seems implicated in visual short-term memory (posterior parts of the superior sulcus and the middle gyrus). The reverse, color-shape comparison showed bilateral increases in rCBF in the anterior cingulate gyri, superior frontal gyri, and superior...... and middle temporal gyri. The attentional effects found by the shape-color comparison in the thalamus and the primary visual cortex may have been generated by feedback signals preserving visual representations of selected stimuli in short-term memory....

  13. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    Science.gov (United States)

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses: in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depends on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can either be suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors 0270-6474/15/3512273-08$15.00/0.
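
    The kind of model invoked above, combining feature-based attention with response normalization, can be sketched in the standard normalization-of-attention form (a generic formulation; the parameters and exact implementation used in the study may differ):

      R(x, \theta) = \frac{ A(x, \theta) \, E(x, \theta) }{ S(x, \theta) + \sigma }

    Here E is the excitatory stimulus drive for a unit tuned to location x and orientation theta, A is an attention field that boosts the attended location and/or feature, S is the suppressive drive obtained by pooling the attention-weighted excitatory drive over nearby locations and orientations, and sigma is a semisaturation constant. Because attending to a feature scales both the numerator and the pooled suppressive drive contributed by flanking stimuli, the same iso-oriented surround can yield net enhancement or net suppression depending on where, and to what, attention is directed.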

  14. Behold the voice of wrath: Cross-modal modulation of visual attention by anger prosody

    OpenAIRE

    Brosch, Tobias; Grandjean, Didier Maurice; Sander, David; Scherer, Klaus R.

    2008-01-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented tar...

  15. Eye position effects on the remapped memory trace of visual motion in cortical area MST.

    Science.gov (United States)

    Inaba, Naoko; Kawano, Kenji

    2016-02-23

    After a saccade, most MST neurons respond to moving visual stimuli that had existed in their post-saccadic receptive fields and turned off before the saccade ("trans-saccadic memory remapping"). Neuronal responses in higher visual processing areas are known to be modulated in relation to gaze angle to represent image location in spatiotopic coordinates. In the present study, we investigated the eye position effects after saccades and found that the gaze angle modulated the visual sensitivity of MST neurons after saccades both to the actually existing visual stimuli and to the visual memory traces remapped by the saccades. We suggest that two mechanisms, trans-saccadic memory remapping and gaze modulation, work cooperatively in individual MST neurons to represent a continuous visual world.

  16. Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings

    Directory of Open Access Journals (Sweden)

    Neil M. Dundon

    2015-07-01

    Full Text Available Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: compensation and restoration. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local within-system interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global between-system networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation that ultimately have implications for the rehabilitation of cognitive functions.

  17. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm to investigate processing times of information in different modalities. There are many studies on how temporal order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and to determine thereby the time of processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  18. Do infants find snakes aversive? Infants' physiological responses to "fear-relevant" stimuli.

    Science.gov (United States)

    Thrasher, Cat; LoBue, Vanessa

    2016-02-01

    In the current research, we sought to measure infants' physiological responses to snakes, one of the world's most widely feared stimuli, to examine whether they find snakes aversive or merely attention grabbing. Using a similar method to DeLoache and LoBue (Developmental Science, 2009, Vol. 12, pp. 201-207), 6- to 9-month-olds watched a series of multimodal (both auditory and visual) stimuli: a video of a snake (fear-relevant) or an elephant (non-fear-relevant) paired with either a fearful or happy auditory track. We measured physiological responses to the pairs of stimuli, including startle magnitude, latency to startle, and heart rate. Results suggest that snakes capture infants' attention; infants showed the fastest startle responses and lowest average heart rate to the snakes, especially when paired with a fearful voice. Unexpectedly, they also showed significantly reduced startle magnitude during this same snake video plus fearful voice combination. The results are discussed with respect to theoretical perspectives on fear acquisition. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Functional magnetic resonance imaging of the human primary visual cortex during visual stimulation

    International Nuclear Information System (INIS)

    Miki, Atsushi; Abe, Haruki; Nakajima, Takashi; Fujita, Motoi; Watanabe, Hiroyuki; Kuwabara, Takeo; Naruse, Shoji; Takagi, Mineo.

    1995-01-01

    Signal changes in the human primary visual cortex during visual stimulation were evaluated using non-invasive functional magnetic resonance imaging (fMRI). The experiments were performed on 10 normal human volunteers and 2 patients with homonymous hemianopsia, including one who was recovering from the exacerbation of multiple sclerosis. The visual stimuli were provided by a pattern generator using the checkerboard pattern for determining the visual evoked potential of full-field and hemifield stimulation. In normal volunteers, a signal increase was observed on the bilateral primary visual cortex during the full-field stimulation and on the contra-lateral cortex during hemifield stimulation. In the patient with homonymous hemianopsia after cerebral infarction, the signal change was clearly decreased on the affected side. In the other patient, the one recovering from multiple sclerosis with an almost normal visual field, the fMRI was within normal limits. These results suggest that it is possible to visualize the activation of the visual cortex during visual stimulation, and that there is a possibility of using this test as an objective method of visual field examination. (author)

  20. Cholinergic control of visual categorisation in macaques

    Directory of Open Access Journals (Sweden)

    Nikolaos C. Aggelopoulos

    2011-11-01

    Full Text Available Acetylcholine (ACh) is a neurotransmitter acting via muscarinic and nicotinic receptors that is implicated in several cognitive functions and impairments, such as Alzheimer’s disease. It is believed to especially affect the acquisition of new information, which is particularly important when behaviour needs to be adapted to new situations and to novel sensory events. Categorisation, the process of assigning stimuli to a category, is a cognitive function that also involves information acquisition. The role of ACh in categorisation has not been previously studied. We have examined the effects of scopolamine, an antagonist of muscarinic ACh receptors, on visual categorisation in macaque monkeys using familiar and novel stimuli. When the peripheral effects of scopolamine on the parasympathetic nervous system were controlled for, categorisation performance was disrupted following systemic injections of scopolamine. This impairment was observed only when the stimuli that needed to be categorised had not been seen before. In other words, the monkeys were not impaired by the central action of scopolamine in categorising a set of familiar stimuli (stimuli which they had categorised successfully in previous sessions). Categorisation performance also deteriorated as the stimulus became less salient by an increase in the level of visual noise. However, scopolamine did not cause additional performance disruptions for difficult categorisation judgements at lower coherence levels. Scopolamine, therefore, specifically affects the assignment of new exemplars to established cognitive categories, presumably by impairing the processing of novel information. Since we did not find an effect of scopolamine in the categorisation of familiar stimuli, scopolamine had no significant central action on other cognitive functions such as perception, attention, memory or executive control within the context of our categorisation task.