WorldWideScience

Sample records for sound discrimination task

  1. Songbirds and humans apply different strategies in a sound sequence discrimination task

    Directory of Open Access Journals (Sweden)

    Yoshimasa eSeki

    2013-07-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an AAB or ABB rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have an ability to extract abstract rules from sound sequences that is distinct from non-human animals.

  2. Songbirds and humans apply different strategies in a sound sequence discrimination task.

    Science.gov (United States)

    Seki, Yoshimasa; Suzuki, Kenta; Osawa, Ayumi M; Okanoya, Kazuo

    2013-01-01

    The abilities of animals and humans to extract rules from sound sequences have previously been compared using observation of spontaneous responses and conditioning techniques. However, the results were inconsistently interpreted across studies, possibly due to methodological and/or species differences. Therefore, we examined the strategies for discrimination of sound sequences in Bengalese finches and humans using the same protocol. Birds were trained on a GO/NOGO task to discriminate between two categories of sound stimulus generated based on an "AAB" or "ABB" rule. The sound elements used were taken from a variety of male (M) and female (F) calls, such that the sequences could be represented as MMF and MFF. In test sessions, FFM and FMM sequences, which were never presented in the training sessions but conformed to the rule, were presented as probe stimuli. The results suggested two discriminative strategies were being applied: (1) memorizing sound patterns of either GO or NOGO stimuli and generating the appropriate responses for only those sounds; and (2) using the repeated element as a cue. There was no evidence that the birds successfully extracted the abstract rule (i.e., AAB and ABB); MMF-GO subjects did not produce a GO response for FFM and vice versa. Next we examined whether those strategies were also applicable for human participants on the same task. The results and questionnaires revealed that participants extracted the abstract rule, and most of them employed it to discriminate the sequences. This strategy was never observed in bird subjects, although some participants used strategies similar to the birds when responding to the probe stimuli. Our results showed that the human participants applied the abstract rule in the task even without instruction but Bengalese finches did not, thereby reconfirming that humans have an ability to extract abstract rules from sound sequences that is distinct from non-human animals.
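
    As an illustration of the GO/NOGO design described in this record, the sketch below (Python, with hypothetical token file names) builds MMF/MFF training sequences from pools of male and female call elements, plus the FFM/FMM probe sequences that conform to the same AAB/ABB rules but were withheld during training.

```python
# Minimal sketch of the GO/NOGO stimulus design described above (assumed
# token names; the actual call recordings are not part of this listing).
import random

male_calls = ["M1.wav", "M2.wav", "M3.wav"]      # hypothetical male-call tokens
female_calls = ["F1.wav", "F2.wav", "F3.wav"]    # hypothetical female-call tokens

def make_sequence(pattern):
    """Build one three-element sequence, e.g. 'MMF' -> [M, M, F] tokens."""
    pool = {"M": male_calls, "F": female_calls}
    return [random.choice(pool[symbol]) for symbol in pattern]

# Training: AAB realised as MMF (GO) vs. ABB realised as MFF (NOGO),
# matching the assignment described for the MMF-GO birds.
training_trials = ([("GO", make_sequence("MMF")) for _ in range(50)]
                   + [("NOGO", make_sequence("MFF")) for _ in range(50)])

# Probe stimuli: FFM and FMM conform to the same abstract rules but were
# never presented during training.
probe_trials = ([("probe-AAB", make_sequence("FFM")) for _ in range(10)]
                + [("probe-ABB", make_sequence("FMM")) for _ in range(10)])

random.shuffle(training_trials)
print(training_trials[0], probe_trials[0])
```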

  3. Cognitive flexibility modulates maturation and music-training-related changes in neural sound discrimination.

    Science.gov (United States)

    Saarikivi, Katri; Putkinen, Vesa; Tervaniemi, Mari; Huotilainen, Minna

    2016-07-01

    Previous research has demonstrated that musicians show superior neural sound discrimination when compared to non-musicians, and that these changes emerge with accumulation of training. Our aim was to investigate whether individual differences in executive functions predict training-related changes in neural sound discrimination. We measured event-related potentials induced by sound changes coupled with tests for executive functions in musically trained and non-trained children aged 9-11 years and 13-15 years. High performance in a set-shifting task, indexing cognitive flexibility, was linked to enhanced maturation of neural sound discrimination in both musically trained and non-trained children. Specifically, well-performing musically trained children already showed large mismatch negativity (MMN) responses at a young age as well as at an older age, indicating accurate sound discrimination. In contrast, the musically trained low-performing children still showed an increase in MMN amplitude with age, suggesting that they were behind their high-performing peers in the development of sound discrimination. In the non-trained group, in turn, only the high-performing children showed evidence of an age-related increase in MMN amplitude, and the low-performing children showed a small MMN with no age-related change. These latter results suggest an advantage in MMN development also for high-performing non-trained individuals. For the P3a amplitude, there was an age-related increase only in the children who performed well in the set-shifting task, irrespective of music training, indicating enhanced attention-related processes in these children. Thus, the current study provides the first evidence that, in children, cognitive flexibility may influence age-related and training-related plasticity of neural sound discrimination. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Sound sensitivity of neurons in rat hippocampus during performance of a sound-guided task

    Science.gov (United States)

    Vinnik, Ekaterina; Honey, Christian; Schnupp, Jan; Diamond, Mathew E.

    2012-01-01

    To investigate how hippocampal neurons encode sound stimuli, and the conjunction of sound stimuli with the animal's position in space, we recorded from neurons in the CA1 region of hippocampus in rats while they performed a sound discrimination task. Four different sounds were used, two associated with water reward on the right side of the animal and the other two with water reward on the left side. This allowed us to separate neuronal activity related to sound identity from activity related to response direction. To test the effect of spatial context on sound coding, we trained rats to carry out the task on two identical testing platforms at different locations in the same room. Twenty-one percent of the recorded neurons exhibited sensitivity to sound identity, as quantified by the difference in firing rate for the two sounds associated with the same response direction. Sensitivity to sound identity was often observed on only one of the two testing platforms, indicating an effect of spatial context on sensory responses. Forty-three percent of the neurons were sensitive to response direction, and the probability that any one neuron was sensitive to response direction was statistically independent from its sensitivity to sound identity. There was no significant coding for sound identity when the rats heard the same sounds outside the behavioral task. These results suggest that CA1 neurons encode sound stimuli, but only when those sounds are associated with actions. PMID:22219030
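
    The abstract quantifies sensitivity to sound identity as a firing-rate difference between the two sounds mapped to the same response direction. The sketch below (simulated spike counts; the permutation test is chosen for illustration and is not necessarily the paper's statistic) shows one way such a per-neuron index could be computed.

```python
# Rough sketch: sound-identity sensitivity as a firing-rate difference between
# the two sounds associated with the same response side, with a permutation test.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-neuron firing rates (spikes/s) on trials of the two
# sounds that both cue a rightward response.
rates_sound_a = rng.poisson(8.0, size=60).astype(float)
rates_sound_b = rng.poisson(5.0, size=60).astype(float)

observed = abs(rates_sound_a.mean() - rates_sound_b.mean())

pooled = np.concatenate([rates_sound_a, rates_sound_b])
n_a = len(rates_sound_a)
null = []
for _ in range(5000):
    rng.shuffle(pooled)                       # shuffle trial labels
    null.append(abs(pooled[:n_a].mean() - pooled[n_a:].mean()))

p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"rate difference = {observed:.2f} spikes/s, p = {p_value:.3f}")
```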

  5. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  6. Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams.

    Science.gov (United States)

    Centanni, Tracy Michelle; Booker, Anne B; Chen, Fuyi; Sloan, Andrew M; Carraway, Ryan S; Rennaker, Robert L; LoTurco, Joseph J; Kilgard, Michael P

    2016-04-27

    Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. In the current study, rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing

  7. Discrimination of musical instrument sounds resynthesized with simplified spectrotemporal parameters.

    Science.gov (United States)

    McAdams, S; Beauchamp, J W; Meneguzzi, S

    1999-02-01

    The perceptual salience of several outstanding features of quasiharmonic, time-variant spectra was investigated in musical instrument sounds. Spectral analyses of sounds from seven musical instruments (clarinet, flute, oboe, trumpet, violin, harpsichord, and marimba) produced time-varying harmonic amplitude and frequency data. Six basic data simplifications and five combinations of them were applied to the reference tones: amplitude-variation smoothing, coherent variation of amplitudes over time, spectral-envelope smoothing, forced harmonic-frequency variation, frequency-variation smoothing, and harmonic-frequency flattening. Listeners were asked to discriminate sounds resynthesized with simplified data from reference sounds resynthesized with the full data. Averaged over the seven instruments, the discrimination was very good for spectral envelope smoothing and amplitude envelope coherence, but was moderate to poor in decreasing order for forced harmonic frequency variation, frequency variation smoothing, frequency flattening, and amplitude variation smoothing. Discrimination of combinations of simplifications was equivalent to that of the most potent constituent simplification. Objective measurements were made on the spectral data for harmonic amplitude, harmonic frequency, and spectral centroid changes resulting from simplifications. These measures were found to correlate well with discrimination results, indicating that listeners have access to a relatively fine-grained sensory representation of musical instrument sounds.
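
    One of the objective measures mentioned is the spectral centroid computed from the time-varying harmonic amplitude and frequency data. A minimal sketch, using synthetic harmonic data in place of the study's analysis files:

```python
# Spectral centroid per analysis frame: amplitude-weighted mean frequency.
import numpy as np

n_frames, n_harmonics, f0 = 200, 20, 261.6   # e.g. a C4 tone; assumed values
t = np.linspace(0, 1, n_frames)[:, None]
k = np.arange(1, n_harmonics + 1)[None, :]

freqs = f0 * k * np.ones((n_frames, 1))      # harmonic frequencies (Hz), one row per frame
amps = np.exp(-0.4 * k * (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)))  # toy time-varying spectral tilt

def spectral_centroid(freqs, amps, eps=1e-12):
    """Amplitude-weighted mean frequency for each frame."""
    return (freqs * amps).sum(axis=1) / (amps.sum(axis=1) + eps)

centroid = spectral_centroid(freqs, amps)
print(f"mean centroid = {centroid.mean():.1f} Hz, "
      f"range = {centroid.min():.1f}-{centroid.max():.1f} Hz")
```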

  8. Task-irrelevant emotion facilitates face discrimination learning.

    Science.gov (United States)

    Lorenzino, Martina; Caudek, Corrado

    2015-03-01

    We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues.

    Science.gov (United States)

    David, Marion; Lavandier, Mathieu; Grimault, Nicolas; Oxenham, Andrew J

    2017-09-01

    Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.
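
    A minimal sketch of how interaural cues of the kind manipulated here can be imposed on a monaural token. The study itself used non-individualized HRTFs, which additionally carry the high-frequency spectral (elevation) cues; the ITD/ILD values below are illustrative only.

```python
# Simulate a lateralized source by delaying and attenuating the far ear.
import numpy as np

fs = 44100
t = np.arange(0, 0.3, 1 / fs)
token = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)  # stand-in for a CV token

def apply_itd_ild(signal, fs, itd_s=300e-6, ild_db=6.0):
    """Source on the right: right ear leads and is louder, left ear lags and is attenuated."""
    delay = int(round(itd_s * fs))
    right = np.concatenate([signal, np.zeros(delay)])
    left = np.concatenate([np.zeros(delay), signal]) * 10 ** (-ild_db / 20)
    return np.stack([left, right], axis=1)

binaural = apply_itd_ild(token, fs)
print(binaural.shape)   # (samples, 2) -> left/right channels
```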

  10. Infant speech-sound discrimination testing: effects of stimulus intensity and procedural model on measures of performance.

    Science.gov (United States)

    Nozza, R J

    1987-06-01

    Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
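
    When head-turn data are treated under the single-interval (yes-no) model mentioned above, sensitivity is commonly summarized as d'. A short sketch with made-up hit and false-alarm counts:

```python
# d' for a yes-no discrimination: z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Apply a 1/(2N) correction so rates of 0 or 1 do not give infinite z-scores."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts for one infant at one stimulus level.
print(d_prime(hits=22, misses=8, false_alarms=6, correct_rejections=24))
```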

  11. Response properties of neurons in the cat's putamen during auditory discrimination.

    Science.gov (United States)

    Zhao, Zhenling; Sato, Yu; Qin, Ling

    2015-10-01

    The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been less studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activity in the putamen (dorsal striatum) of free-moving cats while they performed a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that the putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate different sounds. Second, 18% of neurons showed a high preference for ramped over damped sounds, but no preference for modulation rate; they could only discriminate the change of sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and had no ability to discriminate the sounds. Fourth, 15% of neurons discriminated the sounds depending on reward prediction. Compared to the passive listening condition, the activities of putamen neurons were significantly enhanced by engagement in the auditory tasks, but were not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. The Effects of Static and Dynamic Visual Representations as Aids for Primary School Children in Tasks of Auditory Discrimination of Sound Patterns. An Intervention-based Study.

    Directory of Open Access Journals (Sweden)

    Jesus Tejada

    2018-02-01

    It has been proposed that non-conventional presentations of visual information could be very useful as a scaffolding strategy in the learning of Western music notation. As a result, this study attempted to determine whether there is any effect of static and dynamic presentation modes of visual information on the recognition of sound patterns. An intervention-based quasi-experimental design was adopted with two groups of fifth-grade students in a Spanish city. Students did tasks involving discrimination, auditory recognition and symbolic association of the sound patterns with non-musical representations, either static images (S group) or dynamic images (D group). The results showed neither statistically significant differences in the scores of D and S, nor influence of the covariates on the dependent variable, although statistically significant intra-group differences were found for both groups. This suggests that both types of graphic formats could be effective as digital learning mediators in the learning of Western musical notation.

  13. Subcortical plasticity following perceptual learning in a pitch discrimination task.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J

    2011-02-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change.
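
    The FFR measure referred to above indexes neural phase locking to the sound envelope. One common illustrative stand-in (not necessarily the exact metric used in the study) is the spectral amplitude of the averaged FFR at the stimulus F0 relative to neighbouring noise bins:

```python
# FFR strength at the envelope rate: FFT amplitude at F0 vs. nearby noise bins.
# Synthetic "averaged FFR"; sampling rate and F0 are assumed values.
import numpy as np

fs, dur, f0 = 2000.0, 0.2, 110.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
ffr = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size)  # toy averaged FFR

spectrum = np.abs(np.fft.rfft(ffr * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

f0_bin = int(np.argmin(np.abs(freqs - f0)))
signal_amp = spectrum[f0_bin]
noise_bins = np.r_[f0_bin - 6:f0_bin - 2, f0_bin + 3:f0_bin + 7]   # surrounding bins
snr_db = 20 * np.log10(signal_amp / spectrum[noise_bins].mean())
print(f"FFR amplitude at F0: {signal_amp:.2f} (SNR {snr_db:.1f} dB)")
```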

  14. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

    Objective: To compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left-to-right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion: Children with dyslexia may have problems when they have to localize sounds and discriminate words at extreme locations of the horizontal plane in classrooms with reverberation.

  15. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.
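
    The additive-factors logic behind the PRP analysis can be made concrete with the interaction contrast below (cell means are invented for illustration; additivity corresponds to a near-zero contrast):

```python
# Interaction contrast for a 2 (SOA) x 2 (distractor relatedness) design.
import numpy as np

# Task 2 naming latencies (ms): rows = SOA (0 ms, 500 ms),
# columns = distractor sound (related, unrelated). Invented cell means.
rt = np.array([[1050.0, 1020.0],    # SOA 0 ms
               [ 760.0,  730.0]])   # SOA 500 ms

interference_short = rt[0, 0] - rt[0, 1]
interference_long = rt[1, 0] - rt[1, 1]
interaction = interference_short - interference_long

print(f"interference at short SOA: {interference_short:.0f} ms")
print(f"interference at long SOA:  {interference_long:.0f} ms")
print(f"interaction contrast:      {interaction:.0f} ms (near 0 indicates additivity)")
```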

  16. Mechanisms underlying speech sound discrimination and categorization in humans and zebra finches

    NARCIS (Netherlands)

    Burgering, Merel A.; ten Cate, Carel; Vroomen, Jean

    Speech sound categorization in birds seems in many ways comparable to that by humans, but it is unclear what mechanisms underlie such categorization. To examine this, we trained zebra finches and humans to discriminate two pairs of edited speech sounds that varied either along one dimension (vowel

  17. Task-irrelevant novel sounds improve attentional performance in children with and without ADHD

    Directory of Open Access Journals (Sweden)

    Jana eTegelbeckers

    2016-01-01

    Task-irrelevant salient stimuli involuntarily capture attention and can lead to distraction from an ongoing task, especially in children with ADHD. However, there has been tentative evidence that the presentation of novel sounds can have beneficial effects on cognitive performance. In the present study, we aimed to investigate the influence of novel sounds, compared to no sound and a repeatedly presented standard sound, on attentional performance in children and adolescents with and without ADHD. We therefore had 32 patients with ADHD and 32 typically developing children and adolescents (8 to 13 years) execute a flanker task in which each trial was preceded either by a repeatedly presented standard sound (33%), an unrepeated novel sound (33%), or no auditory stimulation (33%). Task-irrelevant novel sounds facilitated attentional performance similarly in children with and without ADHD, as indicated by reduced omission error rates, reaction times, and reaction time variability without compromising performance accuracy. By contrast, standard sounds, while also reducing omission error rates and reaction times, led to increased commission error rates. Therefore, the beneficial effect of novel sounds goes beyond cueing of the target display, potentially through increased alerting and/or enhanced behavioral control.

  18. Attention-dependent sound offset-related brain potentials.

    Science.gov (United States)

    Horváth, János

    2016-05-01

    When performing sensory tasks, knowing the potentially occurring goal-relevant and irrelevant stimulus events allows the establishment of selective attention sets, which result in enhanced sensory processing of goal-relevant events. In the auditory modality, such enhancements are reflected in the increased amplitude of the N1 ERP elicited by the onsets of task-relevant sounds. It has recently been suggested that ERPs to task-relevant sound offsets are similarly enhanced in a tone-focused state in comparison to a distracted one. The goal of the present study was to explore the influence of attention on ERPs elicited by sound offsets. ERPs elicited by tones in a duration-discrimination task were compared to ERPs elicited by the same tones in a non-tone-focused attentional setting. Tone offsets elicited a consistent, attention-dependent biphasic (positive-negative, P1-N1) ERP waveform for tone durations ranging from 150 to 450 ms. The evidence, however, did not support the notion that the offset-related ERPs reflected an offset-specific attention set: the offset-related ERPs elicited in a duration-discrimination condition (in which offsets were task relevant) did not significantly differ from those elicited in a pitch-discrimination condition (in which the offsets were task irrelevant). Although an N2 reflecting the processing of offsets in task-related terms contributed to the observed waveform, this contribution was separable from the offset-related P1 and N1. The results demonstrate that when tones are attended, offset-related ERPs may substantially overlap endogenous ERP activity in the postoffset interval irrespective of tone duration, and attention differences may cause ERP differences in such postoffset intervals. © 2016 Society for Psychophysiological Research.

  19. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Atypical pattern of discriminating sound features in adults with Asperger syndrome as reflected by the mismatch negativity.

    Science.gov (United States)

    Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R

    2007-04-01

    Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits of social interaction and abnormal perception, like hypo- or hypersensitivity in reacting to sounds and discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.
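
    MMN responses of the kind reported here are conventionally derived as deviant-minus-standard difference waves, with amplitude measured in a post-stimulus window. A toy sketch (synthetic ERPs; the window and scale are assumptions):

```python
# Deviant-minus-standard difference wave and mean MMN amplitude in a window.
import numpy as np

fs = 500.0                                   # Hz, assumed EEG sampling rate
t = np.arange(-0.1, 0.4, 1 / fs)             # epoch from -100 to 400 ms

rng = np.random.default_rng(2)
standard_erp = rng.normal(0, 0.2, t.size)    # toy averaged ERP to standards
deviant_erp = standard_erp - 1.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))  # extra negativity ~150 ms

difference_wave = deviant_erp - standard_erp

window = (t >= 0.1) & (t <= 0.2)             # 100-200 ms analysis window (assumed)
mmn_amplitude = difference_wave[window].mean()
mmn_latency = t[window][np.argmin(difference_wave[window])]
print(f"MMN mean amplitude = {mmn_amplitude:.2f} (a.u.) at ~{mmn_latency * 1000:.0f} ms")
```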

  1. Action recognition and movement direction discrimination tasks are associated with different adaptation patterns

    Directory of Open Access Journals (Sweden)

    Stephan eDe La Rosa

    2016-02-01

    The ability to discriminate between different actions is essential for action recognition and social interaction. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g. left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g. when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target the visual processes specific to action and direction discrimination. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently either categorized the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms.

  2. Sound segregation via embedded repetition is robust to inattention.

    Science.gov (United States)

    Masutomi, Keiko; Barascud, Nicolas; Kashino, Makio; McDermott, Josh H; Chait, Maria

    2016-03-01

    The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention. (c) 2016 APA, all rights reserved.

  3. Concentrated pitch discrimination modulates auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2010-03-31

    This study examined the notion that auditory discrimination is a requisite for attention-related modulation of the auditory brainstem response (ABR) during contralateral noise exposure. While the right ear was exposed continuously to white noise at an intensity of 60-80 dB sound pressure level, tone pips at 80 dB sound pressure level were delivered to the left ear through either single-stimulus or oddball procedures. Participants read (ignore task) or counted target tones (attend task) during stimulation. The oddball but not the single-stimulus procedure elicited task-related modulations in both early (ABR) and late (processing negativity) event-related potentials simultaneously. The elicitation of attention-related ABR modulation during contralateral noise exposure is thus considered to require auditory discrimination and is evidently corticofugal in nature.

  4. Subcortical plasticity following perceptual learning in a pitch discrimination task

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.

    2011-01-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pi...

  5. Effects of task-switching on neural representations of ambiguous sound input.

    Science.gov (United States)

    Sussman, Elyse S; Bregman, Albert S; Lee, Wei-Wei

    2014-11-01

    The ability to perceive discrete sound streams in the presence of competing sound sources relies on multiple mechanisms that organize the mixture of the auditory input entering the ears. Many studies have focused on mechanisms that contribute to integrating sounds that belong together into one perceptual stream (integration) and segregating those that come from different sound sources (segregation). However, little is known about mechanisms that allow us to perceive individual sound sources within a dynamically changing auditory scene, when the input may be ambiguous, and heard as either integrated or segregated. This study tested the question of whether focusing on one of two possible sound organizations suppressed representation of the alternative organization. We presented listeners with ambiguous input and cued them to switch between tasks that used either the integrated or the segregated percept. Electrophysiological measures indicated which organization was currently maintained in memory. If mutual exclusivity at the neural level was the rule, attention to one of two possible organizations would preclude neural representation of the other. However, significant MMNs were elicited to both the target organization and the unattended, alternative organization, along with the target-related P3b component elicited only to the designated target organization. Results thus indicate that both organizations (integrated and segregated) were simultaneously maintained in memory regardless of which task was performed. Focusing attention to one aspect of the sounds did not abolish the alternative, unattended organization when the stimulus input was ambiguous. In noisy environments, such as walking on a city street, rapid and flexible adaptive processes are needed to help facilitate rapid switching to different sound sources in the environment. Having multiple representations available to the attentive system would allow for such flexibility, needed in everyday situations to

  6. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers as less suitable for a leadership position, and male (but not female) listeners took distance from gay speakers. Together, this research demonstrates that having a gay/lesbian rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  7. Dual-task interference effects on cross-modal numerical order and sound intensity judgments: the more the louder?

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Nepon, Hillary; Leboe-McGowan, Launa C

    2017-09-01

    In the current study, cross-task interactions between number order and sound intensity judgments were assessed using a dual-task paradigm. Participants first categorized numerical sequences composed of Arabic digits as either ordered (ascending, descending) or non-ordered. Following each number sequence, participants then had to judge the intensity level of a target sound. Experiment 1 emphasized processing the two tasks independently (serial processing), while Experiments 2 and 3 emphasized processing the two tasks simultaneously (parallel processing). Cross-task interference occurred only when the task required parallel processing and was specific to ascending numerical sequences, which led to a higher proportion of louder sound intensity judgments. In Experiment 4 we examined whether this unidirectional interaction was the result of participants misattributing enhanced processing fluency experienced on ascending sequences as indicating a louder target sound. The unidirectional finding could not be entirely attributed to misattributed processing fluency, and may also be connected to experientially derived conceptual associations between ascending number sequences and greater magnitude, consistent with conceptual mapping theory.

  8. Age-related emotional bias in processing two emotionally valenced tasks.

    Science.gov (United States)

    Allen, Philip A; Lien, Mei-Ching; Jardin, Elliott

    2017-01-01

    Previous studies suggest that older adults process positive emotions more efficiently than negative emotions, whereas younger adults show the reverse effect. We examined whether this age-related difference in emotional bias still occurs when attention is engaged in two emotional tasks. We used a psychological refractory period paradigm and varied the emotional valence of Task 1 and Task 2. In both experiments, Task 1 was emotional face discrimination (happy vs. angry faces) and Task 2 was sound discrimination (laugh, punch, vs. cork pop in Experiment 1 and laugh vs. scream in Experiment 2). The backward emotional correspondence effect for positively and negatively valenced Task 2 on Task 1 was measured. In both experiments, younger adults showed a backward correspondence effect from a negatively valenced Task 2, suggesting parallel processing of negatively valenced stimuli. Older adults showed similar negativity bias in Experiment 2 with a more salient negative sound ("scream" relative to "punch"). These results are consistent with an arousal-bias competition model [Mather and Sutherland (Perspectives on Psychological Science 6:114-133, 2011)], suggesting that emotional arousal modulates top-down attentional control settings (emotional regulation) with age.

  9. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo

  10. Speech discrimination difficulties in High-Functioning Autism Spectrum Disorder are likely independent of auditory hypersensitivity.

    Directory of Open Access Journals (Sweden)

    William Andrew Dunlop

    2016-08-01

    Autism Spectrum Disorder (ASD), characterised by impaired communication skills and repetitive behaviours, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants.

  11. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks.

    Science.gov (United States)

    Harinen, Kirsi; Rinne, Teemu

    2013-08-15

    We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  13. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, the statistics of binaural cues depend on the acoustic properties and the spatial configuration of the environment. The distribution of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
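
    The ICA step described in this record can be sketched as learning binaural basis functions from short two-channel waveform segments. The code below uses scikit-learn's FastICA on a synthetic stand-in for the scene recordings; the window length and component count are arbitrary choices, not the study's settings.

```python
# Learn binaural "basis functions" with ICA over short two-channel segments.
import numpy as np
from sklearn.decomposition import FastICA

fs, dur, win = 16000, 5.0, 256                       # assumed rate, duration, window length
rng = np.random.default_rng(3)
binaural = rng.laplace(size=(int(fs * dur), 2))      # stand-in left/right waveforms

# Cut into non-overlapping windows; each row holds the left and right samples of one segment.
n_win = binaural.shape[0] // win
segments = binaural[:n_win * win].reshape(n_win, win, 2)
data = segments.reshape(n_win, win * 2)

ica = FastICA(n_components=32, random_state=0, max_iter=1000)
ica.fit(data)
basis_functions = ica.mixing_                        # shape (2*win, n_components)
left_parts = basis_functions[:win, :]                # left-ear part of each basis function
right_parts = basis_functions[win:, :]               # right-ear part
print(basis_functions.shape)
```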

  14. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity

    NARCIS (Netherlands)

    Jansson-Verkasalo, E.; Eggers, K.; Järvenpää, A.; Suominen, K.; Van Den Bergh, B.R.H.; de Nil, L.; Kujala, T.

    2014-01-01

    Purpose Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are

  15. Modulation of radial blood flow during Braille character discrimination task.

    Science.gov (United States)

    Murata, Jun; Matsukawa, K; Komine, H; Tsuchimochi, H

    2012-03-01

    Human hands are excellent at performing sensory and motor functions. We hypothesized that blood flow in the hand is dynamically regulated by sympathetic outflow during concentrated finger perception. To test this hypothesis, we measured radial blood flow (RBF), radial vascular conductance (RVC), heart rate (HR), and arterial blood pressure (AP) during Braille reading performed under a blind condition in nine healthy subjects. The subjects were instructed to read a flat plate with raised letters (Braille reading) for 30 s with the forefinger, and to touch a blank plate as a control for the Braille discrimination procedure. HR and AP increased slightly during Braille reading but remained unchanged during touching of the blank plate. RBF and RVC were reduced during the Braille character discrimination task (by -46% and -49%, respectively), and these changes were much greater than those during touching of the blank plate (-20% and -20%, respectively). These results suggest that the distribution of blood flow to the hand is modulated via sympathetic nerve activity during concentrated finger perception.
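
    Vascular conductance of the kind reported here is conventionally computed as flow divided by mean arterial pressure, and task effects as percentage change from baseline. A minimal sketch with invented example values:

```python
# Vascular conductance = flow / pressure; task effect as percent change from baseline.
def vascular_conductance(flow, mean_arterial_pressure):
    """Conductance in e.g. ml/min per mmHg."""
    return flow / mean_arterial_pressure

def percent_change(task_value, baseline_value):
    return 100.0 * (task_value - baseline_value) / baseline_value

# Invented numbers, chosen only to illustrate the calculation.
baseline_rvc = vascular_conductance(flow=10.0, mean_arterial_pressure=90.0)
task_rvc = vascular_conductance(flow=5.2, mean_arterial_pressure=95.0)
print(f"RVC change during Braille reading: {percent_change(task_rvc, baseline_rvc):.0f}%")
```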

  16. Investigating the time course of tactile reflexive attention using a non-spatial discrimination task.

    Science.gov (United States)

    Miles, Eleanor; Poliakoff, Ellen; Brown, Richard J

    2008-06-01

    Peripheral cues are thought to facilitate responses to stimuli presented at the same location because they lead to exogenous attention shifts. Facilitation has been observed in numerous studies of visual and auditory attention, but there have been only four demonstrations of tactile facilitation, all in studies with potential confounds. Three studies used a spatial (finger versus thumb) discrimination task, where the cue could have provided a spatial framework that might have assisted the discrimination of subsequent targets presented on the same side as the cue. The final study circumvented this problem by using a non-spatial discrimination; however, the cues were informative and interspersed with visual cues which may have affected the attentional effects observed. In the current study, therefore, we used a non-spatial tactile frequency discrimination task following a non-informative tactile white noise cue. When the target was presented 150 ms after the cue, we observed faster discrimination responses to targets presented on the same side compared to the opposite side as the cue; by 1000 ms, responses were significantly faster to targets presented on the opposite side to the cue. Thus, we demonstrated that tactile attentional facilitation can be observed in a non-spatial discrimination task, under unimodal conditions and with entirely non-predictive cues. Furthermore, we provide the first demonstration of significant tactile facilitation and tactile inhibition of return within a single experiment.

  17. The influence of short-term memory on standard discrimination and cued identification olfactory tasks.

    Science.gov (United States)

    Zucco, Gesualdo M; Hummel, Thomas; Tomaiuolo, Francesco; Stevenson, Richard J

    2014-01-30

    Amongst the techniques used to assess olfactory function, discrimination and cued identification are the most prone to the influence of odour short-term memory (STM). The discrimination task requires participants to detect the odd one out of three presented odourants; as re-smelling is not permitted, an unintended STM load may arise even though the task purports to assess discrimination ability. Analogously, the cued identification task requires participants to smell an odour and then select a label from three or four alternatives; as the interval between smelling and reading each label increases, this too imposes an STM load, even though the task aims to measure identification ability. We tested whether modifying task design to reduce STM load improves performance on these tests. We examined five age groups of participants (Adolescents, Young adults, Middle-aged, Elderly, very Elderly), some of whom should be more prone to the effects of STM load than others, on standard and modified tests of discrimination and identification. We found that using a technique to reduce STM load improved performance, especially for the very Elderly and Adolescent groups, indicating that the modified designs prevent this source of error. The findings indicate that STM load can adversely affect performance in groups vulnerable to memory impairment (i.e., very Elderly) and in those who may still be acquiring memory-based representations of familiar odours (i.e., Adolescents). It may be that adults in general would be even more sensitive to the effects of olfactory STM load reduction if the odour-related task were more difficult. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues ("forest", "people", "traffic"; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3), whereas an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
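    A minimal sketch of the kind of multivariate pattern analysis described above, including the cross-classification step (train on real-sound patterns, test on imagined-sound patterns). The voxel-pattern arrays, labels, and the shared "category pattern" used to give the simulated data some structure are placeholders, not the authors' data or pipeline.

    ```python
    # Sketch of linear-SVM decoding and cross-classification on simulated voxel patterns.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 90, 400
    y = np.repeat([0, 1, 2], n_trials // 3)                  # bird / people / traffic

    # hypothetical category-specific pattern shared by real and imagined trials
    category_patterns = rng.standard_normal((3, n_voxels)) * 0.5
    X_real = rng.standard_normal((n_trials, n_voxels)) + category_patterns[y]
    X_imag = rng.standard_normal((n_trials, n_voxels)) + category_patterns[y]

    clf = SVC(kernel="linear", C=1.0)

    # within-condition decoding (5-fold cross-validation)
    acc_real = cross_val_score(clf, X_real, y, cv=5).mean()

    # cross-classification: train on real-sound patterns, test on imagined patterns
    clf.fit(X_real, y)
    acc_cross = clf.score(X_imag, y)

    print(f"real-sound decoding accuracy: {acc_real:.2f}")
    print(f"real-to-imagined cross-classification accuracy: {acc_cross:.2f}")
    ```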

  19. Discriminating talent-identified junior Australian football players using a video decision-making task.

    Science.gov (United States)

    Woods, Carl T; Raynor, Annette J; Bruce, Lyndell; McDonald, Zane

    2016-01-01

    This study examined whether a video decision-making task could discriminate talent-identified junior Australian football players from their non-talent-identified counterparts. Participants were recruited from the 2013 under-18 (U18) West Australian Football League competition and classified into two groups: talent-identified (State U18 Academy representatives; n = 25; 17.8 ± 0.5 years) and non-talent-identified (non-State U18 Academy selection; n = 25; 17.3 ± 0.6 years). Participants completed a video decision-making task consisting of 26 clips sourced from Australian Football League game-day footage, recording responses on a sheet provided. A score of "1" was given for correct and "0" for incorrect responses, with the participant's total score used as the criterion value. One-way analysis of variance tested the main effect of "status" on the task criterion, whilst a bootstrapped receiver operating characteristic (ROC) curve assessed the discriminant ability of the task. An area under the curve (AUC) of 1 (100%) represented perfect discrimination. Between-group differences on the task criterion were evident, and the ROC analysis indicated that the task discriminated between talent-identified and non-talent-identified participants. Future research should investigate the mechanisms leading to the superior decision-making observed in the talent-identified group.
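    A rough sketch of the bootstrapped ROC analysis this abstract refers to: compute the AUC for group status predicted from task scores, then bootstrap participants for a confidence interval. The scores below are simulated placeholders, not the study's data.

    ```python
    # Bootstrapped ROC/AUC for discriminating two groups from a task score.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    scores = np.clip(np.concatenate([rng.normal(19, 3, 25),      # talent-identified
                                     rng.normal(15, 3, 25)]), 0, 26).round()
    status = np.concatenate([np.ones(25), np.zeros(25)])          # 1 = talent-identified

    auc = roc_auc_score(status, scores)

    # percentile bootstrap over resampled participants
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, len(scores), len(scores))
        if len(np.unique(status[idx])) == 2:                      # need both groups in the resample
            boot.append(roc_auc_score(status[idx], scores[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"AUC = {auc:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
    ```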

  20. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    Science.gov (United States)

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. The neural network involved in a bimanual tactile-tactile matching discrimination task: a functional imaging study at 3 T

    Energy Technology Data Exchange (ETDEWEB)

    Habas, Christophe; Cabanis, Emmanuel A. [UPMC Paris 6, Service de NeuroImagerie, Hopital des Quinze-Vingts, Paris (France)

    2007-08-15

    The cerebral and cerebellar network involved in bimanual object recognition was studied with blood oxygenation level-dependent (BOLD) functional magnetic resonance imaging (fMRI). Nine healthy right-handed volunteers were scanned (1) while performing bilateral finger movements (nondiscrimination motor task), and (2) while performing a bimanual tactile-tactile matching discrimination task using small chess pieces (tactile discrimination task). Extensive activations were specifically observed in the parietal (SII, superior parietal lobule), insular, prefrontal, cingulate and neocerebellar cortices (HVIII), with a left predominance in motor areas, during the tactile discrimination task compared with the nondiscrimination motor task. Bimanual tactile-tactile matching discrimination recruits multiple sensorimotor and associative cerebral and neocerebellar networks (including the cerebellar second homunculus, HVIII), comparable to the neural circuits involved in unimanual tactile object recognition. (orig.)

  2. Valence of Facial Cues Influences Sheep Learning in a Visual Discrimination Task

    Directory of Open Access Journals (Sweden)

    Lucille G. A. Bellegarde

    2017-11-01

    Sheep are one of the most studied farm species in terms of their ability to process information from faces, but little is known about their face-based emotion recognition abilities. We investigated (a) whether sheep could use images of sheep faces taken in situations of varying valence as cues in a simultaneous discrimination task and (b) whether the valence of the situation affects their learning performance. To accomplish this, we photographed faces of sheep in three situations inducing emotional states of neutral valence (ruminating in the home pen) or negative valence (social isolation or aggressive interaction). Sheep (n = 35) first had to learn a discrimination task with colored cards. Animals that reached the learning criterion (n = 16) were then presented with pairs of images of the face of a single individual taken in the neutral situation and in one of the negative situations. Finally, sheep had to generalize what they had learned to new pairs of images of faces taken in the same situations, but of a different conspecific. All sheep that learned the discrimination task with colored cards reached the learning criterion with images of faces. Sheep that had to associate a negative image with a food reward learned faster than sheep that had to associate a neutral image with a reward. With the exception of sheep from the aggression-rewarded group, sheep generalized this discrimination to images of faces of different individuals. Our results suggest that sheep can perceive the emotional valence displayed on faces of conspecifics and that this valence affects learning processes.

  3. A novel perceptual discrimination training task: Reducing fear overgeneralization in the context of fear learning.

    Science.gov (United States)

    Ginat-Frolich, Rivkah; Klein, Zohar; Katz, Omer; Shechner, Tomer

    2017-06-01

    Generalization is an adaptive learning mechanism, but it can be maladaptive when it occurs in excess. A novel perceptual discrimination training task was therefore designed to moderate fear overgeneralization. We hypothesized that improvement in basic perceptual discrimination would translate into lower fear overgeneralization in affective cues. Seventy adults completed a fear-conditioning task prior to being allocated into training or placebo groups. Predesignated geometric shape pairs were constructed for the training task. A target shape from each pair was presented. Thereafter, participants in the training group were shown both shapes and asked to identify the image that differed from the target. Placebo task participants only indicated the location of each shape on the screen. All participants then viewed new geometric pairs and indicated whether they were identical or different. Finally, participants completed a fear generalization test consisting of perceptual morphs ranging from the CS+ to the CS-. Fear-conditioning was observed through physiological and behavioural measures. Furthermore, the training group performed better than the placebo group on the assessment task and exhibited decreased fear generalization in response to threat/safety cues. The findings offer evidence for the effectiveness of the novel discrimination training task, setting the stage for future research with clinical populations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are judged to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey calls) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for the number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, for species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.
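    Performance in a same/different (delayed matching-to-sample) task like this is often summarized with the signal-detection measure d'. The snippet below is a generic illustration of that computation with hypothetical hit and false-alarm counts; it is not the analysis reported in the study.

    ```python
    # Generic d' computation for a same/different task.
    from scipy.stats import norm

    def dprime(hits, misses, fas, crs):
        """d' with a log-linear correction to avoid infinite z-scores at 0 or 1."""
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (fas + 0.5) / (fas + crs + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # e.g. 70 hits / 30 misses on match trials, 25 false alarms / 75 correct rejections
    print(f"d' = {dprime(70, 30, 25, 75):.2f}")
    ```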

  5. Serial recall of rhythms and verbal sequences: Impacts of concurrent tasks and irrelevant sound.

    Science.gov (United States)

    Hall, Debbora; Gathercole, Susan E

    2011-08-01

    Rhythmic grouping enhances verbal serial recall, yet very little is known about memory for rhythmic patterns. The aim of this study was to compare the cognitive processes supporting memory for rhythmic and verbal sequences using a range of concurrent tasks and irrelevant sounds. In Experiment 1, both concurrent articulation and paced finger tapping during presentation and during a retention interval impaired rhythm recall, while letter recall was only impaired by concurrent articulation. In Experiments 2 and 3, irrelevant sound consisted of irrelevant speech or tones, changing-state or steady-state sound, and syncopated or paced sound during presentation and during a retention interval. Irrelevant speech was more damaging to rhythm and letter recall than was irrelevant tone sound, but there was no effect of changing state on rhythm recall, while letter recall accuracy was disrupted by changing-state sound. Pacing of sound did not consistently affect either rhythm or letter recall. There are similarities in the way speech and rhythms are processed that appear to extend beyond reliance on temporal coding mechanisms involved in serial-order recall.

  6. Temporal and spectral contributions to musical instrument identification and discrimination among cochlear implant users.

    Science.gov (United States)

    Prentiss, Sandra M; Friedland, David R; Fullmer, Tanner; Crane, Alison; Stoddard, Timothy; Runge, Christina L

    2016-09-01

    To investigate the contributions of envelope and fine-structure to the perception of timbre by cochlear implant (CI) users as compared to normal hearing (NH) listeners. This was a prospective cohort comparison study. Normal hearing and cochlear implant patients were tested. Three experiments were performed in sound field using musical notes altered to affect the characteristic pitch of an instrument and the acoustic envelope. Experiment 1 assessed the ability to identify the instrument playing each note, while experiments 2 and 3 assessed the ability to discriminate the different stimuli. Normal hearing subjects performed better than CI subjects in all instrument identification tasks, reaching statistical significance for 4 of 5 stimulus conditions. Within the CI population, acoustic envelope modifications did not significantly affect instrument identification or discrimination. With envelope and pitch cues removed, fine structure discrimination performance was similar between normal hearing and CI users for the majority of conditions, but some specific instrument comparisons were significantly more challenging for CI users. Cochlear implant users perform significantly worse than normal hearing listeners on tasks of instrument identification. However, cochlear implant listeners can discriminate differences in envelope and some fine structure components of musical instrument sounds as well as normal hearing listeners. The results indicated that certain fine structure cues are important for cochlear implant users to make discrimination judgments, and therefore may affect interpretation toward associating with a specific instrument for identification.

  7. Constraints on decay of environmental sound memory in adult rats.

    Science.gov (United States)

    Sakai, Masashi

    2006-11-27

    When adult rats are pretreated with a 48-h-long 'repetitive nonreinforced sound exposure', performance in two-sound discriminative operant conditioning transiently improves. We have previously shown that this 'sound exposure-enhanced discrimination' depends on an enhancement of the perceptual capacity of the auditory cortex. This study investigated the principles governing the decay of sound exposure-enhanced discrimination. The enhancement disappeared within approximately 72 h if animals were deprived of environmental sounds after sound exposure, and this shortened to less than approximately 60 h if they were exposed to environmental sounds in the animal room. Sound deprivation itself exerted no clear effects. These findings suggest that the memory of a passively exposed, behaviorally irrelevant sound signal does not merely decay over its intrinsic lifetime but is also degraded by other incoming signals.

  8. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    Science.gov (United States)

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, and to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. More musically sophisticated Finnish speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect for sound features that correspond to the phonology of the native language, leaving room for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in real-world musical situations. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the interaction of specific language features with musical experience. PMID:28450829

  9. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.

    Science.gov (United States)

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers, and to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues, since in Finnish vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers, either in the auditory brainstem response or in behavioral tasks. More musically sophisticated Finnish speakers do, however, show enhanced pitch discrimination compared to Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect for sound features that correspond to the phonology of the native language, leaving room for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in real-world musical situations. These results have implications for research into the specificity of plasticity in the auditory system, as well as into the interaction of specific language features with musical experience.

  10. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, and the presence or absence of an abnormality is judged from the magnitude of these synchronized components. The synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not synchronized with the rotation, based on the knowledge that the acoustic components generated in a normal state are a kind of resonance sound and are not precisely synchronized with the rotation rate. Abnormal sounds of a rotating body, on the other hand, are often generated by forces accompanying the rotation, so they can be detected by extracting only the rotation-synchronized components. Since the normally generated acoustic components are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing is avoided and, as a result, abnormal sound detection sensitivity is improved. Further, since the device discriminates the occurrence of abnormal sounds from the actually detected sounds, other frequency components that are predicted but not actually generated are not removed, which further improves detection sensitivity. (N.H.)

  11. Phoneme categorization and discrimination in younger and older adults: a comparative analysis of perceptual, lexical, and attentional factors.

    Science.gov (United States)

    Mattys, Sven L; Scharenborg, Odette

    2014-03-01

    This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language. (c) 2014 APA, all rights reserved.
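    Categorization along a stepped phoneme continuum like the eight-step m-to-n series described above is commonly summarized by fitting a logistic psychometric function, whose midpoint gives the category boundary and whose slope indexes categorization sharpness. The sketch below illustrates that fit; the response proportions are invented for illustration, not taken from the study.

    ```python
    # Fit a logistic psychometric function to categorization proportions on an 8-step continuum.
    import numpy as np
    from scipy.optimize import curve_fit

    steps = np.arange(1, 9)                                              # continuum step (m ... n)
    p_n = np.array([0.03, 0.05, 0.12, 0.35, 0.68, 0.88, 0.95, 0.98])     # proportion of "n" responses

    def logistic(x, x0, k):
        return 1.0 / (1.0 + np.exp(-k * (x - x0)))

    (x0, k), _ = curve_fit(logistic, steps, p_n, p0=[4.5, 1.0])
    print(f"category boundary at step {x0:.2f}, slope {k:.2f}")
    ```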

  12. Temporal and spectral contributions to musical instrument identification and discrimination among cochlear implant users

    Institute of Scientific and Technical Information of China (English)

    Sandra M. Prentiss; David R. Friedland; Tanner Fullmer; Alison Crane; Timothy Stoddard; Christina L. Runge

    2016-01-01

    Objective: To investigate the contributions of envelope and fine-structure to the perception of timbre by cochlear implant (CI) users as compared to normal hearing (NH) listeners. Methods: This was a prospective cohort comparison study. Normal hearing and cochlear implant patients were tested. Three experiments were performed in sound field using musical notes altered to affect the characteristic pitch of an instrument and the acoustic envelope. Experiment 1 assessed the ability to identify the instrument playing each note, while experiments 2 and 3 assessed the ability to discriminate the different stimuli. Results: Normal hearing subjects performed better than CI subjects in all instrument identification tasks, reaching statistical significance for 4 of 5 stimulus conditions. Within the CI population, acoustic envelope modifications did not significantly affect instrument identification or discrimination. With envelope and pitch cues removed, fine structure discrimination performance was similar between normal hearing and CI users for the majority of conditions, but some specific instrument comparisons were significantly more challenging for CI users. Conclusions: Cochlear implant users perform significantly worse than normal hearing listeners on tasks of instrument identification. However, cochlear implant listeners can discriminate differences in envelope and some fine structure components of musical instrument sounds as well as normal hearing listeners. The results indicated that certain fine structure cues are important for cochlear implant users to make discrimination judgments, and therefore may affect interpretation toward associating with a specific instrument for identification.

  13. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics.

    Science.gov (United States)

    Sun, Xiuwen; Li, Xiaoling; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on existing research on sound symbolism and crossmodal correspondence, this study carried out an extended investigation of cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was also no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity…

  14. Cortical activity patterns predict robust speech discrimination ability in noise

    Science.gov (United States)

    Shetake, Jai A.; Wolf, Jordan T.; Cheung, Ryan J.; Engineer, Crystal T.; Ram, Satyananda K.; Kilgard, Michael P.

    2012-01-01

    The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem. PMID:22098331
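    The classifier described above assigns each single-trial A1 spatiotemporal pattern to the speech sound whose trial-averaged pattern it most resembles. The sketch below illustrates that template-matching idea on simulated arrays (trials x recording sites x time bins) using Euclidean distance; it is a simplification and does not reproduce the study's relative-spike-timing analysis or its handling of stimulus onset.

    ```python
    # Template-matching classifier on simulated spatiotemporal activity patterns.
    import numpy as np

    rng = np.random.default_rng(2)
    n_trials, n_sites, n_bins, n_sounds = 40, 32, 100, 2
    templates = rng.standard_normal((n_sounds, n_sites, n_bins))        # average pattern per sound
    labels = rng.integers(0, n_sounds, n_trials)
    # simulated single trials: the corresponding template plus noise
    trials = templates[labels] + rng.standard_normal((n_trials, n_sites, n_bins)) * 2.0

    def classify(trial, templates):
        # assign the trial to the template with the smallest Euclidean distance
        dists = [np.linalg.norm(trial - t) for t in templates]
        return int(np.argmin(dists))

    pred = np.array([classify(tr, templates) for tr in trials])
    print(f"classification accuracy: {(pred == labels).mean():.2f}")
    ```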

  15. Categorization of common sounds by cochlear implanted and normal hearing adults.

    Science.gov (United States)

    Collett, E; Marx, M; Gaillard, P; Roby, B; Fraysse, B; Deguine, O; Barone, P

    2016-05-01

    Auditory categorization involves grouping of acoustic events along one or more shared perceptual dimensions which can relate to both semantic and physical attributes. This process involves both high level cognitive processes (categorization) and low-level perceptual encoding of the acoustic signal, both of which are affected by the use of a cochlear implant (CI) device. The goal of this study was twofold: I) compare the categorization strategies of CI users and normal hearing listeners (NHL) II) investigate if any characteristics of the raw acoustic signal could explain the results. 16 experienced CI users and 20 NHL were tested using a Free-Sorting Task of 16 common sounds divided into 3 predefined categories of environmental, musical and vocal sounds. Multiple Correspondence Analysis (MCA) and Hierarchical Clustering based on Principal Components (HCPC) show that CI users followed a similar categorization strategy to that of NHL and were able to discriminate between the three different types of sounds. However results for CI users were more varied and showed less inter-participant agreement. Acoustic analysis also highlighted the average pitch salience and average autocorrelation peak as being important for the perception and categorization of the sounds. The results therefore show that on a broad level of categorization CI users may not have as many difficulties as previously thought in discriminating certain kinds of sound; however the perception of individual sounds remains challenging. Copyright © 2016 Elsevier B.V. All rights reserved.
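    The study analysed its free-sorting data with MCA and HCPC; a simpler way to convey the same idea is to build a sound-by-sound dissimilarity matrix (how often two sounds were placed in different groups) and cluster it hierarchically. The sketch below does this with simulated sorts for 16 hypothetical sounds; it is an illustration of the approach, not the authors' analysis.

    ```python
    # Hierarchical clustering of a dissimilarity matrix derived from free-sorting data.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(3)
    n_sounds, n_participants = 16, 20
    true_cat = np.repeat([0, 1, 2], [6, 5, 5])                 # environmental / musical / vocal
    # each participant's sort: the true category with occasional random reassignments
    sorts = np.array([np.where(rng.random(n_sounds) < 0.8, true_cat,
                               rng.integers(0, 3, n_sounds)) for _ in range(n_participants)])

    # dissimilarity = proportion of participants who put the two sounds in different groups
    diss = np.zeros((n_sounds, n_sounds))
    for i in range(n_sounds):
        for j in range(n_sounds):
            diss[i, j] = np.mean(sorts[:, i] != sorts[:, j])

    Z = linkage(squareform(diss, checks=False), method="average")
    print(fcluster(Z, t=3, criterion="maxclust"))              # recovered cluster labels
    ```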

  16. Cascaded Amplitude Modulations in Sound Texture Perception

    Directory of Open Access Journals (Sweden)

    Richard McWalter

    2017-09-01

    Sound textures, such as crackling fire or chirping crickets, represent a broad class of sounds defined by their homogeneous temporal structure. It has been suggested that the perception of texture is mediated by time-averaged summary statistics measured from early auditory representations. In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures—stimuli generated using time-averaged statistics measured from real-world textures. In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model deviants that lacked second-order modulation rate sensitivity. Lastly, textures that included second-order amplitude modulations appeared to be discriminated using a time-averaging process. Overall, our results demonstrate that the inclusion of second-order modulation analysis generates improvements in the perceived quality of synthetic textures compared to the first-order modulation analysis considered in previous approaches.
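    A toy sketch of the cascaded (second-order) amplitude-modulation analysis this kind of texture model builds on: extract a subband envelope, band-pass it at a first-order modulation rate, then measure the slower modulation of that envelope. The filter bands, rates, and summary metric below are illustrative assumptions, not the parameters of the published model.

    ```python
    # Cascaded envelope analysis: subband envelope -> first-order modulation band -> second-order modulation.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert, resample_poly

    fs = 16000
    t = np.arange(0, 2.0, 1 / fs)
    # noise carrier whose 30 Hz envelope is itself modulated at 3 Hz ("beating" envelope)
    env = (1 + np.sin(2 * np.pi * 3 * t)) * (1 + np.sin(2 * np.pi * 30 * t))
    x = env * np.random.default_rng(4).standard_normal(t.size)

    def bandpass_env(sig, lo, hi, fs):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return np.abs(hilbert(sosfiltfilt(sos, sig)))

    subband_env = bandpass_env(x, 2000, 4000, fs)              # "cochlear" subband envelope
    env_fs = 400
    subband_env = resample_poly(subband_env, env_fs, fs)       # downsample envelope for modulation analysis
    mod1 = bandpass_env(subband_env, 20, 40, env_fs)           # first-order modulation band (~30 Hz)
    sos2 = butter(2, [1, 6], btype="bandpass", fs=env_fs, output="sos")
    mod2 = sosfiltfilt(sos2, mod1)                             # second-order modulation (~3 Hz)
    print(f"second-order modulation depth: {mod2.std() / mod1.mean():.2f}")
    ```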

  17. Memory for product sounds: the effect of sound and label type.

    Science.gov (United States)

    Ozcan, Elif; van Egmond, René

    2007-11-01

    The (mnemonic) interactions between auditory, visual, and the semantic systems have been investigated using structurally complex auditory stimuli (i.e., product sounds). Six types of product sounds (air, alarm, cyclic, impact, liquid, mechanical) that vary in spectral-temporal structure were presented in four label type conditions: self-generated text, text, image, and pictogram. A memory paradigm that incorporated free recall, recognition, and matching tasks was employed. The results for the sound type suggest that the amount of spectral-temporal structure in a sound can be indicative for memory performance. Findings related to label type suggest that 'self' creates a strong bias for the retrieval and the recognition of sounds that were self-labeled; the density and the complexity of the visual information (i.e., pictograms) hinders the memory performance ('visual' overshadowing effect); and image labeling has an additive effect on the recall and matching tasks (dual coding). Thus, the findings suggest that the memory performances for product sounds are task-dependent.

  18. Cognitive and linguistic sources of variance in 2-year-olds’ speech-sound discrimination: a preliminary investigation.

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2014-02-01

    This preliminary investigation explored potential cognitive and linguistic sources of variance in 2-year-olds' speech-sound discrimination using the toddler change/no-change procedure and examined whether modifications would result in a procedure that can be used consistently with younger 2-year-olds. Twenty typically developing 2-year-olds completed the newly modified toddler change/no-change procedure. Behavioral tests and parent-report questionnaires were used to measure several cognitive and linguistic constructs. Stepwise linear regression was used to relate discrimination sensitivity to the cognitive and linguistic measures. In addition, discrimination results from the current experiment were compared with those from 2-year-old children tested in a previous experiment. Receptive vocabulary and working memory explained 56.6% of the variance in discrimination performance. Performance on the modified toddler change/no-change procedure did not differ from that in a previous investigation that used the original version of the procedure. The relationship between speech discrimination and receptive vocabulary and working memory provides further evidence that the procedure is sensitive to the strength of perceptual representations. The role of working memory might also suggest that there are specific subject-related, nonsensory factors limiting the applicability of the procedure to children who have not reached the necessary levels of cognitive and linguistic development.

  19. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Science.gov (United States)

    Sun, Xiuwen; Ji, Lingyu; Han, Feng; Wang, Huifen; Liu, Yang; Chen, Yao; Lou, Zhiyuan; Li, Zhuoyun

    2018-01-01

    Based on existing research on sound symbolism and crossmodal correspondence, this study carried out an extended investigation of cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was also no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity…

  20. An extended research of crossmodal correspondence between color and sound in psychology and cognitive ergonomics

    Directory of Open Access Journals (Sweden)

    Xiuwen Sun

    2018-03-01

    Based on existing research on sound symbolism and crossmodal correspondence, this study carried out an extended investigation of cross-modal correspondence between various sound attributes and color properties in a group of non-synesthetes. In Experiment 1, we assessed the associations between each property of sounds and colors. Twenty sounds with five auditory properties (pitch, roughness, sharpness, tempo and discontinuity), each varied in four levels, were used as the sound stimuli. Forty-nine colors with different hues, saturation and brightness were used to match to those sounds. Results revealed that besides pitch and tempo, roughness and sharpness also played roles in sound-color correspondence. Reaction times for sound-hue matches were slightly longer than those for sound-lightness matches. In Experiment 2, a speeded target discrimination task was used to assess whether the associations between sound attributes and color properties could invoke natural cross-modal correspondence and improve participants' cognitive efficiency in cognitive tasks. Several typical sound-color pairings were selected according to the results of Experiment 1. Participants were divided into two groups (congruent and incongruent). In each trial participants had to judge whether the presented color could appropriately be associated with the sound stimuli. Results revealed that participants responded more quickly and accurately in the congruent group than in the incongruent group. There was also no significant difference in reaction times or error rates between sound-hue and sound-lightness. The results of Experiments 1 and 2 indicate the existence of a robust crossmodal correspondence between multiple attributes of sound and color, which also has a strong influence on cognitive tasks. The inconsistency of the reaction times between sound-hue and sound-lightness in Experiments 1 and 2 is probably owing to the difference in experimental protocol, which indicates that the complexity…

  1. Recurrence of task set-related MEG signal patterns during auditory working memory.

    Science.gov (United States)

    Peters, Benjamin; Bledowski, Christoph; Rieder, Maria; Kaiser, Jochen

    2016-06-01

    Processing of auditory spatial and non-spatial information in working memory has been shown to rely on separate cortical systems. While previous studies have demonstrated differences in spatial versus non-spatial processing from the encoding of to-be-remembered stimuli onwards, here we investigated whether such differences would be detectable already prior to presentation of the sample stimulus. We analyzed broad-band magnetoencephalography data from 15 healthy adults during an auditory working memory paradigm starting with a visual cue indicating the task-relevant stimulus feature for a given trial (lateralization or pitch) and a subsequent 1.5-s pre-encoding phase. This was followed by a sample sound (0.2 s), the delay phase (0.8 s) and a test stimulus (0.2 s) after which participants made a match/non-match decision. Linear discriminant functions were trained to decode task-specific signal patterns throughout the task, and temporal generalization was used to assess whether the neural codes discriminating between the tasks during the pre-encoding phase would recur during later task periods. The spatial versus non-spatial tasks could indeed be discriminated from the onset of the cue onwards, and decoders trained during the pre-encoding phase successfully discriminated the tasks during both sample stimulus encoding and during the delay phase. This demonstrates that task-specific neural codes are established already before the memorandum is presented and that the same patterns are reestablished during stimulus encoding and maintenance. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
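    A sketch of the temporal-generalization logic described above: a linear discriminant is trained on sensor patterns at each training time point and tested at every time point, yielding a train-time x test-time accuracy matrix. The data here are simulated placeholders (trials x sensors x time points) with an injected sustained task difference; the array sizes and decoding pipeline are illustrative assumptions, not the study's.

    ```python
    # Temporal-generalization decoding with a linear discriminant per time point.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(5)
    n_trials, n_sensors, n_times = 80, 50, 30
    X = rng.standard_normal((n_trials, n_sensors, n_times))
    y = np.repeat([0, 1], n_trials // 2)                        # spatial vs non-spatial task
    X[y == 1, :, 10:] += 0.4                                    # inject a sustained task difference

    gen = np.zeros((n_times, n_times))
    cv = StratifiedKFold(5, shuffle=True, random_state=0)
    for train_idx, test_idx in cv.split(X[:, :, 0], y):
        for t_train in range(n_times):
            clf = LinearDiscriminantAnalysis().fit(X[train_idx, :, t_train], y[train_idx])
            for t_test in range(n_times):
                gen[t_train, t_test] += clf.score(X[test_idx, :, t_test], y[test_idx])
    gen /= cv.get_n_splits()
    print(f"mean on-diagonal decoding accuracy: {np.mean(np.diag(gen)):.2f}")
    ```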

  2. Dopamine modulates memory consolidation of discrimination learning in the auditory cortex.

    Science.gov (United States)

    Schicknick, Horst; Reichenbach, Nicole; Smalla, Karl-Heinz; Scheich, Henning; Gundelfinger, Eckart D; Tischmeyer, Wolfgang

    2012-03-01

    In Mongolian gerbils, the auditory cortex is critical for discriminating rising vs. falling frequency-modulated tones. Based on our previous studies, we hypothesized that dopaminergic inputs to the auditory cortex during and shortly after acquisition of the discrimination strategy control long-term memory formation. To test this hypothesis, we studied frequency-modulated tone discrimination learning of gerbils in a shuttle box GO/NO-GO procedure following differential treatments. (i) Pre-exposure of gerbils to the frequency-modulated tones at 1 day before the first discrimination training session severely impaired the accuracy of the discrimination acquired in that session during the initial trials of a second training session, performed 1 day later. (ii) Local injection of the D1/D5 dopamine receptor antagonist SCH-23390 into the auditory cortex after task acquisition caused a discrimination deficit of similar extent and time course as with pre-exposure. This effect was dependent on the dose and time point of injection. (iii) Injection of the D1/D5 dopamine receptor agonist SKF-38393 into the auditory cortex after retraining caused a further discrimination improvement at the beginning of subsequent sessions. All three treatments, which supposedly interfered with dopamine signalling during conditioning and/or retraining, had a substantial impact on the dynamics of the discrimination performance particularly at the beginning of subsequent training sessions. These findings suggest that auditory-cortical dopamine activity after acquisition of a discrimination of complex sounds and after retrieval of weak frequency-modulated tone discrimination memory further improves memory consolidation, i.e. the correct association of two sounds with their respective GO/NO-GO meaning, in support of future memory recall. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  3. Effect of fMRI acoustic noise on non-auditory working memory task: comparison between continuous and pulsed sound emitting EPI.

    Science.gov (United States)

    Haller, Sven; Bartsch, Andreas J; Radue, Ernst W; Klarhöfer, Markus; Seifritz, Erich; Scheffler, Klaus

    2005-11-01

    Conventional blood oxygenation level-dependent (BOLD) based functional magnetic resonance imaging (fMRI) is accompanied by substantial acoustic gradient noise. This noise can influence the performance as well as neuronal activations. Conventional fMRI typically has a pulsed noise component, which is a particularly efficient auditory stimulus. We investigated whether the elimination of this pulsed noise component in a recent modification of continuous-sound fMRI modifies neuronal activations in a cognitively demanding non-auditory working memory task. Sixteen normal subjects performed a letter variant n-back task. Brain activity and psychomotor performance was examined during fMRI with continuous-sound fMRI and conventional fMRI. We found greater BOLD responses in bilateral medial frontal gyrus, left middle frontal gyrus, left middle temporal gyrus, left hippocampus, right superior frontal gyrus, right precuneus and right cingulate gyrus with continuous-sound compared to conventional fMRI. Conversely, BOLD responses were greater in bilateral cingulate gyrus, left middle and superior frontal gyrus and right lingual gyrus with conventional compared to continuous-sound fMRI. There were no differences in psychomotor performance between both scanning protocols. Although behavioral performance was not affected, acoustic gradient noise interferes with neuronal activations in non-auditory cognitive tasks and represents a putative systematic confound.

  4. Discrimination task reveals differences in neural bases of tinnitus and hearing impairment.

    Directory of Open Access Journals (Sweden)

    Fatima T Husain

    We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone.

  5. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    Directory of Open Access Journals (Sweden)

    Jorge Torres-Marín

    2017-11-01

    Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistent with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery. Although research on gelotophobia has progressed significantly over the past two decades, no evidence exists concerning the potential effects of gelotophobia on reactions to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others' intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two experiments, participants performed a gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes' direction on faces displaying a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not play any special role in the observed differences between individuals with high vs. low trait gelotophobia. In Study 2 (N = 40), we replicated this pattern of gaze discrimination performance, even after controlling for individuals' scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others' emotions, nor a general incorrect attribution of affective features, such as valence…

  6. Face adaptation does not improve performance on search or discrimination tasks.

    Science.gov (United States)

    Ng, Minna; Boynton, Geoffrey M; Fine, Ione

    2008-01-04

    The face adaptation effect, as described by M. A. Webster and O. H. MacLin (1999), is a robust perceptual shift in the appearance of faces after a brief adaptation period. For example, prolonged exposure to Asian faces causes a Eurasian face to appear distinctly Caucasian. This adaptation effect has been documented for general configural effects, as well as for the facial properties of gender, ethnicity, expression, and identity. We began by replicating the finding that adaptation to ethnicity, gender, and a combination of both features induces selective shifts in category appearance. We then investigated whether this adaptation has perceptual consequences beyond a shift in the perceived category boundary by measuring the effects of adaptation on RSVP, spatial search, and discrimination tasks. Adaptation had no discernable effect on performance for any of these tasks.

  7. Vehicle surge detection and pathway discrimination by pedestrians who are blind: Effect of adding an alert sound to hybrid electric vehicles on performance.

    Science.gov (United States)

    Kim, Dae Shik; Emerson, Robert Wall; Naghshineh, Koorosh; Pliskow, Jay; Myers, Kyle

    2012-05-01

    This study examined the effect of adding an artificially generated alert sound to a quiet vehicle on its detectability and localizability with 15 visually impaired adults. When starting from a stationary position, the hybrid electric vehicle with an alert sound was detected significantly more quickly and reliably than either the identical vehicle without such an added sound or the comparable internal combustion engine vehicle. However, no significant difference was found between the vehicles with respect to how accurately the participants could discriminate the path of a given vehicle (straight vs. right turn). These results suggest that adding an artificial sound to a hybrid electric vehicle may help reduce delay in street crossing initiation by a blind pedestrian, but the benefit of such an alert sound may not be obvious in determining whether the vehicle in the near parallel lane proceeds straight through the intersection or turns right in front of the pedestrian.

  8. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  9. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    Science.gov (United States)

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities rather than their psychoacoustic abilities. However, we reasoned that the selection of participants and the parameters of the sound stimuli in that study might not have been appropriate. We therefore adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, the mismatch negativity (MMN) elicited by an oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  10. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are thus fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change their initial neural representation as a single event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.

  11. Brand Discrimination: An Implicit Measure of the Strength of Mental Brand Representations

    OpenAIRE

    Friedman, Mike; Leclercq, Thomas

    2015-01-01

    While mental associations between a brand and its marketing elements are an important part of brand equity, previous research has yet to provide a sound methodology to measure the strength of these links. The following studies present the development and validation of an implicit measure to assess the strength of mental representations of brand elements in the mind of the consumer. The measure described in this paper, which we call the Brand Discrimination task, requires participants to ident...

  12. Performance and strategy comparisons of human listeners and logistic regression in discriminating underwater targets.

    Science.gov (United States)

    Yang, Lixue; Chen, Kean

    2015-11-01

    To improve the design of underwater target recognition systems based on auditory perception, this study compared human listeners with automatic classifiers. Performance measures and strategies were compared across three discrimination experiments: between man-made and natural targets, between ships and submarines, and among three types of ships. In the experiments, the subjects were asked to assign a score to each sound based on how confident they were about the category to which it belonged, and logistic regression, representing linear discriminative models, completed three similar tasks using many auditory features. The results indicated that the performance of logistic regression improved as the ratio between inter- and intra-class differences became larger, whereas the performance of the human subjects was limited by their unfamiliarity with the targets. Logistic regression performed better than the human subjects in all tasks but the discrimination between man-made and natural targets, and the strategies employed by the best human subjects were similar to that of logistic regression. Logistic regression and several human subjects demonstrated similar performances when discriminating man-made and natural targets, but in this case, their strategies were not similar. An appropriate fusion of their strategies led to further improvement in recognition accuracy.
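
    As a rough illustration of the kind of automatic classifier referred to here, the sketch below fits a logistic regression to hypothetical auditory features; the feature set, sample counts and labels are invented and do not reproduce the study's actual sounds or features.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      # Hypothetical feature matrix: 200 sounds x 4 auditory features
      # (e.g. loudness, spectral centroid, roughness, modulation depth).
      X = rng.normal(size=(200, 4))
      # Hypothetical labels: 0 = natural target, 1 = man-made target.
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

      clf = LogisticRegression()
      # Cross-validated accuracy as a rough analogue of discrimination performance.
      print(cross_val_score(clf, X, y, cv=5).mean())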

  13. Brand discrimination: an implicit measure of the strength of mental brand representations.

    Science.gov (United States)

    Friedman, Mike; Leclercq, Thomas

    2015-01-01

    While mental associations between a brand and its marketing elements are an important part of brand equity, previous research has yet to provide a sound methodology to measure the strength of these links. The following studies present the development and validation of an implicit measure to assess the strength of mental representations of brand elements in the mind of the consumer. The measure described in this paper, which we call the Brand Discrimination task, requires participants to identify whether images of brand elements (e.g. color, logo, packaging) belong to a target brand or not. Signal detection theory (SDT) is used to calculate a Brand Discrimination index which gives a measure of overall recognition accuracy for a brand's elements in the context of its competitors. A series of five studies shows that the Brand Discrimination task can discriminate between strong and weak brands, increases when mental representations of brands are experimentally strengthened, is relatively stable across time, and can predict brand choice, independently and while controlling for other explicit and implicit brand evaluation measures. Together, these studies provide unique evidence for the importance of mental brand representations in marketing and consumer behavior, along with a research methodology to measure this important consumer-based brand attribute.
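
    A minimal sketch of how a signal-detection index of this kind can be computed, assuming that "belongs to the target brand" responses are scored as hits and false alarms; the counts and the log-linear correction are illustrative choices, not the authors' exact scoring procedure.

      from scipy.stats import norm

      def dprime(hits, misses, false_alarms, correct_rejections):
          # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
          hit_rate = (hits + 0.5) / (hits + misses + 1)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical response counts for one participant and one target brand.
      print(dprime(hits=42, misses=8, false_alarms=12, correct_rejections=38))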

  14. Brand discrimination: an implicit measure of the strength of mental brand representations.

    Directory of Open Access Journals (Sweden)

    Mike Friedman

    Full Text Available While mental associations between a brand and its marketing elements are an important part of brand equity, previous research has yet to provide a sound methodology to measure the strength of these links. The following studies present the development and validation of an implicit measure to assess the strength of mental representations of brand elements in the mind of the consumer. The measure described in this paper, which we call the Brand Discrimination task, requires participants to identify whether images of brand elements (e.g. color, logo, packaging) belong to a target brand or not. Signal detection theory (SDT) is used to calculate a Brand Discrimination index which gives a measure of overall recognition accuracy for a brand's elements in the context of its competitors. A series of five studies shows that the Brand Discrimination task can discriminate between strong and weak brands, increases when mental representations of brands are experimentally strengthened, is relatively stable across time, and can predict brand choice, independently and while controlling for other explicit and implicit brand evaluation measures. Together, these studies provide unique evidence for the importance of mental brand representations in marketing and consumer behavior, along with a research methodology to measure this important consumer-based brand attribute.

  15. Brand Discrimination: An Implicit Measure of the Strength of Mental Brand Representations

    Science.gov (United States)

    Friedman, Mike; Leclercq, Thomas

    2015-01-01

    While mental associations between a brand and its marketing elements are an important part of brand equity, previous research has yet to provide a sound methodology to measure the strength of these links. The following studies present the development and validation of an implicit measure to assess the strength of mental representations of brand elements in the mind of the consumer. The measure described in this paper, which we call the Brand Discrimination task, requires participants to identify whether images of brand elements (e.g. color, logo, packaging) belong to a target brand or not. Signal detection theory (SDT) is used to calculate a Brand Discrimination index which gives a measure of overall recognition accuracy for a brand’s elements in the context of its competitors. A series of five studies shows that the Brand Discrimination task can discriminate between strong and weak brands, increases when mental representations of brands are experimentally strengthened, is relatively stable across time, and can predict brand choice, independently and while controlling for other explicit and implicit brand evaluation measures. Together, these studies provide unique evidence for the importance of mental brand representations in marketing and consumer behavior, along with a research methodology to measure this important consumer-based brand attribute. PMID:25803845

  16. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
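
    The model itself operates on spike synchrony patterns, but the binaural timing cue it exploits can be shown more simply. The sketch below estimates an interaural time difference by cross-correlating two noise signals; the sample rate, delay and signals are invented, and this is not the paper's spiking model.

      import numpy as np

      fs = 44_100                        # sample rate in Hz (assumed)
      true_itd = 20                      # interaural delay in samples (assumed)
      rng = np.random.default_rng(1)
      source = rng.normal(size=fs)       # 1 s of noise standing in for a source
      left = source
      right = np.roll(source, true_itd)  # delayed copy at the other ear

      # Cross-correlate over a small range of lags and pick the peak.
      max_lag = 40
      lags = np.arange(-max_lag, max_lag + 1)
      corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
      est_itd = lags[int(np.argmax(corr))]
      print(est_itd, "samples =", est_itd / fs * 1e6, "microseconds")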

  17. Valence of facial cues influences sheep learning in a visual discrimination task

    OpenAIRE

    Bellegarde, Lucille; Erhard, Hans; Weiss, A.; Boissy, Alain; Haskell, M.J.

    2017-01-01

    Sheep are one of the most studied farm species in terms of their ability to process information from faces, but little is known about their face-based emotion recognition abilities. We investigated (a) whether sheep could use images of sheep faces taken in situation of varying valence as cues in a simultaneous discrimination task and (b) whether the valence of the situation affects their learning performance. To accomplish this, we photographed faces of sheep in three situations inducing emot...

  18. Valence of Facial Cues Influences Sheep Learning in a Visual Discrimination Task

    OpenAIRE

    Lucille G. A. Bellegarde; Lucille G. A. Bellegarde; Lucille G. A. Bellegarde; Hans W. Erhard; Alexander Weiss; Alain Boissy; Marie J. Haskell

    2017-01-01

    Sheep are one of the most studied farm species in terms of their ability to process information from faces, but little is known about their face-based emotion recognition abilities. We investigated (a) whether sheep could use images of sheep faces taken in situation of varying valence as cues in a simultaneous discrimination task and (b) whether the valence of the situation affects their learning performance. To accomplish this, we photographed faces of sheep in three situations inducing emot...

  19. Voluntary Exercise Improves Performance of a Discrimination Task through Effects on the Striatal Dopamine System

    Science.gov (United States)

    Eddy, Meghan C.; Stansfield, Katherine J.; Green, John T.

    2014-01-01

    We have previously demonstrated that voluntary exercise facilitates discrimination learning in a modified T-maze. There is evidence implicating the dorsolateral striatum (DLS) as the substrate for this task. The present experiments examined whether changes in DLS dopamine receptors might underlie the exercise-associated facilitation. Infusing a…

  20. Effects of spectral complexity and sound duration on automatic complex-sound pitch processing in humans - a mismatch negativity study.

    Science.gov (United States)

    Tervaniemi, M; Schröger, E; Saher, M; Näätänen, R

    2000-08-18

    The pitch of a spectrally rich sound is known to be more easily perceived than that of a sinusoidal tone. The present study compared the importance of spectral complexity and sound duration in facilitated pitch discrimination. The mismatch negativity (MMN), which reflects automatic neural discrimination, was recorded in response to a 2.5% pitch change in pure tones with only one sinusoidal frequency component (500 Hz) and in spectrally rich tones with three (500-1500 Hz) and five (500-2500 Hz) harmonic partials. During the recordings, subjects concentrated on watching a silent movie. In separate blocks, stimuli were 100 and 250 ms in duration. The MMN amplitude was enhanced with both spectrally rich sounds when compared with pure tones. The prolonged sound duration did not significantly enhance the MMN. This suggests that increased spectral rather than temporal information facilitates pitch processing of spectrally rich sounds.

  1. A Report on Applying EEGnet to Discriminate Human State Effects on Task Performance

    Science.gov (United States)

    2018-01-01

    Results indicate that EEGNet could discriminate the sleep history of a user. This could be used in future adaptive technologies to detect user fatigue. As the amount of battlefield technology continues to increase, Soldiers are faced with the daunting task of trying to integrate diverse information across numerous devices; this growing information burden has spawned a strong interest in "smart technology".

  2. Ethnic and gender discrimination in the private rental housing market in Finland: A field experiment.

    Directory of Open Access Journals (Sweden)

    Annamaria Öblom

    Full Text Available Ethnic and gender discrimination in a variety of markets has been documented in several populations. We conducted an online field experiment to examine ethnic and gender discrimination in the private rental housing market in Finland. We sent 1459 inquiries regarding 800 apartments. We compared responses to standardized apartment inquiries including fictive Arabic-sounding, Finnish-sounding or Swedish-sounding female or male names. We found evidence of discrimination against Arabic-sounding names and male names. Inquiries including Arabic-sounding male names had the lowest probability of receiving a response, receiving a response to about 16% of the inquiries made, while Finnish-sounding female names received a response to 42% of the inquiries. We did not find any evidence of the landlord's gender being associated with the discrimination pattern. The findings suggest that both ethnic and gender discrimination occur in the private rental housing market in Finland.

  3. Ethnic and gender discrimination in the private rental housing market in Finland: A field experiment.

    Science.gov (United States)

    Öblom, Annamaria; Antfolk, Jan

    2017-01-01

    Ethnic and gender discrimination in a variety of markets has been documented in several populations. We conducted an online field experiment to examine ethnic and gender discrimination in the private rental housing market in Finland. We sent 1459 inquiries regarding 800 apartments. We compared responses to standardized apartment inquiries including fictive Arabic-sounding, Finnish-sounding or Swedish-sounding female or male names. We found evidence of discrimination against Arabic-sounding names and male names. Inquiries including Arabic-sounding male names had the lowest probability of receiving a response, receiving a response to about 16% of the inquiries made, while Finnish-sounding female names received a response to 42% of the inquiries. We did not find any evidence of the landlord's gender being associated with the discrimination pattern. The findings suggest that both ethnic and gender discrimination occur in the private rental housing market in Finland.
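
    A rough sketch of how a response-rate gap like the one reported (16% vs. 42%) could be tested with a chi-square test on a 2x2 table; the per-group inquiry counts below are invented, since the abstract gives only overall totals and percentages, so this is illustrative arithmetic rather than the authors' analysis.

      from scipy.stats import chi2_contingency

      n_per_group = 240  # assumed number of inquiries per name group
      responses_arabic_male = round(0.16 * n_per_group)
      responses_finnish_female = round(0.42 * n_per_group)

      # Rows: name group; columns: response received vs. no response.
      table = [
          [responses_arabic_male, n_per_group - responses_arabic_male],
          [responses_finnish_female, n_per_group - responses_finnish_female],
      ]
      chi2, p, dof, expected = chi2_contingency(table)
      print(chi2, p)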

  4. Cognitive Control of Involuntary Distraction by Deviant Sounds

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Hebrero, Maria

    2013-01-01

    It is well established that a task-irrelevant sound (deviant sound) departing from an otherwise repetitive sequence of sounds (standard sounds) elicits an involuntary capture of attention and orienting response toward the deviant stimulus, resulting in the lengthening of response times in an ongoing task. Some have argued that this type of…

  5. Priming in implicit memory tasks: prior study causes enhanced discriminability, not only bias.

    Science.gov (United States)

    Zeelenberg, René; Wagenmakers, Eric-Jan M; Raaijmakers, Jeroen G W

    2002-03-01

    R. Ratcliff and G. McKoon (1995, 1996, 1997; R. Ratcliff, D. Allbritton, & G. McKoon, 1997) have argued that repetition priming effects are solely due to bias. They showed that prior study of the target resulted in a benefit in a later implicit memory task. However, prior study of a stimulus similar to the target resulted in a cost. The present study, using a 2-alternative forced-choice procedure, investigated the effect of prior study in an unbiased condition: Both alternatives were studied prior to their presentation in an implicit memory task. Contrary to a pure bias interpretation of priming, consistent evidence was obtained in 3 implicit memory tasks (word fragment completion, auditory word identification, and picture identification) that performance was better when both alternatives were studied than when neither alternative was studied. These results show that prior study results in enhanced discriminability, not only bias.

  6. Task-Modulated Cortical Representations of Natural Sound Source Categories

    DEFF Research Database (Denmark)

    Hjortkjær, Jens; Kassuba, Tanja; Madsen, Kristoffer Hougaard

    2018-01-01

    In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI...

  7. Coherence of the irrelevant-sound effect: individual profiles of short-term memory and susceptibility to task-irrelevant materials.

    Science.gov (United States)

    Elliott, Emily M; Cowan, Nelson

    2005-06-01

    We examined individual and developmental differences in the disruptive effects of irrelevant sounds on serial recall of printed lists. In Experiment 1, we examined adults (N = 205) receiving eight-item lists to be recalled. Although their susceptibility to disruption of recall by irrelevant sounds was only slightly related to memory span, regression analyses documented highly reliable individual differences in this susceptibility across speech and tone distractors, even with variance from span level removed. In Experiment 2, we examined adults (n = 64) and 8-year-old children (n = 63) receiving lists of a length equal to a predetermined span and one item shorter (span-1). We again found significant relationships between measures of span and susceptibility to irrelevant sounds, although in only two of the measures. We conclude that some of the cognitive processes helpful in performing a span task may not be beneficial in the presence of irrelevant sounds.

  8. Analysis, Synthesis, and Perception of Musical Sounds The Sound of Music

    CERN Document Server

    Beauchamp, James W

    2007-01-01

    Analysis, Synthesis, and Perception of Musical Sounds contains a detailed treatment of basic methods for analysis and synthesis of musical sounds, including the phase vocoder method, the McAulay-Quatieri frequency-tracking method, the constant-Q transform, and methods for pitch tracking, with several examples shown. Various aspects of musical sound spectra such as spectral envelope, spectral centroid, spectral flux, and spectral irregularity are defined and discussed. One chapter is devoted to the control and synthesis of spectral envelopes. Two advanced methods of analysis/synthesis are covered: "Sines Plus Transients Plus Noise" and "Spectrotemporal Reassignment". Methods for timbre morphing are also given. The last two chapters discuss the perception of musical sounds based on discrimination and multidimensional scaling timbre models.
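
    One of the spectral descriptors listed above, the spectral centroid, is easy to show concretely. The sketch below computes it for a toy harmonic tone; the sample rate, frame length and partial amplitudes are arbitrary choices for illustration, not values taken from the book.

      import numpy as np

      def spectral_centroid(frame, fs):
          # Magnitude-weighted mean frequency of the frame's spectrum.
          spectrum = np.abs(np.fft.rfft(frame))
          freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
          return np.sum(freqs * spectrum) / np.sum(spectrum)

      fs = 44_100
      t = np.arange(2048) / fs
      # Toy harmonic tone: 220 Hz fundamental plus two weaker partials.
      frame = (np.sin(2 * np.pi * 220 * t)
               + 0.5 * np.sin(2 * np.pi * 440 * t)
               + 0.25 * np.sin(2 * np.pi * 660 * t))
      print(spectral_centroid(frame, fs), "Hz")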

  9. Comparison of model and human observer performance for detection and discrimination tasks using dual-energy x-ray images

    International Nuclear Information System (INIS)

    Richard, Samuel; Siewerdsen, Jeffrey H.

    2008-01-01

    Model observer performance, computed theoretically using cascaded systems analysis (CSA), was compared to the performance of human observers in detection and discrimination tasks. Dual-energy (DE) imaging provided a wide range of acquisition and decomposition parameters for which observer performance could be predicted and measured. This work combined previously derived observer models (e.g., Fisher-Hotelling and non-prewhitening) with CSA modeling of the DE image noise-equivalent quanta (NEQ) and imaging task (e.g., sphere detection, shape discrimination, and texture discrimination) to yield theoretical predictions of detectability index (d′) and area under the receiver operating characteristic (AZ). Theoretical predictions were compared to human observer performance assessed using 9-alternative forced-choice tests to yield measurement of AZ as a function of DE image acquisition parameters (viz., allocation of dose between the low- and high-energy images) and decomposition technique [viz., three DE image decomposition algorithms: standard log subtraction (SLS), simple-smoothing of the high-energy image (SSH), and anti-correlated noise reduction (ACNR)]. Results showed good agreement between theory and measurements over a broad range of imaging conditions. The incorporation of an eye filter and internal noise in the observer models demonstrated improved correspondence with human observer performance. Optimal acquisition and decomposition parameters were shown to depend on the imaging task; for example, ACNR and SSH yielded the greatest performance in the detection of soft-tissue and bony lesions, respectively. This study provides encouraging evidence that Fourier-based modeling of NEQ computed via CSA and imaging task provides a good approximation to human observer performance for simple imaging tasks, helping to bridge the gap between Fourier metrics of detector performance (e.g., NEQ) and human observer performance.
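
    Under the usual equal-variance Gaussian observer assumption, the two figures of merit above are related by AZ = Φ(d′/√2). The abstract does not state which mapping the authors used, so the sketch below is only an illustration of that standard conversion with arbitrary d′ values.

      import numpy as np
      from scipy.stats import norm

      def az_from_dprime(d):
          # Equal-variance Gaussian model: AZ = Phi(d' / sqrt(2)).
          return norm.cdf(d / np.sqrt(2.0))

      for d in (0.5, 1.0, 2.0):   # arbitrary example detectability indices
          print(d, az_from_dprime(d))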

  10. Single-trial multisensory memories affect later auditory and visual object discrimination.

    Science.gov (United States)

    Thelen, Antonia; Talsma, Durk; Murray, Micah M

    2015-05-01

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization and the equivalence of effects when memory discrimination was being performed in the visual vs. auditory modality were at the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

  11. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    Science.gov (United States)

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  12. Temporal discrimination thresholds in adult-onset primary torsion dystonia: an analysis by task type and by dystonia phenotype.

    LENUS (Irish Health Repository)

    Bradley, D

    2012-01-01

    Adult-onset primary torsion dystonia (AOPTD) is an autosomal dominant disorder with markedly reduced penetrance. Sensory abnormalities are present in AOPTD and also in unaffected relatives, possibly indicating non-manifesting gene carriage (acting as an endophenotype). The temporal discrimination threshold (TDT) is the shortest time interval at which two stimuli are detected to be asynchronous. We aimed to compare the sensitivity and specificity of three different TDT tasks (visual, tactile and mixed/visual-tactile). We also aimed to examine the sensitivity of TDTs in different AOPTD phenotypes. To examine tasks, we tested TDT in 41 patients and 51 controls using visual (2 lights), tactile (non-painful electrical stimulation) and mixed (1 light, 1 electrical) stimuli. To investigate phenotypes, we examined 71 AOPTD patients (37 cervical dystonia, 14 writer's cramp, 9 blepharospasm, 11 spasmodic dysphonia) and 8 musician's dystonia patients. The upper limit of normal was defined as control mean +2.5 SD. In dystonia patients, the visual task detected abnormalities in 35/41 (85%), the tactile task in 35/41 (85%) and the mixed task in 26/41 (63%); the mixed task was less sensitive than the other two (p = 0.04). Specificity was 100% for the visual and tactile tasks. Abnormal TDTs were found in 36 of 37 (97.3%) cervical dystonia, 12 of 14 (85.7%) writer's cramp, 8 of 9 (88.8%) blepharospasm, 10 of 11 (90.1%) spasmodic dysphonia patients and 5 of 8 (62.5%) musicians. The visual and tactile tasks were found to be more sensitive than the mixed task. Temporal discrimination threshold results were comparable across common adult-onset primary torsion dystonia phenotypes, with lower sensitivity in the musicians.
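
    A minimal sketch of the normative cutoff described above (upper limit of normal = control mean + 2.5 SD) and of how sensitivity and specificity then follow; the threshold values are simulated, not the study's data.

      import numpy as np

      rng = np.random.default_rng(2)
      # Hypothetical TDTs in ms for 51 controls and 41 patients.
      control_tdt = rng.normal(loc=30.0, scale=8.0, size=51)
      patient_tdt = rng.normal(loc=55.0, scale=15.0, size=41)

      upper_limit = control_tdt.mean() + 2.5 * control_tdt.std(ddof=1)
      sensitivity = np.mean(patient_tdt > upper_limit)    # abnormal among patients
      specificity = np.mean(control_tdt <= upper_limit)   # normal among controls
      print(round(upper_limit, 1), sensitivity, specificity)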

  13. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    Science.gov (United States)

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  14. Long-term exposure to noise impairs cortical sound processing and attention control.

    Science.gov (United States)

    Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto

    2004-11-01

    Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are daily exposed to excessive noise. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.

  15. A discrimination task used as a novel method of testing decision-making behavior following traumatic brain injury.

    Science.gov (United States)

    Martens, Kris M; Vonder Haar, Cole; Hutsell, Blake A; Hoane, Michael R

    2012-10-10

    Traumatic brain injury (TBI) results in a multitude of deficits following injury. Some of the most pervasive in humans are the changes that affect frontally-mediated cognitive functioning, such as decision making. The assessment of decision-making behavior in rodents has been extensively tested in the field of the experimental analysis of behavior. However, due to the narrow therapeutic window following TBI, time-intensive operant paradigms are rarely incorporated into the battery of tests traditionally used, the majority of which assess motor and sensory functioning. The cognitive measures that are used are frequently limited to memory and do not account for changes in decision-making behavior. The purpose of the present study was to develop a simplified discrimination task that can assess deficits in decision-making behavior in rodents. For the task, rats were required to dig in cocoa-scented sand (versus unscented sand) for a reinforcer. Rats were given 12 sessions per day until a criterion level of 80% accuracy for 3 days straight was reached. Once the criterion was achieved, cortical contusion injuries were induced (frontal, parietal, or sham). Following a recovery period, the rats were re-tested on cocoa versus unscented sand. Upon reaching criterion, a reversal discrimination was evaluated in which the reinforcer was placed in unscented sand. Finally, a novel scent discrimination (basil versus coffee with basil reinforced), and a reversal (coffee) were evaluated. The results indicated that the Dig task is a simple experimental preparation that can be used to assess deficits in decision-making behavior following TBI.

  16. Anterior paracingulate and cingulate cortex mediates the effects of cognitive load on speech sound discrimination.

    Science.gov (United States)

    Gennari, Silvia P; Millman, Rebecca E; Hymers, Mark; Mattys, Sven L

    2018-06-11

    Perceiving speech while performing another task is a common challenge in everyday life. How the brain controls resource allocation during speech perception remains poorly understood. Using functional magnetic resonance imaging (fMRI), we investigated the effect of cognitive load on speech perception by examining brain responses of participants performing a phoneme discrimination task and a visual working memory task simultaneously. The visual task involved holding either a single meaningless image in working memory (low cognitive load) or four different images (high cognitive load). Performing the speech task under high load, compared to low load, resulted in decreased activity in pSTG/pMTG and increased activity in visual occipital cortex and two regions known to contribute to visual attention regulation: the superior parietal lobule (SPL) and the paracingulate and anterior cingulate gyrus (PaCG, ACG). Critically, activity in PaCG/ACG was correlated with performance in the visual task and with activity in pSTG/pMTG: Increased activity in PaCG/ACG was observed for individuals with poorer visual performance and with decreased activity in pSTG/pMTG. Moreover, activity in a pSTG/pMTG seed region showed psychophysiological interactions with areas of the PaCG/ACG, with stronger interaction in the high-load than the low-load condition. These findings show that the acoustic analysis of speech is affected by the demands of a concurrent visual task and that the PaCG/ACG plays a role in allocating cognitive resources to concurrent auditory and visual information. Copyright © 2018. Published by Elsevier Inc.

  17. Perception of acoustic scale and size in musical instrument sounds.

    Science.gov (United States)

    van Dinther, Ralph; Patterson, Roy D

    2006-10-01

    There is size information in natural sounds. For example, as humans grow in height, their vocal tracts increase in length, producing a predictable decrease in the formant frequencies of speech sounds. Recent studies have shown that listeners can make fine discriminations about which of two speakers has the longer vocal tract, supporting the view that the auditory system discriminates changes on the acoustic-scale dimension. Listeners can also recognize vowels scaled well beyond the range of vocal tracts normally experienced, indicating that perception is robust to changes in acoustic scale. This paper reports two perceptual experiments designed to extend research on acoustic scale and size perception to the domain of musical sounds: The first study shows that listeners can discriminate the scale of musical instrument sounds reliably, although not quite as well as for voices. The second experiment shows that listeners can recognize the family of an instrument sound which has been modified in pitch and scale beyond the range of normal experience. We conclude that processing of acoustic scale in music perception is very similar to processing of acoustic scale in speech perception.

  18. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    Science.gov (United States)

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  19. Effect of casino-related sound, red light and pairs on decision-making during the Iowa gambling task.

    Science.gov (United States)

    Brevers, Damien; Noël, Xavier; Bechara, Antoine; Vanavermaete, Nora; Verbanck, Paul; Kornreich, Charles

    2015-06-01

    Casino venues are often characterized by "warm" colors, reward-related sounds, and the presence of others. These factors have long been identified as key factors in energizing gambling, yet few empirical studies have examined their impact on gambling behaviors. Here, we aimed to explore the impact of combined red light and casino-related sounds, with or without the presence of another participant, on gambling-related behaviors. Gambling behavior was estimated with the Iowa Gambling Task (IGT). Eighty non-gambling participants took part in one of four experimental conditions (20 participants in each condition): (1) IGT without casino-related sound and under normal (white) light (control), (2) IGT with combined casino-related sound and red light (casino alone), (3) IGT with combined casino-related sound, red light and in front of another participant (casino competition-implicit), and (4) IGT with combined casino-related sound, red light and against another participant (casino competition-explicit). Results showed that, in contrast to the control condition, participants in the three "casino" conditions did not exhibit slower deck selection reaction times after losses than after rewards. Moreover, participants in the two "competition" conditions displayed lower deck selection reaction times after losses and rewards, as compared with the control and the "casino alone" conditions. These findings suggest that the casino environment may diminish the time used for reflecting and thinking before acting after losses. These findings are discussed along with methodological limitations, potential directions for future studies, and implications for enhancing prevention strategies for abnormal gambling.

  20. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task

    Directory of Open Access Journals (Sweden)

    Zhi Zou

    2017-11-01

    Full Text Available Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on modulating multisensory integrative processes using event-related potentials (ERPs), and the validity of the study was improved by including "noise" in the contrast conditions. Older and younger participants were involved in perceiving visual and/or auditory stimuli that contained spatial information. The participants responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were captured in each task trial, along with the accuracy and reaction time of the participants' motor responses. Older participants showed a greater extent of behavioral improvements in the multisensory (as opposed to unisensory) condition compared to their younger counterparts. Older participants were found to have fronto-centrally distributed super-additive P2, which was not the case for the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions was found to correlate significantly with performance on spatial discrimination. The results indicated that the age-related effect modulated the integrative process in the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial-discrimination processes to compensate for the compromised attention function caused by aging.
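
    The super-additivity criterion above is plain arithmetic: the multisensory response is compared against the sum of the unisensory responses. A toy illustration with invented amplitudes (not values from the study):

      # Hypothetical P2 amplitudes in microvolts.
      p2_audiovisual = 6.8
      p2_auditory = 2.9
      p2_visual = 2.7

      # Positive difference = super-additive, negative = sub-additive.
      superadditivity = p2_audiovisual - (p2_auditory + p2_visual)
      print("super-additive" if superadditivity > 0 else "sub-additive", superadditivity)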

  1. A Nonword Repetition Task for Speakers with Misarticulations: The Syllable Repetition Task (SRT)

    Science.gov (United States)

    Shriberg, Lawrence D.; Lohmeier, Heather L.; Campbell, Thomas F.; Dollaghan, Christine A.; Green, Jordan R.; Moore, Christopher A.

    2009-01-01

    Purpose: Conceptual and methodological confounds occur when non(sense) word repetition tasks are administered to speakers who do not have the target speech sounds in their phonetic inventories or who habitually misarticulate targeted speech sounds. In this article, the authors (a) describe a nonword repetition task, the Syllable Repetition Task…

  2. Face-gender discrimination is possible in the near-absence of attention.

    Science.gov (United States)

    Reddy, Leila; Wilken, Patrick; Koch, Christof

    2004-03-02

    The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentional demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.

  3. Intelligence and P3 Components of the Event-Related Potential Elicited during an Auditory Discrimination Task with Masking

    Science.gov (United States)

    De Pascalis, V.; Varriale, V.; Matteoli, A.

    2008-01-01

    The relationship between fluid intelligence (indexed by scores on Raven Progressive Matrices) and auditory discrimination ability was examined by recording event-related potentials from 48 women during the performance of an auditory oddball task with backward masking. High ability (HA) subjects exhibited shorter response times, greater response…

  4. Isolating Discriminant Neural Activity in the Presence of Eye Movements and Concurrent Task Demands

    Directory of Open Access Journals (Sweden)

    Jon Touryan

    2017-07-01

    Full Text Available A growing number of studies use the combination of eye-tracking and electroencephalographic (EEG) measures to explore the neural processes that underlie visual perception. In these studies, fixation-related potentials (FRPs) are commonly used to quantify early and late stages of visual processing that follow the onset of each fixation. However, FRPs reflect a mixture of bottom-up (sensory-driven) and top-down (goal-directed) processes, in addition to eye movement artifacts and unrelated neural activity. At present there is little consensus on how to separate this evoked response into its constituent elements. In this study we sought to isolate the neural sources of target detection in the presence of eye movements and over a range of concurrent task demands. Here, participants were asked to identify visual targets (Ts) amongst a grid of distractor stimuli (Ls), while simultaneously performing an auditory N-back task. To identify the discriminant activity, we used independent components analysis (ICA) for the separation of EEG into neural and non-neural sources. We then further separated the neural sources, using a modified measure-projection approach, into six regions of interest (ROIs): occipital, fusiform, temporal, parietal, cingulate, and frontal cortices. Using activity from these ROIs, we identified target from non-target fixations in all participants at a level similar to other state-of-the-art classification techniques. Importantly, we isolated the time course and spectral features of this discriminant activity in each ROI. In addition, we were able to quantify the effect of cognitive load on both fixation-locked potential and classification performance across regions. Together, our results show the utility of a measure-projection approach for separating task-relevant neural activity into meaningful ROIs within more complex contexts that include eye movements.
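
    A minimal sketch of the ICA step, using FastICA on simulated mixtures as a stand-in for multichannel EEG; the sources, mixing matrix and component count are invented, and a real pipeline would additionally use channel geometry and artifact scoring on top of this.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(3)
      n_samples = 2000
      t = np.linspace(0, 8, n_samples)
      # Three hypothetical sources: an oscillation, a slow drift, and spiky noise.
      sources = np.c_[np.sin(7 * t), 0.5 * t, rng.laplace(size=n_samples)]
      mixing = rng.normal(size=(3, 3))
      eeg_like = sources @ mixing.T      # "channels" are mixtures of the sources

      ica = FastICA(n_components=3, random_state=0)
      recovered = ica.fit_transform(eeg_like)   # estimated independent components
      print(recovered.shape)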

  5. Paintings discrimination by mice: Different strategies for different paintings.

    Science.gov (United States)

    Watanabe, Shigeru

    2017-09-01

    C57BL/6 mice were trained on simultaneous discrimination of paintings with multiple exemplars, using an operant chamber with a touch screen. The number of exemplars was successively increased up to six. Those mice trained in Kandinsky/Mondrian discrimination showed improved learning and generalization, whereas those trained in Picasso/Renoir discrimination showed no improvements in learning or generalization. These results suggest category-like discrimination in the Kandinsky/Mondrian task, but item-to-item discrimination in the Picasso/Renoir task. Mice maintained their discriminative behavior in a pixelization test with various paintings; however, mice in the Picasso/Renoir task showed poor performance in a test that employed scrambling processing. These results do not indicate that the discrimination strategy for any Kandinsky/Mondrian combination differed from that for any Picasso/Monet combination, but they suggest that the mice employed different discrimination strategies depending on the stimuli. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Sound-Symbolism Boosts Novel Word Learning

    Science.gov (United States)

    Lockwood, Gwilym; Dingemanse, Mark; Hagoort, Peter

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally occurring sound-symbolic words that depict sensory…

  7. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Science.gov (United States)

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images

  8. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Directory of Open Access Journals (Sweden)

    Guan Yu

    Full Text Available Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and

  9. Pitch discrimination associated with phonological awareness: Evidence from congenital amusia.

    Science.gov (United States)

    Sun, Yanan; Lu, Xuejing; Ho, Hao Tam; Thompson, William Forde

    2017-03-13

    Research suggests that musical skills are associated with phonological abilities. To further investigate this association, we examined whether phonological impairments are evident in individuals with poor music abilities. Twenty individuals with congenital amusia and 20 matched controls were assessed on a pure-tone pitch discrimination task, a rhythm discrimination task, and four phonological tests. Amusic participants showed deficits in discriminating pitch and discriminating rhythmic patterns that involve a regular beat. At a group level, these individuals performed similarly to controls on all phonological tests. However, eight amusics with severe pitch impairment, as identified by the pitch discrimination task, exhibited significantly worse performance than all other participants in phonological awareness. A hierarchical regression analysis indicated that pitch discrimination thresholds predicted phonological awareness beyond that predicted by phonological short-term memory and rhythm discrimination. In contrast, our rhythm discrimination task did not predict phonological awareness beyond that predicted by pitch discrimination thresholds. These findings suggest that accurate pitch discrimination is critical for phonological processing. We propose that deficits in early-stage pitch discrimination may be associated with impaired phonological awareness and we discuss the shared role of pitch discrimination for processing music and speech.
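
    The hierarchical-regression logic here amounts to asking how much variance a predictor adds beyond a baseline model. A rough sketch with simulated data (the variable names, sample size and effect sizes are assumptions, not the study's values):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 40
      stm = rng.normal(size=n)       # phonological short-term memory score
      rhythm = rng.normal(size=n)    # rhythm discrimination score
      pitch = rng.normal(size=n)     # pitch discrimination threshold
      phon = 0.3 * stm + 0.2 * rhythm - 0.5 * pitch + rng.normal(size=n)

      # Step 1: baseline predictors; Step 2: add pitch threshold.
      step1 = sm.OLS(phon, sm.add_constant(np.c_[stm, rhythm])).fit()
      step2 = sm.OLS(phon, sm.add_constant(np.c_[stm, rhythm, pitch])).fit()
      print(step1.rsquared, step2.rsquared, step2.rsquared - step1.rsquared)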

  10. Discriminative Transfer Learning for General Image Restoration

    KAUST Repository

    Xiao, Lei; Heide, Felix; Heidrich, Wolfgang; Schölkopf, Bernhard; Hirsch, Michael

    2018-01-01

    Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of input images). This makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single-pass discriminative training and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.

  11. Discriminative Transfer Learning for General Image Restoration

    KAUST Repository

    Xiao, Lei

    2018-04-30

    Recently, several discriminative learning approaches have been proposed for effective image restoration, achieving a convincing trade-off between image quality and computational efficiency. However, these methods require separate training for each restoration task (e.g., denoising, deblurring, demosaicing) and problem condition (e.g., noise level of input images). This makes it time-consuming and difficult to encompass all tasks and conditions during training. In this paper, we propose a discriminative transfer learning method that incorporates formal proximal optimization and discriminative learning for general image restoration. The method requires a single-pass discriminative training and allows for reuse across various problems and conditions while achieving an efficiency comparable to previous discriminative approaches. Furthermore, after being trained, our model can be easily transferred to new likelihood terms to solve untrained tasks, or be combined with existing priors to further improve image restoration quality.
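
    The "formal proximal optimization" ingredient can be illustrated with the textbook example of a proximal operator, soft-thresholding (the prox of the L1 norm), applied here as a one-step denoiser on a toy sparse signal; this generic sketch is not taken from the paper's solver.

      import numpy as np

      def prox_l1(v, lam):
          # Soft-thresholding: argmin_x 0.5*||x - v||^2 + lam*||x||_1
          return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

      rng = np.random.default_rng(5)
      clean = np.zeros(20)
      clean[[3, 7, 15]] = [2.0, -1.5, 3.0]              # sparse "image" values
      noisy = clean + rng.normal(scale=0.3, size=20)
      print(np.round(prox_l1(noisy, 0.3), 2))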

  12. Reinforcing and discriminative stimulus properties of music in goldfish.

    Science.gov (United States)

    Shinozuka, Kazutaka; Ono, Haruka; Watanabe, Shigeru

    2013-10-01

    This paper investigated whether music has reinforcing and discriminative stimulus properties in goldfish. Experiment 1 examined the discriminative stimulus properties of music. The subjects were successfully trained to discriminate between two pieces of music--Toccata and Fugue in D minor (BWV 565) by J. S. Bach and The Rite of Spring by I. Stravinsky. Experiment 2 examined the reinforcing properties of sounds, including BWV 565 and The Rite of Spring. We developed an apparatus for measuring spontaneous sound preference in goldfish. Music or noise stimuli were presented depending on the subject's position in the aquarium, and the time spent in each area was measured. The results indicated that the goldfish did not show consistent preferences for music, although they showed significant avoidance of noise stimuli. These results suggest that music has discriminative but not reinforcing stimulus properties in goldfish. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Vulnerability to the Irrelevant Sound Effect in Adult ADHD.

    Science.gov (United States)

    Pelletier, Marie-France; Hodgetts, Helen M; Lafleur, Martin F; Vincent, Annick; Tremblay, Sébastien

    2016-04-01

    An ecologically valid adaptation of the irrelevant sound effect paradigm was employed to examine the relative roles of short-term memory, selective attention, and sustained attention in ADHD. In all, 32 adults with ADHD and 32 control participants completed a serial recall task in silence or while ignoring irrelevant background sound. Serial recall performance in adults with ADHD was reduced relative to controls in both conditions. The degree of interference due to irrelevant sound was greater for adults with ADHD. Furthermore, a positive correlation was observed between task performance under conditions of irrelevant sound and the extent of attentional problems reported by patients on a clinical symptom scale. The results demonstrate that adults with ADHD exhibit impaired short-term memory and a low resistance to distraction; however, their capacity for sustained attention is preserved as the impact of irrelevant sound diminished over the course of the task. © The Author(s) 2013.

  14. Comparison of learning ability and memory retention in altricial (Bengalese finch, Lonchura striata var. domestica) and precocial (blue-breasted quail, Coturnix chinensis) birds using a color discrimination task.

    Science.gov (United States)

    Ueno, Aki; Suzuki, Kaoru

    2014-02-01

    The present study sought to assess the potential application of avian models with different developmental modes to studies on cognition and neuroscience. Six altricial Bengalese finches (Lonchura striata var. domestica) and eight precocial blue-breasted quails (Coturnix chinensis) were presented with color discrimination tasks to compare their respective faculties for learning and memory retention within the context of the two developmental modes. Tasks consisted of presenting birds with discriminative cues in the form of colored feeder lids, and birds were considered to have learned a task when 80% of their attempts at selecting the correctly colored lid in two consecutive blocks of 10 trials were successful. All of the finches successfully performed the required experimental tasks, whereas only half of the quails were able to execute the same tasks. In the learning test, finches required significantly fewer trials than quails to learn the task (finches: 13.5 ± 9.14 trials, quails: 45.8 ± 4.35 trials). In the memory retention tests, which were conducted 45 days after the learning test, finches retained the ability to discriminate between colors correctly (95.0 ± 4.47%), whereas quails did not retain any memory of the experimental procedure and so could not be tested. These results suggested that altricial and precocial birds both possess the faculty for learning and retaining discrimination-type tasks, but that altricial birds perform better than precocial birds in both faculties. The present findings imply that developmental mode is an important consideration for assessing the suitability of bird species for particular experiments. © 2013 Japanese Society of Animal Science.

  15. The Perception of Sound Movements as Expressive Gestures

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Korsgaard, Dannie

    2014-01-01

    This paper is a preliminary attempt to investigate the perception of sound movements as expressive gestures. The idea is that if sound movement is used as a musical parameter, a listener (or a subject) should be able to distinguish among different movements and she/he should be able to group them a...... by drawing it on a tablet. Preliminary results show that subjects could consistently group the stimuli, and that they primarily used paths and legato-staccato patterns to discriminate among different sound movements/expressive intention....

  16. Sound-symbolism boosts novel word learning

    NARCIS (Netherlands)

    Lockwood, G.F.; Dingemanse, M.; Hagoort, P.

    2016-01-01

    The existence of sound-symbolism (or a non-arbitrary link between form and meaning) is well-attested. However, sound-symbolism has mostly been investigated with nonwords in forced choice tasks, neither of which are representative of natural language. This study uses ideophones, which are naturally

  17. Neural responses in the primary auditory cortex of freely behaving cats while discriminating fast and slow click-trains.

    Science.gov (United States)

    Dong, Chao; Qin, Ling; Liu, Yongchun; Zhang, Xinan; Sato, Yu

    2011-01-01

    Repeated acoustic events are ubiquitous temporal features of natural sounds. To reveal the neural representation of the sound repetition rate, a number of electrophysiological studies have been conducted on various mammals and it has been proposed that both the spike-time and firing rate of primary auditory cortex (A1) neurons encode the repetition rate. However, previous studies rarely examined how the experimental animals perceive the difference in the sound repetition rate, and a caveat to these experiments is that they compared physiological data obtained from animals with psychophysical data obtained from humans. In this study, for the first time, we directly investigated acoustic perception and the underlying neural mechanisms in the same experimental animal by examining spike activities in the A1 of free-moving cats while performing a Go/No-go task to discriminate the click-trains at different repetition rates (12.5-200 Hz). As reported by previous studies on passively listening animals, A1 neurons showed both synchronized and non-synchronized responses to the click-trains. We further found that the neural performance estimated from the precise temporal information of synchronized units was good enough to distinguish all 16.7-200 Hz from the 12.5 Hz repetition rate; however, the cats showed declining behavioral performance with the decrease of the target repetition rate, indicating an increase of difficulty in discriminating two slower click-trains. Such behavioral performance was well explained by the firing rate of some synchronized and non-synchronized units. Trial-by-trial analysis indicated that A1 activity was not affected by the cat's judgment of behavioral response. Our results suggest that the main function of A1 is to effectively represent temporal signals using both spike timing and firing rate, while the cats may read out the rate-coding information to perform the task in this experiment.
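
    A small illustrative computation with synthetic spike times (not the recorded data): vector strength is one standard way to quantify how tightly spikes lock to a periodic click train, in the spirit of the synchronized responses described above.

```python
# Hedged sketch: vector strength of simulated spikes relative to a click rate.
import numpy as np

def vector_strength(spike_times, rate_hz):
    """Magnitude of the mean unit phasor: 1 = perfect locking, 0 = none."""
    phases = 2 * np.pi * rate_hz * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
rate = 12.5                                   # click rate in Hz
click_times = np.arange(0, 1, 1 / rate)       # one second of clicks
spikes = click_times + rng.normal(0, 0.002, click_times.size)  # 2 ms jitter
print(f"vector strength at {rate} Hz: {vector_strength(spikes, rate):.2f}")
```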

  19. Distraction and Facilitation--Two Faces of the Same Coin?

    Science.gov (United States)

    Wetzel, Nicole; Widmann, Andreas; Schroger, Erich

    2012-01-01

    Unexpected and task-irrelevant sounds can capture our attention and may cause distraction effects reflected by impaired performance in a primary task unrelated to the perturbing sound. The present auditory-visual oddball study examines the effect of the informational content of a sound on the performance in a visual discrimination task. The…

  20. Conditioned sounds enhance visual processing.

    Directory of Open Access Journals (Sweden)

    Fabrizio Leo

    Full Text Available This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, -50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

  1. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest a correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  2. Hippocampal-dependent memory in the plus-maze discriminative avoidance task: The role of spatial cues and CA1 activity.

    Science.gov (United States)

    Leão, Anderson H F F; Medeiros, André M; Apolinário, Gênedy K S; Cabral, Alícia; Ribeiro, Alessandra M; Barbosa, Flávio F; Silva, Regina H

    2016-05-01

    The plus-maze discriminative avoidance task (PMDAT) has been used to investigate interactions between aversive memory and an anxiety-like response in rodents. Suitable performance in this task depends on the activity of the basolateral amygdala, similar to other aversive-based memory tasks. However, the role of spatial cues and hippocampal-dependent learning in the performance of PMDAT remains unknown. Here, we investigated the role of proximal and distal cues in the retrieval of this task. Animals tested under misplaced proximal cues had diminished performance, and animals tested under both misplaced proximal cues and absent distal cues could not discriminate the aversive arm. We also assessed the role of the dorsal hippocampus (CA1) in this aversive memory task. Temporary bilateral inactivation of dorsal CA1 was conducted with muscimol (0.05 μg, 0.1 μg, and 0.2 μg) prior to the training session. While the acquisition of the task was not altered, muscimol impaired the performance in the test session and reduced the anxiety-like response in the training session. We also performed a spreading analysis of a fluorophore-conjugated muscimol to confirm selective inhibition of CA1. In conclusion, both distal and proximal cues are required to retrieve the task, with the latter being more relevant to spatial orientation. Dorsal CA1 activity is also required for aversive memory formation in this task, and interfered with the anxiety-like response as well. Importantly, both effects were detected by different parameters in the same paradigm, endorsing the previous findings of independent assessment of aversive memory and anxiety-like behavior in the PMDAT. Taken together, these findings suggest that the PMDAT probably requires an integration of multiple systems for memory formation, resembling an episodic-like memory rather than a pure conditioning behavior. Furthermore, the concomitant and independent assessment of emotionality and memory in rodents is relevant to

  3. Language Experience Affects Grouping of Musical Instrument Sounds

    Science.gov (United States)

    Bhatara, Anjali; Boll-Avetisyan, Natalie; Agus, Trevor; Höhle, Barbara; Nazzi, Thierry

    2016-01-01

    Language experience clearly affects the perception of speech, but little is known about whether these differences in perception extend to non-speech sounds. In this study, we investigated rhythmic perception of non-linguistic sounds in speakers of French and German using a grouping task, in which complexity (variability in sounds, presence of…

  4. Effects of musical training and hearing loss on pitch discrimination

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Bianchi, Federica; Dau, Torsten

    2018-01-01

    content of the sound and whether the harmonics are resolved by the auditory frequency analysis operated by cochlear processing. F0DLs are also heavily influenced by the amount of musical training received by the listener and by the spectrotemporal auditory processing deficits that often accompany...... sensorineural hearing loss. This paper reviews the latest evidence for how musical training and hearing loss affect pitch discrimination performance, based on behavioral F0DL experiments with complex tones containing either resolved or unresolved harmonics, carried out in listeners with different degrees...... of hearing loss and musicianship. A better understanding of the interaction between these two factors is crucial to determine whether auditory training based on musical tasks or targeted towards specific auditory cues may be useful to hearing-impaired patients undergoing hearing rehabilitation....

  5. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. COMMUNICATION: On variability and use of rat primary motor cortex responses in behavioral task discrimination

    Science.gov (United States)

    Jensen, Winnie; Rousche, Patrick J.

    2006-03-01

    The success of a cortical motor neuroprosthetic system will rely on the system's ability to effectively execute complex motor tasks in a changing environment. Invasive, intra-cortical electrodes have been successfully used to predict joint movement and grip force of a robotic arm/hand with a non-human primate (Chapin J K, Moxon K A, Markowitz R S and Nicolelis M A L 1999 Real-time control of a robotic arm using simultaneously recorded neurons in the motor cortex Nat. Neurosci. 2 664-70). It is well known that cortical encoding occurs with a high degree of cortical plasticity and depends on both the functional and behavioral context. Questions on the expected robustness of future motor prosthesis systems therefore still remain. The objective of the present work was to study the effect of minor changes in functional movement strategies on the M1 encoding. We compared the M1 encoding in freely moving, non-constrained animals that performed two similar behavioral tasks with the same end-goal, and investigated if these behavioral tasks could be discriminated based on the M1 recordings. The rats depressed a response paddle either with a set of restrictive bars ('WB') or without the bars ('WOB') placed in front of the paddle. The WB task required changes in the motor strategy to complete the paddle press and resulted in highly stereotyped movements, whereas in the WOB task the movement strategy was not restricted. Neural population activity was recorded from 16-channel micro-wire arrays and data up to 200 ms before a paddle hit were analyzed off-line. The analysis showed a significant neural firing difference between the two similar WB and WOB tasks, and using principal component analysis it was possible to distinguish between the two tasks with a best classification at 76.6%. While the results are dependent upon a small, randomly sampled neural population, they indicate that information about similar behavioral tasks may be extracted from M1 based on relatively few
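
    The sketch below is a loose analogue of the classification analysis described above, assuming scikit-learn and synthetic spike counts in place of the 16-channel recordings: reduce trial-wise population activity with principal component analysis, then classify the two behavioral tasks with cross-validation.

```python
# Hedged sketch: PCA + logistic regression on simulated per-trial spike counts.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_trials, n_channels = 200, 16
X = rng.poisson(5.0, size=(n_trials, n_channels)).astype(float)  # spike counts
y = rng.integers(0, 2, size=n_trials)                            # 0 = WB, 1 = WOB
X[y == 1] += 0.8                                                 # small injected task difference

clf = make_pipeline(PCA(n_components=5), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated task classification accuracy: {scores.mean():.3f}")
```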

  7. Eye movements discriminate fatigue due to chronotypical factors and time spent on task--a double dissociation.

    Directory of Open Access Journals (Sweden)

    Dario Cazzoli

    Full Text Available Systematic differences in circadian rhythmicity are thought to be a substantial factor determining inter-individual differences in fatigue and cognitive performance. The synchronicity effect (when time of testing coincides with the respective circadian peak period) seems to play an important role. Eye movements have been shown to be a reliable indicator of fatigue due to sleep deprivation or time spent on cognitive tasks. However, eye movements have not been used so far to investigate the circadian synchronicity effect and the resulting differences in fatigue. The aim of the present study was to assess how different oculomotor parameters in a free visual exploration task are influenced by: (a) fatigue due to chronotypical factors (being a 'morning type' or an 'evening type'); (b) fatigue due to the time spent on task. Eighteen healthy participants performed a free visual exploration task of naturalistic pictures while their eye movements were recorded. The task was performed twice, once at their optimal and once at their non-optimal time of the day. Moreover, participants rated their subjective fatigue. The non-optimal time of the day triggered a significant and stable increase in the mean visual fixation duration during the free visual exploration task for both chronotypes. The increase in the mean visual fixation duration correlated with the difference in subjectively perceived fatigue at optimal and non-optimal times of the day. Conversely, the mean saccadic speed significantly and progressively decreased throughout the duration of the task, but was not influenced by the optimal or non-optimal time of the day for both chronotypes. The results suggest that different oculomotor parameters are discriminative for fatigue due to different sources. A decrease in saccadic speed seems to reflect fatigue due to time spent on task, whereas an increase in mean fixation duration reflects a lack of synchronicity between chronotype and time of the day.
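
    For illustration only, with made-up event lists rather than real eye-tracking data: the two oculomotor measures discussed above reduce to simple summaries of a fixation/saccade parse of the gaze trace.

```python
# Hedged sketch: mean fixation duration (sensitive to time of day in the study)
# and mean saccadic speed (sensitive to time on task), from hypothetical events.
import numpy as np

fixation_durations_ms = np.array([210, 260, 305, 280, 240])   # hypothetical values
saccade_amplitudes_deg = np.array([4.2, 6.1, 3.8, 5.0])       # hypothetical values
saccade_durations_ms = np.array([38, 52, 35, 44])             # hypothetical values

mean_fixation_duration = fixation_durations_ms.mean()
mean_saccadic_speed = (saccade_amplitudes_deg / (saccade_durations_ms / 1000.0)).mean()

print(f"mean fixation duration: {mean_fixation_duration:.0f} ms")
print(f"mean saccadic speed: {mean_saccadic_speed:.0f} deg/s")
```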

  8. Modeling phoneme perception. II: A model of stop consonant discrimination.

    Science.gov (United States)

    van Hessen, A J; Schouten, M E

    1992-10-01

    Combining elements from two existing theories of speech sound discrimination, dual process theory (DPT) and trace context theory (TCT), a new theory, called phoneme perception theory, is proposed, consisting of a long-term phoneme memory, a context-coding memory, and a trace memory, each with its own time constants. This theory is tested by means of stop-consonant discrimination data in which interstimulus interval (ISI; values of 100, 300, and 2000 ms) is an important variable. It is shown that discrimination in which labeling plays an important part (2IFC and AX between category) benefits from increased ISI, whereas discrimination in which only sensory traces are compared (AX within category), decreases with increasing ISI. The theory is also tested on speech discrimination data from the literature in which ISI is a variable [Pisoni, J. Acoust. Soc. Am. 36, 277-282 (1964); Cowan and Morse, J. Acoust. Soc. Am. 79, 500-507 (1986)]. It is concluded that the number of parameters in trace context theory is not sufficient to account for most speech-sound discrimination data and that a few additional assumptions are needed, such as a form of sublabeling, in which subjects encode the quality of a stimulus as a member of a category, and which requires processing time.

  9. The dark side of ambiguous discrimination: how state self-esteem moderates emotional and behavioural responses to ambiguous and unambiguous discrimination.

    Science.gov (United States)

    Cihangir, Sezgin; Barreto, Manuela; Ellemers, Naomi

    2010-03-01

    Two experiments examine how experimentally induced differences in state self-esteem moderate emotional and behavioural responses to ambiguous and unambiguous discrimination. Study 1 (N=108) showed that participants who were exposed to ambiguous discrimination report more negative self-directed emotions when they have low compared to high self-esteem. These differences did not emerge when participants were exposed to unambiguous discrimination. Study 2 (N=118) additionally revealed that self-esteem moderated the effect of ambiguous discrimination on self-concern, task performance, and self-stereotyping. Results show that ambiguous discrimination caused participants with low self-esteem to report more negative self-directed emotions, more self-concern, an inferior task performance, and more self-stereotyping, compared to participants in the high self-esteem condition. Emotional and behavioural responses to unambiguous discrimination did not depend on the induced level of self-esteem in these studies.

  10. Distraction by novel and pitch-deviant sounds in children

    Directory of Open Access Journals (Sweden)

    Nicole Wetzel

    2016-12-01

    Full Text Available The control of attention is an important part of our executive functions and enables us to focus on relevant information and to ignore irrelevant information. The ability to shield against distraction by task-irrelevant sounds is suggested to mature during school age. The present study investigated the developmental time course of distraction in three groups of children aged 7 – 10 years. Two different types of distractor sounds that have been frequently used in auditory attention research – novel environmental and pitch-deviant sounds – were presented within an oddball paradigm while children performed a visual categorization task. Reaction time measurements revealed decreasing distractor-related impairment with age. Novel environmental sounds impaired performance in the categorization task more than pitch-deviant sounds. The youngest children showed a pronounced decline of novel-related distraction effects throughout the experimental session. Such a significant decline as a result of practice was not observed in the pitch-deviant condition and not in older children. We observed no correlation between cross-modal distraction effects and performance in standardized tests of concentration and visual distraction. Results of the cross-modal distraction paradigm indicate that separate mechanisms underlying the processing of novel environmental and pitch-deviant sounds develop with different time courses and that these mechanisms develop considerably within a few years in middle childhood.

  11. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    Science.gov (United States)

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.

  12. Different Neuroplasticity for Task Targets and Distractors

    Science.gov (United States)

    Spingath, Elsie Y.; Kang, Hyun Sug; Plummer, Thane; Blake, David T.

    2011-01-01

    Adult learning-induced sensory cortex plasticity results in enhanced action potential rates in neurons that have the most relevant information for the task, or those that respond strongly to one sensory stimulus but weakly to its comparison stimulus. Current theories suggest this plasticity is caused when target stimulus evoked activity is enhanced by reward signals from neuromodulatory nuclei. Prior work has found evidence suggestive of nonselective enhancement of neural responses, and suppression of responses to task distractors, but the differences in these effects between detection and discrimination have not been directly tested. Using cortical implants, we defined physiological responses in macaque somatosensory cortex during serial, matched, detection and discrimination tasks. Nonselective increases in neural responsiveness were observed during detection learning. Suppression of responses to task distractors was observed during discrimination learning, and this suppression was specific to cortical locations that sampled responses to the task distractor before learning. Changes in receptive field size were measured as the area of skin that had a significant response to a constant magnitude stimulus, and these areal changes paralleled changes in responsiveness. From before detection learning until after discrimination learning, the enduring changes were selective suppression of cortical locations responsive to task distractors, and nonselective enhancement of responsiveness at cortical locations selective for target and control skin sites. A comparison of observations in prior studies with the observed plasticity effects suggests that the non-selective response enhancement and selective suppression suffice to explain known plasticity phenomena in simple spatial tasks. This work suggests that differential responsiveness to task targets and distractors in primary sensory cortex for a simple spatial detection and discrimination task arise from nonselective

  14. Social cognition and African American men: The roles of perceived discrimination and experimenter race on task performance.

    Science.gov (United States)

    Nagendra, Arundati; Twery, Benjamin L; Neblett, Enrique W; Mustafic, Hasan; Jones, Tevin S; Gatewood, D'Angelo; Penn, David L

    2018-01-01

    The Social Cognition Psychometric Evaluation (SCOPE) study consists of a battery of eight tasks selected to measure social-cognitive deficits in individuals with schizophrenia. The battery is currently in a multisite validation process. While the SCOPE study collects basic demographic data, more nuanced race-related factors might artificially inflate cross-cultural differences in social cognition. As an initial step, we investigated whether race, independent of mental illness status, affects performance on the SCOPE battery. Thus, we examined the effects of perceived discrimination and experimenter race on the performance of 51 non-clinical African American men on the SCOPE battery. Results revealed that these factors impacted social cognitive task performance. Specifically, participants performed better on a skills-based task factor in the presence of Black experimenters, and frequency of perceived racism predicted increased perception of hostility in negative interpersonal situations with accidental causes. Thus, race-related factors are important to identify and explore in the measurement of social cognition in African Americans. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

    Science.gov (United States)

    Winn, Matthew B; Won, Jong Ho; Moon, Il Joon

    This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of
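
    A hedged sketch of the logistic-regression step described above, using a simulated listener rather than the study's data: categorization responses along a /ba/-/da/ cue continuum are fit with a logistic model, and the slope serves as an index of perceptual sensitivity to the acoustic cue.

```python
# Hedged sketch: fit a psychometric-style logistic model to simulated responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
cue = np.repeat(np.linspace(-1, 1, 7), 20)       # 7 continuum steps, 20 trials each
p_da = 1 / (1 + np.exp(-3.0 * cue))              # steeper function = more sensitive listener
responses = rng.binomial(1, p_da)                # 1 = "da", 0 = "ba"

model = LogisticRegression().fit(cue.reshape(-1, 1), responses)
print(f"cue sensitivity (logistic slope): {model.coef_[0][0]:.2f}")
```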

  16. Memory for pictures and sounds: independence of auditory and visual codes.

    Science.gov (United States)

    Thompson, V A; Paivio, A

    1994-09-01

    Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.

  17. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  18. Pattern recognition in bees : orientation discrimination

    NARCIS (Netherlands)

    Hateren, J.H. van; Srinivasan, M.V.; Wait, P.B.

    1990-01-01

    Honey bees (Apis mellifera, worker) were trained to discriminate between two random gratings oriented perpendicularly to each other. This task was quickly learned with vertical, horizontal, and oblique gratings. After being trained on perpendicularly-oriented random gratings, bees could discriminate

  19. The location discrimination reversal task in mice is sensitive to deficits in performance caused by aging, pharmacological and other challenges.

    Science.gov (United States)

    Graf, Radka; Longo, Jami L; Hughes, Zoë A

    2018-06-01

    Deficits in hippocampal-mediated pattern separation are one aspect of cognitive function affected in schizophrenia (SZ) or Alzheimer's disease (AD). To develop novel therapies, it is beneficial to explore this specific aspect of cognition preclinically. The location discrimination reversal (LDR) task is a hippocampal-dependent operant paradigm that evaluates spatial learning and cognitive flexibility using touchscreens. Here we assessed baseline performance as well as multimodal disease-relevant manipulations in mice. Mice were trained to discriminate between the locations of two images where the degree of separation impacted performance. Administration of putative pro-cognitive agents was unable to improve performance at narrow separation. Furthermore, a range of disease-relevant manipulations were characterized to assess whether performance could be impaired and restored. Pertinent to the cholinergic loss in AD, scopolamine (0.1 mg/kg) produced a disruption in LDR, which was attenuated by donepezil (1 mg/kg). Consistent with NMDA hypofunction in cognitive impairment associated with SZ, MK-801 (0.1 mg/kg) also disrupted performance; however, this deficit was not modified by rolipram. Microdeletion of genes associated with SZ (22q11) resulted in impaired performance, which was restored by rolipram (0.032 mg/kg). Since aging and inflammation affect cognition and are risk factors for AD, these aspects were also evaluated. Aged mice were slower to acquire the task than young mice and did not reach the same level of performance. A systemic inflammatory challenge (lipopolysaccharide (LPS), 1 mg/kg) produced prolonged (7 days) deficits in the LDR task. These data suggest that LDR task is a valuable platform for evaluating disease-relevant deficits in pattern separation and offers potential for identifying novel therapies.

  20. Speech Discrimination in Preschool Children: A Comparison of Two Tasks.

    Science.gov (United States)

    Menary, Susan; And Others

    1982-01-01

    Eleven four-year-old children were tested for discrimination of the following word pairs: rope/robe, seat/seed, pick/pig, ice/eyes, and mouse/mouth. All word pairs were found to be discriminable, but performance on seat/seed and mouse/mouth was inferior to that of the other word pairs. (Author)

  1. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  2. Meta-analytic review of the development of face discrimination in infancy: Face race, face gender, infant age, and methodology moderate face discrimination.

    Science.gov (United States)

    Sugden, Nicole A; Marquis, Alexandra R

    2017-11-01

    Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Toward Inverse Control of Physics-Based Sound Synthesis

    Science.gov (United States)

    Pfalz, A.; Berdahl, E.

    2017-05-01

    Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
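
    A minimal sketch of the kind of LSTM inverse-control mapping described above, assuming PyTorch and toy tensors in place of real audio features and measured gestures; it is not the authors' model or training setup.

```python
# Hedged sketch: an LSTM that maps frames of synthesizer output (audio features)
# back to the gesture signal that would have produced them.
import torch
import torch.nn as nn

class InverseController(nn.Module):
    def __init__(self, audio_dim=64, hidden=128, gesture_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, gesture_dim)

    def forward(self, audio_frames):              # (batch, time, audio_dim)
        h, _ = self.lstm(audio_frames)
        return self.out(h)                        # predicted gesture per frame

model = InverseController()
audio = torch.randn(8, 100, 64)                   # toy batch of audio feature frames
gestures = torch.randn(8, 100, 2)                 # corresponding gesture targets
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(audio), gestures)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```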

  4. Imperfect pitch: Gabor's uncertainty principle and the pitch of extremely brief sounds.

    Science.gov (United States)

    Hsieh, I-Hui; Saberi, Kourosh

    2016-02-01

    How brief must a sound be before its pitch is no longer perceived? The uncertainty tradeoff between temporal and spectral resolution (Gabor's principle) limits the minimum duration required for accurate pitch identification or discrimination. Prior studies have reported that pitch can be extracted from sinusoidal pulses as brief as half a cycle. This finding has been used in a number of classic papers to develop models of pitch encoding. We have found that phase randomization, which eliminates timbre confounds, degrades this ability to chance, raising serious concerns over the foundation on which classic pitch models have been built. The current study investigated whether subthreshold pitch cues may still exist in partial-cycle pulses revealed through statistical integration in a time series containing multiple pulses. To this end, we measured frequency-discrimination thresholds in a two-interval forced-choice task for trains of partial-cycle random-phase tone pulses. We found that residual pitch cues exist in these pulses but discriminating them requires an order of magnitude (ten times) larger frequency difference than that reported previously, necessitating a re-evaluation of pitch models built on earlier findings. We also found that as pulse duration is decreased to less than two cycles its pitch becomes biased toward higher frequencies, consistent with predictions of an auto-correlation model of pitch extraction.
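
    For context, such frequency-difference thresholds are commonly estimated with an adaptive staircase; the sketch below simulates a generic 2-down/1-up track against a made-up listener, and the study's actual procedure may well differ.

```python
# Hedged sketch: a 2-down/1-up staircase converging near the simulated listener's
# frequency-difference threshold in a two-interval forced-choice task.
import numpy as np

rng = np.random.default_rng(4)
true_threshold = 12.0                              # Hz, simulated listener

def p_correct(delta_f):
    # simple psychometric function rising from chance (0.5) to 1.0
    return 0.5 + 0.5 / (1 + np.exp(-(delta_f - true_threshold) / 3.0))

delta_f, step = 40.0, 4.0
direction, correct_run, reversals = -1, 0, []
for _ in range(200):
    if rng.random() < p_correct(delta_f):          # correct response
        correct_run += 1
        if correct_run == 2:                       # two correct in a row -> harder
            correct_run = 0
            if direction == +1:
                reversals.append(delta_f)
            direction = -1
            delta_f = max(delta_f - step, 0.5)
    else:                                          # incorrect -> easier
        correct_run = 0
        if direction == -1:
            reversals.append(delta_f)
        direction = +1
        delta_f += step

print(f"estimated threshold: {np.mean(reversals[-8:]):.1f} Hz")
```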

  5. Auditory velocity discrimination in the horizontal plane at very high velocities.

    Science.gov (United States)

    Frissen, Ilja; Féron, François-Xavier; Guastavino, Catherine

    2014-10-01

    We determined velocity discrimination thresholds and Weber fractions for sounds revolving around the listener at very high velocities. Sounds used were a broadband white noise and two harmonic sounds with fundamental frequencies of 330 Hz and 1760 Hz. Experiment 1 used velocities ranging between 288°/s and 720°/s in an acoustically treated room and Experiment 2 used velocities between 288°/s and 576°/s in a highly reverberant hall. A third experiment addressed potential confounds in the first two experiments. The results show that people can reliably discriminate velocity at very high velocities and that both thresholds and Weber fractions decrease as velocity increases. These results violate Weber's law but are consistent with the empirical trend observed in the literature. While thresholds for the noise and 330 Hz harmonic stimulus were similar, those for the 1760 Hz harmonic stimulus were substantially higher. There were no reliable differences in velocity discrimination between the two acoustical environments, suggesting that auditory motion perception at high velocities is robust against the effects of reverberation. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Anti-discrimination Analysis Using Privacy Attack Strategies

    KAUST Repository

    Ruggieri, Salvatore; Hajian, Sara; Kamiran, Faisal; Zhang, Xiangliang

    2014-01-01

    Social discrimination discovery from data is an important task to identify illegal and unethical discriminatory patterns towards protected-by-law groups, e.g., ethnic minorities. We deploy privacy attack strategies as tools for discrimination

  7. Noise detection during heart sound recording using periodicity signatures

    International Nuclear Information System (INIS)

    Kumar, D; Carvalho, P; Paiva, R P; Henriques, J; Antunes, M

    2011-01-01

    Heart sound is a valuable biosignal for diagnosis of a large set of cardiac diseases. Ambient and physiological noise interference is one of the most usual and highly probable incidents during heart sound acquisition. It tends to change the morphological characteristics of heart sound that may carry important information for heart disease diagnosis. In this paper, we propose a new method applicable in real time to detect ambient and internal body noises manifested in heart sound during acquisition. The algorithm is developed on the basis of the periodic nature of heart sounds and physiologically inspired criteria. A small segment of uncontaminated heart sound exhibiting periodicity in time as well as in the time-frequency domain is first detected and applied as a reference signal in discriminating noise from the sound. The proposed technique has been tested with a database of heart sounds collected from 71 subjects with several types of heart disease inducing several noises during recording. The achieved average sensitivity and specificity are 95.88% and 97.56%, respectively
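
    As a loose illustration of the periodicity idea (not the authors' algorithm), the sketch below flags a heart-sound envelope segment as noisy when its normalized autocorrelation has no strong peak within a plausible heart-period range; the thresholds and signals are made up.

```python
# Hedged sketch: autocorrelation-based periodicity check on a sound envelope.
import numpy as np

def is_clean_segment(envelope, fs, min_period=0.4, max_period=1.5, min_peak=0.3):
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac /= ac[0]                                    # normalize so lag 0 equals 1
    lo, hi = int(min_period * fs), int(max_period * fs)
    return ac[lo:hi].max() >= min_peak             # strong periodic peak -> "clean"

fs = 200                                           # Hz, envelope sampling rate
t = np.arange(0, 6, 1 / fs)
periodic = np.abs(np.sin(2 * np.pi * 1.0 * t))     # toy envelope beating every 0.5 s
noise = np.abs(np.random.default_rng(5).normal(size=t.size))
print(is_clean_segment(periodic, fs), is_clean_segment(noise, fs))
```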

  8. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior 2nd edition provides a comprehensive up-to-date collection of contributions, covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  9. The development of infants' use of property-poor sounds to individuate objects.

    Science.gov (United States)

    Wilcox, Teresa; Smith, Tracy R

    2010-12-01

    There is evidence that infants as young as 4.5 months use property-rich but not property-poor sounds as the basis for individuating objects (Wilcox, Woods, Tuggy, & Napoli, 2006). The current research sought to identify the age at which infants demonstrate the capacity to use property-poor sounds. Using the task of Wilcox et al., infants aged 7 and 9 months were tested. The results revealed that 9- but not 7-month-olds demonstrated sensitivity to property-poor sounds (electronic tones) in an object individuation task. Additional results confirmed that the younger infants were sensitive to property-rich sounds (rattle sounds). These are the first positive results obtained with property-poor sounds in infants and lay the foundation for future research to identify the underlying basis for the developmental hierarchy favoring property-rich over property-poor sounds and possible mechanisms for change. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Early postnatal x-irradiation of the hippocampus and discrimination learning in adult rats

    International Nuclear Information System (INIS)

    Gazzara, R.A.; Altman, J.

    1981-01-01

    Rats with X-irradiation-produced degranulation of the hippocampal dentate gyrus were trained in the acquisition and reversal of simultaneous visual and tactile discriminations in a T-maze. These experiments employed the same treatment, apparatus, and procedure but varied in task difficulty. In the brightness and roughness discriminations, the irradiated rats were not handicapped in acquiring or reversing discriminations of low or low-moderate task difficulty. However, these rats were handicapped in acquiring and reversing discriminations of moderate and high task difficulty. In a Black/White discrimination, in which the stimuli were restricted to the goal-arm walls, the irradiated rats were handicapped in the acquisition (low task difficulty) and reversal (moderate task difficulty) phases of the task. These results suggest that the irradiated rats were not handicapped when the noticeability of the stimuli was high, irrespective of modality used, but were handicapped when the noticeability of the stimuli was low. In addition, these results are consistent with the hypothesis that rats with hippocampal damage are inattentive due to hyperactivity

  11. Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users

    Science.gov (United States)

    Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: -Automatic brain responses to musical feature changes

  12. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  13. Assessment of sound quality perception in cochlear implant users during music listening.

    Science.gov (United States)

    Roy, Alexis T; Jiradejvong, Patpong; Carver, Courtney; Limb, Charles J

    2012-04-01

    Although cochlear implant (CI) users frequently report deterioration of sound quality when listening to music, few methods exist to quantify these subjective claims. 1) To design a novel research method for quantifying sound quality perception in CI users during music listening; 2) To validate this method by assessing one attribute of music perception, bass frequency perception, which is hypothesized to be relevant to overall musical sound quality perception. Limitations in bass frequency perception contribute to CI-mediated sound quality deteriorations. The proposed method will quantify this deterioration by measuring CI users' impaired ability to make sound quality discriminations among musical stimuli with variable amounts of bass frequency removal. A method commonly used in the audio industry (multiple stimulus with hidden reference and anchor [MUSHRA]) was adapted for CI users, referred to as CI-MUSHRA. CI users and normal hearing controls were presented with 7 sound quality versions of a musical segment: 5 high pass filter cutoff versions (200-, 400-, 600-, 800-, 1000-Hz) with decreasing amounts of bass information, an unaltered version ("hidden reference"), and a highly altered version (1,000-1,200 Hz band pass filter; "anchor"). Participants provided sound quality ratings between 0 (very poor) and 100 (excellent) for each version; ratings reflected differences in perceived sound quality among stimuli. CI users had greater difficulty making overall sound quality discriminations as a function of bass frequency loss than normal hearing controls, as demonstrated by a significantly weaker correlation between bass frequency content and sound quality ratings. In particular, CI users could not perceive sound quality difference among stimuli missing up to 400 Hz of bass frequency information. Bass frequency impairments contribute to sound quality deteriorations during music listening for CI users. CI-MUSHRA provided a systematic and quantitative assessment of this
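    For intuition, the stimulus manipulation described above (progressively removing bass information with high-pass filters at 200-1000 Hz cutoffs) can be sketched as follows. This is an illustrative reconstruction, not the authors' exact signal processing; the filter order, sampling rate, and the toy two-tone "music" segment are all assumptions standing in for the real excerpts.

```python
# Illustrative sketch of generating high-pass-filtered stimulus versions at the
# cutoffs listed above (not the authors' exact processing chain).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def high_pass(x, fs, cutoff_hz, order=4):
    """Zero-phase high-pass filter removing content below cutoff_hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 44100
t = np.arange(fs) / fs                                          # 1 s of toy "music"
segment = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
versions = {c: high_pass(segment, fs, c) for c in (200, 400, 600, 800, 1000)}
print({c: round(float(np.std(v)), 3) for c, v in versions.items()})
```

    Each filtered version would then be rated alongside the hidden reference and the band-pass anchor, as in the CI-MUSHRA procedure described above.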

  14. Effects of noise and task loading on a communication task

    Science.gov (United States)

    Orrell, Dean H., II

    Previous research had shown the effect of noise on a single communication task. This research has been criticized as not being representative of a real-world situation, since subjects allocated all of their attention to only one task. In the present study, the effect of adding a loading task to a standard noise-communication paradigm was investigated. Subjects performed both a communication task (Modified Rhyme Test; House et al. 1965) and a short-term memory task (Sternberg, 1969) in simulated levels of aircraft noise (95, 105 and 115 dB overall sound pressure level (OASPL)). Task loading was varied with Sternberg's task by requiring subjects to memorize one, four, or six alphanumeric characters. Simulated aircraft noise was varied between levels of 95, 105 and 115 dB OASPL using a pink noise source. Results show that the addition of Sternberg's task had little effect on the intelligibility of the communication task, while response time for the communication task increased.

  15. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of blind individuals were statistically significantly shorter than those of blind individuals with residual vision and controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.

  16. Individual differences in attention strategies during detection, fine discrimination, and coarse discrimination

    Science.gov (United States)

    Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh

    2013-01-01

    Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013
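    As a much-simplified illustration of the model-family idea (not the hierarchical Bayesian procedure used in the study), one can compare how well an "on-channel" Gaussian, an inverted "off-channel" Gaussian, and a uniform curve fit an individual's SSVEP amplitudes across the orientation offsets listed above. The amplitude values, starting parameters, and bounds below are invented for the example.

```python
# Simplified sketch: compare "on-channel", "off-channel", and uniform attention
# profiles by least-squares fit quality (the study itself used hierarchical
# Bayesian modelling). All data values here are toy numbers.
import numpy as np
from scipy.optimize import curve_fit

offsets = np.array([0, 10, 20, 30, 40, 90], dtype=float)   # degrees from target
amps = np.array([1.8, 1.6, 1.2, 1.0, 0.9, 0.8])            # toy SSVEP amplitudes

candidates = [
    ("on-channel",  lambda x, a, w, b:  a * np.exp(-(x / w) ** 2) + b,
     [1.0, 30.0, 1.0], ([0.0, 1.0, -10.0], [10.0, 180.0, 10.0])),
    ("off-channel", lambda x, a, w, b: -a * np.exp(-(x / w) ** 2) + b,
     [1.0, 30.0, 1.0], ([0.0, 1.0, -10.0], [10.0, 180.0, 10.0])),
    ("uniform",     lambda x, b: np.full_like(x, b), [1.0], ([-10.0], [10.0])),
]
for name, f, p0, bounds in candidates:
    params, _ = curve_fit(f, offsets, amps, p0=p0, bounds=bounds)
    sse = float(np.sum((f(offsets, *params) - amps) ** 2))
    print(f"{name}: SSE = {sse:.3f}")   # lowest SSE indicates the best-matching profile
```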

  17. Different levels of food restriction reveal genotype-specific differences in learning a visual discrimination task.

    Directory of Open Access Journals (Sweden)

    Kalina Makowiecki

    Full Text Available In behavioural experiments, motivation to learn can be achieved using food rewards as positive reinforcement in food-restricted animals. Previous studies reduce animal weights to 80-90% of free-feeding body weight as the criterion for food restriction. However, effects of different degrees of food restriction on task performance have not been assessed. We compared learning task performance in mice food-restricted to 80 or 90% body weight (BW). We used adult wildtype (WT; C57Bl/6j) and knockout (ephrin-A2⁻/⁻) mice, previously shown to have a reverse learning deficit. Mice were trained in a two-choice visual discrimination task with food reward as positive reinforcement. When mice reached criterion for one visual stimulus (80% correct in three consecutive 10-trial sets) they began the reverse learning phase, where the rewarded stimulus was switched to the previously incorrect stimulus. For the initial learning and reverse phase of the task, mice at 90%BW took almost twice as many trials to reach criterion as mice at 80%BW. Furthermore, WT 80 and 90%BW groups significantly differed in percentage correct responses and learning strategy in the reverse learning phase, whereas no differences between weight restriction groups were observed in ephrin-A2⁻/⁻ mice. Most importantly, genotype-specific differences in reverse learning strategy were only detected in the 80%BW groups. Our results indicate that increased food restriction not only results in better performance and a shorter training period, but may also be necessary for revealing behavioural differences between experimental groups. This has important ethical and animal welfare implications when deciding the extent of diet restriction in behavioural studies.

  18. Reach on sound: a key to object permanence in visually impaired children.

    Science.gov (United States)

    Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto

    2011-04-01

    The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and gives information about his/her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem"--i.e., they acquired the ability to attribute an external object with identity and substance even when it manifested its presence through sound only--and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities presented poor performances on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem". Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Synchronous Sounds Enhance Visual Sensitivity without Reducing Target Uncertainty

    Directory of Open Access Journals (Sweden)

    Yi-Chuan Chen

    2011-10-01

    Full Text Available We examined the crossmodal effect of the presentation of a simultaneous sound on visual detection and discrimination sensitivity using the equivalent noise paradigm (Dosher & Lu, 1998). In each trial, a tilted Gabor patch was presented in either the first or second of two intervals consisting of dynamic 2D white noise with one of seven possible contrast levels. The results revealed that participants' sensitivity in both visual detection and discrimination was enhanced by the presentation of a simultaneous sound, though only close to the noise level at which participants' target contrast thresholds started to increase with the increasing noise contrast. A further analysis of the psychometric function at this noise level revealed that the increase in sensitivity could not be explained by the reduction of participants' uncertainty regarding the onset time of the visual target. We suggest that this crossmodal facilitatory effect may be accounted for by perceptual enhancement elicited by a simultaneously-presented sound, and that the crossmodal facilitation was easier to observe when the visual system encountered a level of noise that happened to be close to the level of internal noise embedded within the system.

  20. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
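    The processing chain described above (time-delay embedding, recurrence-plot features, then HMM scoring) can be illustrated with a short sketch. The snippet below is not the authors' code; the embedding dimension, delay, and recurrence threshold are illustrative choices, and in a full pipeline the resulting feature sequences would be fed to hidden Markov models (e.g., hmmlearn's GaussianHMM, whose decode() method performs Viterbi scoring).

```python
# Minimal sketch (not the authors' implementation) of the first two stages:
# time-delay embedding of a tracheal-sound segment and one recurrence-plot
# feature (the recurrence rate). Parameters below are illustrative only.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Reconstruct the state-space trajectory by the Takens method of delays."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def recurrence_rate(traj, eps=None):
    """Fraction of trajectory point pairs closer than eps (a recurrence-plot feature)."""
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * d.max()  # illustrative threshold
    return float((d < eps).mean())

# toy usage on a synthetic sound segment; a real pipeline would slide a window
# over the recording and model the resulting feature sequence with HMMs
rng = np.random.default_rng(0)
segment = np.sin(2 * np.pi * 60 * np.linspace(0, 0.05, 500)) + 0.1 * rng.standard_normal(500)
print(recurrence_rate(delay_embed(segment)))
```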

  1. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  2. A possible structural correlate of learning performance on a colour discrimination task in the brain of the bumblebee

    Science.gov (United States)

    Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R.

    2017-01-01

    Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. PMID:28978727

  3. A possible structural correlate of learning performance on a colour discrimination task in the brain of the bumblebee.

    Science.gov (United States)

    Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R; Chittka, Lars; Perry, Clint J

    2017-10-11

    Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee ( Bombus terrestris ) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. © 2017 The Authors.

  4. Mind the gap: temporal discrimination and dystonia.

    Science.gov (United States)

    Sadnicka, A; Daum, C; Cordivari, C; Bhatia, K P; Rothwell, J C; Manohar, S; Edwards, M J

    2017-06-01

    One of the most widely studied perceptual measures of sensory dysfunction in dystonia is the temporal discrimination threshold (TDT) (the shortest interval at which subjects can perceive that there are two stimuli rather than one). However the elevated thresholds described may be due to a number of potential mechanisms as current paradigms test not only temporal discrimination but also extraneous sensory and decision-making parameters. In this study two paradigms designed to better quantify temporal processing are presented and a decision-making model is used to assess the influence of decision strategy. 22 patients with cervical dystonia and 22 age-matched controls completed two tasks (i) temporal resolution (a randomized, automated version of existing TDT paradigms) and (ii) interval discrimination (rating the length of two consecutive intervals). In the temporal resolution task patients had delayed (P = 0.021) and more variable (P = 0.013) response times but equivalent discrimination thresholds. Modelling these effects suggested this was due to an increased perceptual decision boundary in dystonia with patients requiring greater evidence before committing to decisions (P = 0.020). Patient performance on the interval discrimination task was normal. Our work suggests that previously observed abnormalities in TDT may not be due to a selective sensory deficit of temporal processing as decision-making itself is abnormal in cervical dystonia. © 2017 EAN.

  5. Data preprocessing techniques for classification without discrimination

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.

    2012-01-01

    Recently, the following Discrimination-Aware Classification Problem was introduced: Suppose we are given training data that exhibit unlawful discrimination; e.g., toward sensitive attributes such as gender or ethnicity. The task is to learn a classifier that optimizes accuracy, but does not have this discrimination in its predictions on test data.
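    One concrete preprocessing idea from this line of work is reweighing: weight each training example so that the sensitive attribute and the class label become statistically independent in the weighted data, then train any standard classifier with those weights. The sketch below is illustrative only (the function name and the toy data are invented), not the paper's implementation.

```python
# A minimal sketch of the "reweighing" preprocessing idea: weight each example by
# w = P(s) * P(y) / P(s, y), estimated from the training data, so that the sensitive
# attribute s and the label y are independent in the weighted sample.
import numpy as np

def reweigh(s, y):
    """Return per-example weights that decouple sensitive attribute s from label y."""
    s, y = np.asarray(s), np.asarray(y)
    w = np.empty(len(y), dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            mask = (s == sv) & (y == yv)
            expected = (s == sv).mean() * (y == yv).mean()
            observed = mask.mean()
            w[mask] = expected / observed if observed > 0 else 0.0
    return w

# toy example: sensitive attribute 0/1, hiring label 0/1
s = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(reweigh(s, y))  # weights can be passed to any classifier that accepts sample_weight
```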

  6. Sleep deprivation effects on object discrimination task in zebrafish (Danio rerio).

    Science.gov (United States)

    Pinheiro-da-Silva, Jaquelinne; Silva, Priscila Fernandes; Nogueira, Marcelo Borges; Luchiari, Ana Carolina

    2017-03-01

    The zebrafish is an ideal vertebrate model for neurobehavioral studies with translational relevance to humans. Many aspects of sleep have been studied, but we still do not understand how and why sleep deprivation alters behavioral and physiological processes. A number of hypotheses suggest its role in memory consolidation. In this respect, the aim of this study was to analyze the effects of sleep deprivation on memory in zebrafish (Danio rerio), using an object discrimination paradigm. Four treatments were tested: control, partial sleep deprivation, total sleep deprivation by light pulses, and total sleep deprivation by extended light. The control group explored the new object more than the known object, indicating clear discrimination. The partially sleep-deprived group explored the new object more than the other object in the discrimination phase, suggesting a certain degree of discriminative performance. By contrast, both total sleep deprivation groups equally explored all objects, regardless of their novelty. It seems that only one night of sleep deprivation is enough to affect discriminative response in zebrafish, indicating its negative impact on cognitive processes. We suggest that this study could be a useful screening tool for cognitive dysfunction and a better understanding of the effect of sleep-wake cycles on cognition.

  7. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    Full Text Available OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speaker's axis in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  8. Experimental Evidence of Discrimination in the Labour Market

    DEFF Research Database (Denmark)

    Dahl, Malte Rokkjær; Krog, Niels

    This paper presents evidence of ethnic discrimination in the recruitment process from a field experiment conducted in the Danish labour market. In a correspondence experiment, fictitious job applications were randomly assigned either a Danish or Middle Eastern-sounding name and sent to real job openings. In addition to providing evidence on the extent of ethnic discrimination in the Danish labour market, the study offers two novel contributions to the literature more generally. First, because a majority of European correspondence experiments have relied solely on applications with male aliases, there is limited evidence on the way gender and ethnicity interact across different occupations. By randomly assigning gender and ethnicity, this study suggests that ethnic discrimination is strongly moderated by gender: minority males are consistently subject to a much larger degree of discrimination than...

  9. Evaluative conditioning induces changes in sound valence

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2012-04-01

    Full Text Available Evaluative Conditioning (EC) has hardly been tested in the auditory domain, but it is a potentially valuable research tool. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruence effects on an affective priming task (APT) for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US or whether extinction occurs. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results warrant the use of EC to study processing of short environmental sounds with acquired valence, even if this requires repeated stimulus presentations. This paves the way for studying processing of affective environmental sounds while effectively controlling low-level stimulus properties.

  10. Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized

    Science.gov (United States)

    Golubock, Jason L.; Janata, Petr

    2013-01-01

    Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…

  11. Russian blues reveal effects of language on color discrimination.

    Science.gov (United States)

    Winawer, Jonathan; Witthoft, Nathan; Frank, Michael C; Wu, Lisa; Wade, Alex R; Boroditsky, Lera

    2007-05-08

    English and Russian color terms divide the color spectrum differently. Unlike English, Russian makes an obligatory distinction between lighter blues ("goluboy") and darker blues ("siniy"). We investigated whether this linguistic difference leads to differences in color discrimination. We tested English and Russian speakers in a speeded color discrimination task using blue stimuli that spanned the siniy/goluboy border. We found that Russian speakers were faster to discriminate two colors when they fell into different linguistic categories in Russian (one siniy and the other goluboy) than when they were from the same linguistic category (both siniy or both goluboy). Moreover, this category advantage was eliminated by a verbal, but not a spatial, dual task. These effects were stronger for difficult discriminations (i.e., when the colors were perceptually close) than for easy discriminations (i.e., when the colors were further apart). English speakers tested on the identical stimuli did not show a category advantage in any of the conditions. These results demonstrate that (i) categories in language affect performance on simple perceptual color tasks and (ii) the effect of language is online (and can be disrupted by verbal interference).

  12. Fos Protein Expression in Olfactory-Related Brain Areas after Learning and after Reactivation of a Slowly Acquired Olfactory Discrimination Task in the Rat

    Science.gov (United States)

    Roullet, Florence; Lienard, Fabienne; Datiche, Frederique; Cattarelli, Martine

    2005-01-01

    Fos protein immunodetection was used to investigate the neuronal activation elicited in some olfactory-related areas after either learning of an olfactory discrimination task or its reactivation 10 d later. Trained rats (T) progressively acquired the association between one odor of a pair and water-reward in a four-arm maze. Two groups of…

  13. Discriminative Shape Alignment

    DEFF Research Database (Denmark)

    Loog, M.; de Bruijne, M.

    2009-01-01

    , not taking into account that eventually the shapes are to be assigned to two or more different classes. This work introduces a discriminative variation to well-known Procrustes alignment and demonstrates its benefit over this classical method in shape classification tasks. The focus is on two...
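    For reference, classical (non-discriminative) Procrustes alignment of two point-set shapes, the baseline this work modifies, can be run as in the sketch below; the triangle coordinates are toy data, and the discriminative variant introduced in the paper is not reproduced here.

```python
# Classical Procrustes alignment of two 2-D shapes (the baseline method the
# abstract refers to); scipy returns the standardized, aligned shapes and a
# disparity value (sum of squared differences). Toy coordinates only.
import numpy as np
from scipy.spatial import procrustes

shape_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])      # a triangle
shape_b = np.array([[0.1, 0.2], [2.1, 0.3], [1.2, 2.2]])      # scaled/shifted version

aligned_a, aligned_b, disparity = procrustes(shape_a, shape_b)
print(round(float(disparity), 4))
```

    The discriminative variation described above additionally takes the eventual class labels of the shapes into account when choosing the alignment, rather than aligning in a purely unsupervised way.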

  14. Speed and accuracy of visual image discrimination by rats

    Directory of Open Access Journals (Sweden)

    Pamela eReinagel

    2013-12-01

    Full Text Available The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, 2-alternative forced-choice (2AFC) reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from two to six seconds. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to the

  15. Context-Dependent Modulation of Functional Connectivity: Secondary Somatosensory Cortex to Prefrontal Cortex Connections in Two-Stimulus-Interval Discrimination Tasks

    OpenAIRE

    Chow, Stephanie S.; Romo, Ranulfo; Brody, Carlos D.

    2009-01-01

    In a complex world, a sensory cue may prompt different actions in different contexts. A laboratory example of context-dependent sensory processing is the two-stimulus-interval discrimination task. In each trial, a first stimulus (f1) must be stored in short-term memory and later compared with a second stimulus (f2), for the animal to come to a binary decision. Prefrontal cortex (PFC) neurons need to interpret the f1 information in one way (perhaps with a positive weight) and the f2 informatio...

  16. The Influence of Eye Closure on Somatosensory Discrimination: A Trade-off Between Simple Perception and Discrimination.

    Science.gov (United States)

    Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W

    2017-06-01

    We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task requiring the distinguishing of stimulus location changes at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher during eyes open. A similar reduction was found for the auditory P50m during a task requiring the distinguishing of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Effects of early postnatal X-irradiation of the hippocampus on discrimination learning in adult rats

    International Nuclear Information System (INIS)

    Gazzara, R.A.

    1980-01-01

    Rats with x-irradiation-produced degranulation of the hippocampal dentate gyrus were trained in the acquisition and reversal of simultaneous visual and tactile discriminations in a T-maze. These experiments employed the same treatment, apparatus, and procedure, but varied in task difficulty. In the brightness and roughness discriminations, the irradiated rats were not handicapped in acquiring or reversing discriminations of low or low-moderate task-difficulty. However, these rats were handicapped in acquiring and reversing discriminations of moderate and high task-difficulty. In a Black/White discrimination, in which the stimuli were restricted to the goal-arm walls, the irradiated rats were handicapped in the acquisition (low task-difficulty) and reversal (moderate task-difficulty) phases of the task. These results suggest that the irradiated rats were not handicapped when the noticeability of the stimuli was high, irrespective of modality used, but were handicapped when the noticeability of the stimuli was low. In addition, these results are consistent with the hypothesis that hippocampal-damaged rats are inattentive due to hyperactivity

  18. Response competition and response inhibition during different choice-discrimination tasks: evidence from ERP measured inside MRI scanner.

    Science.gov (United States)

    Gonzalez-Rosa, Javier J; Inuggi, Alberto; Blasi, Valeria; Cursi, Marco; Annovazzi, Pietro; Comi, Giancarlo; Falini, Andrea; Leocani, Letizia

    2013-07-01

    We investigated the neural correlates underlying response inhibition and conflict detection processes using ERPs and source localization analyses simultaneously acquired during fMRI scanning. ERPs were elicited by a simple reaction time task (SRT), a Go/NoGo task, and a Stroop-like task (CST). The cognitive conflict was thus manipulated in order to probe the degree to which information processing is shared across cognitive systems. We proposed to dissociate inhibition and interference conflict effects on brain activity by using identical Stroop-like congruent/incongruent stimuli in all three task contexts and while varying the response required. NoGo-incongruent trials showed a larger N2 and enhanced activations of rostral anterior cingulate cortex (ACC) and pre-supplementary motor area, whereas Go-congruent trials showed a larger P3 and increased parietal activations. Congruent and incongruent conditions of the CST task also elicited similar N2, P3 and late negativity (LN) ERPs, though CST-incongruent trials revealed a larger LN and enhanced prefrontal and ACC activations. Considering the stimulus probability and experimental manipulation of our study, current findings suggest that the NoGo N2 and frontal NoGo P3 appear to be more associated with response inhibition than with specific conflict monitoring, whereas the occipito-parietal P3 of the Go and CST conditions may be more linked to a planned response competition between the prepared and required response. The LN, however, appears to be related to higher-level conflict monitoring associated with response choice-discrimination but not when the presence of cognitive conflict is associated with response inhibition. Copyright © 2013. Published by Elsevier B.V.

  19. Is 1/f sound more effective than simple resting in reducing stress response?

    Science.gov (United States)

    Oh, Eun-Joo; Cho, Il-Young; Park, Soon-Kwon

    2014-01-01

    It has been previously demonstrated that listening to 1/f sound effectively reduces stress. However, these findings have been inconsistent and further study on the relationship between 1/f sound and the stress response is consequently necessary. The present study examined whether sound with 1/f properties (1/f sound) affects stress-induced electroencephalogram (EEG) changes. Twenty-six subjects who voluntarily participated in the study were randomly assigned to the experimental or control group. Data from four participants were excluded because of EEG artifacts. A mental arithmetic task was used as a stressor. Participants in the experiment group listened to 1/f sound for 5 minutes and 33 seconds, while participants in the control group sat quietly for the same duration. EEG recordings were obtained at various points throughout the experiment. After the experiment, participants completed a questionnaire on the affective impact of the 1/f sound. The results indicated that the mental arithmetic task effectively induced a stress response measurable by EEG. Relative theta power at all electrode sites was significantly lower than baseline in both the control and experimental group. Relative alpha power was significantly lower, and relative beta power was significantly higher in the T3 and T4 areas. Secondly, 1/f sound and simple resting affected task-associated EEG changes in a similar manner. Finally, participants reported in the questionnaire that they experienced a positive feeling in response to the 1/f sound. Our results suggest that a commercialized 1/f sound product is not more effective than simple resting in alleviating the physiological stress response.
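    The EEG measures reported above (relative theta, alpha, and beta power) are typically derived from a power spectral density estimate. The sketch below shows one common way to compute them for a single channel; the sampling rate, band limits, Welch window, and toy signal are assumptions, not the authors' actual pipeline.

```python
# Illustrative computation of relative band power for one EEG channel using
# Welch's PSD estimate; all parameters and the toy signal are assumptions.
import numpy as np
from scipy.signal import welch

def relative_band_power(x, fs, band, total=(1.0, 40.0)):
    """Band power divided by total power, both summed over the Welch PSD bins."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    in_band = (f >= band[0]) & (f < band[1])
    in_total = (f >= total[0]) & (f < total[1])
    return float(pxx[in_band].sum() / pxx[in_total].sum())

fs = 250.0                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
eeg = rng.standard_normal(int(60 * fs))      # toy 1-minute recording
for name, band in [("theta", (4, 8)), ("alpha", (8, 13)), ("beta", (13, 30))]:
    print(name, round(relative_band_power(eeg, fs, band), 3))
```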

  20. Music and Sound in Time Processing of Children with ADHD.

    Science.gov (United States)

    Carrer, Luiz Rogério Jorgensen

    2015-01-01

    ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal, and rhythmic dimensions can be of great help for studying the aspects of time processing in ADHD. In this article, we studied time processing with simple sounds and music in children with ADHD, with the hypothesis that children with ADHD perform differently from children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6-14 years, recruited at NANI-UNIFESP/SP, subdivided into three groups with 12 children in each. Data were collected through a musical keyboard connected to a computer running Logic Audio Software 9.0, which recorded the participants' performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds, and time estimation with music. (1) Performance of the ADHD groups in temporal estimation of simple sounds at short time intervals (30 ms) was statistically lower than that of the control group (p < 0.05); (2) in the task comparing musical excerpts of the same duration (7 s), the ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the perceived duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks suggests that music may, in some way, positively modulate the symptoms of inattention in ADHD.

  1. Spanish is better than English for discriminating Portuguese vowels: acoustic similarity versus vowel inventory size

    Science.gov (United States)

    Elvin, Jaydene; Escudero, Paola; Vasiliev, Polina

    2014-01-01

    Second language (L2) learners often struggle to distinguish sound contrasts that are not present in their native language (L1). Models of non-native and L2 sound perception claim that perceptual similarity between L1 and L2 sound contrasts correctly predicts discrimination by naïve listeners and L2 learners. The present study tested the explanatory power of vowel inventory size versus acoustic properties as predictors of discrimination accuracy when naïve Australian English (AusE) and Iberian Spanish (IS) listeners are presented with six Brazilian Portuguese (BP) vowel contrasts. Our results show that IS listeners outperformed AusE listeners, confirming that cross-linguistic acoustic properties, rather than cross-linguistic vowel inventory sizes, successfully predict non-native discrimination difficulty. Furthermore, acoustic distance between BP vowels and closest L1 vowels successfully predicted differential levels of difficulty among the six BP contrasts, with BP /e-i/ and /o-u/ being the most difficult for both listener groups. We discuss the importance of our findings for the adequacy of models of L2 speech perception. PMID:25400599

  2. Discrimination and preference of speech and non-speech sounds in autism patients%孤独症患者言语及非言语声音辨识和偏好特征

    Institute of Scientific and Technical Information of China (English)

    王崇颖; 江鸣山; 徐旸; 马斐然; 石锋

    2011-01-01

    Objective: To explore the discrimination and preference of speech and non-speech sounds in autism patients. Methods: Ten people (5 children and 5 adults) diagnosed with autism according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) were selected from the database of the Nankai University Center for Behavioural Science. Together with 10 healthy controls of matched age, the people with autism were tested in three experiments on speech sounds, pure tones and intonation, which were recorded and modified with Praat, a voice analysis software. Their discrimination and preference responses were collected orally. Exact probability values were calculated. Results: There were no significant differences in the discrimination of speech sounds, pure tones and intonation between autism patients and controls (P > 0.05), while controls preferred speech and non-speech sounds with higher pitch than the autism group (e.g., -100 Hz/+50 Hz: 2 vs. 7, P < 0.05; 50 Hz/250 Hz: 4 vs. 10, P < 0.05) and the autism group preferred non-speech sounds with lower pitch (100 Hz/250 Hz: 6 vs. 3, P < 0.05). No significant difference in the preference of intonation between autism patients and controls (P > 0.05) was found. Conclusion: It shows that people with autism have impaired auditory processing of speech and non-speech sounds.

  3. Annoyance caused by the sounds of a magnetic levitation train

    NARCIS (Netherlands)

    Vos, J.

    2004-01-01

    In a laboratory study, the annoyance caused by the passby sounds from a magnetic levitation (maglev) train was investigated. The listeners were presented with various sound fragments. The task of the listeners was to respond after each presentation to the question: "How annoying would you find the

  4. Left and right reaction time differences to the sound intensity in normal and AD/HD children.

    Science.gov (United States)

    Baghdadi, Golnaz; Towhidkhah, Farzad; Rostami, Reza

    2017-06-01

    The right hemisphere, which is attributed to sound intensity discrimination, shows abnormality in people with attention deficit/hyperactivity disorder (AD/HD). However, it has not been studied whether this right-hemisphere deficit influences the intensity sensation of AD/HD subjects. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group (p …). Left and right pathways of the auditory system had the same pattern of response in AD/HD subjects (p > 0.05). However, in the control group the left pathway was more sensitive to the sound intensity level than the right one (p = 0.0156). It is possible that the deficit of the right hemisphere has influenced the auditory sensitivity of AD/HD children. Possible deficits of other auditory system components, such as the middle ear, inner ear, or the brainstem nuclei involved, may also have contributed to the observed results. The development of new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested for estimating the risk of AD/HD. Designing new techniques to correct auditory feedback in behavioral treatment sessions is also proposed. Copyright © 2017. Published by Elsevier B.V.
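    The regression analysis mentioned above can be pictured with a minimal sketch: fit reaction time against sound intensity separately for left- and right-ear presentations and compare the slopes, a steeper negative slope indicating greater sensitivity to intensity. The numbers below are invented for illustration and are not the study's data.

```python
# A minimal sketch (assumed analysis, not the authors' code): comparing how reaction
# time decreases with sound intensity for left- vs right-ear presentations by
# fitting a line per ear and comparing the slopes.
import numpy as np

intensity = np.array([50, 60, 70, 80, 90], dtype=float)        # dB SPL (illustrative)
rt_left  = np.array([420, 400, 385, 370, 360], dtype=float)    # ms, toy means
rt_right = np.array([430, 418, 410, 402, 398], dtype=float)

slope_left,  _ = np.polyfit(intensity, rt_left, 1)
slope_right, _ = np.polyfit(intensity, rt_right, 1)
print(f"left slope {slope_left:.2f} ms/dB, right slope {slope_right:.2f} ms/dB")
# a steeper (more negative) slope indicates greater sensitivity to intensity changes
```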

  5. Quantum-state comparison and discrimination

    Science.gov (United States)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2018-05-01

    We investigate the performance of discrimination strategy in the comparison task of known quantum states. In the discrimination strategy, one infers whether or not two quantum systems are in the same state on the basis of the outcomes of separate discrimination measurements on each system. In some cases with more than two possible states, the optimal strategy in minimum-error comparison is that one should infer the two systems are in different states without any measurement, implying that the discrimination strategy performs worse than the trivial "no-measurement" strategy. We present a sufficient condition for this phenomenon to happen. For two pure states with equal prior probabilities, we determine the optimal comparison success probability with an error margin, which interpolates the minimum-error and unambiguous comparison. We find that the discrimination strategy is not optimal except for the minimum-error case.
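    As background for the discrimination strategy discussed above (a standard textbook result, not derived in this abstract): the minimum-error probability of correctly discriminating two known pure states occurring with prior probabilities p1 and p2 is given by the Helstrom bound,

```latex
P_{\mathrm{succ}} \;=\; \frac{1}{2}\left(1 + \sqrt{1 - 4\,p_1 p_2\,\lvert\langle\psi_1|\psi_2\rangle\rvert^{2}}\right),
```

    which for equal priors reduces to (1/2)(1 + sqrt(1 - |⟨ψ1|ψ2⟩|²)). In the discrimination strategy for comparison, each system is measured with such an optimal discriminator separately and the two outcomes are then compared, which is exactly the strategy whose limits the abstract analyses.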

  6. Developmental changes in brain activation involved in the production of novel speech sounds in children.

    Science.gov (United States)

    Hashizume, Hiroshi; Taki, Yasuyuki; Sassa, Yuko; Thyreau, Benjamin; Asano, Michiko; Asano, Kohei; Takeuchi, Hikaru; Nouchi, Rui; Kotozaki, Yuka; Jeong, Hyeonjeong; Sugiura, Motoaki; Kawashima, Ryuta

    2014-08-01

    Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more significantly during the repetition task relative to the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech. Copyright © 2014 Wiley Periodicals, Inc.

  7. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  8. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    Science.gov (United States)

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and Eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
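    The PCA logic referred to above can be illustrated with a toy computation (not the authors' analysis): with two acoustic attributes correlated at r = .97, almost all stimulus variance falls on the shared (correlated) dimension and very little on the orthogonal one, which is the statistical situation the listeners' perceptual weighting appeared to track.

```python
# A toy illustration (not the authors' analysis) of the PCA logic described above:
# eigenvalues of the attribute covariance quantify variance along the shared
# (correlated) dimension versus the orthogonal ("unshared") dimension.
import numpy as np

rng = np.random.default_rng(2)
n, r = 1000, 0.97                      # number of stimuli, target correlation
cov = np.array([[1.0, r], [r, 1.0]])   # attack/decay vs. spectral-shape covariance
attrs = rng.multivariate_normal([0, 0], cov, size=n)

evals, evecs = np.linalg.eigh(np.cov(attrs.T))
print("eigenvalues (unshared, shared):", np.round(evals, 3))
# with r = .97 nearly all variance lies on the shared (correlated) dimension,
# mirroring why discrimination orthogonal to the correlation is initially poor
```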

  9. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    To evaluate the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity ... on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  10. A hybrid generative-discriminative approach to speaker diarization

    NARCIS (Netherlands)

    Noulas, A.K.; van Kasteren, T.; Kröse, B.J.A.

    2008-01-01

    In this paper we present a sound probabilistic approach to speaker diarization. We use a hybrid framework where a distribution over the number of speakers at each point of a multimodal stream is estimated with a discriminative model. The output of this process is used as input in a generative model

  11. Anti-discrimination Analysis Using Privacy Attack Strategies

    KAUST Repository

    Ruggieri, Salvatore

    2014-09-15

    Social discrimination discovery from data is an important task for identifying illegal and unethical discriminatory patterns towards protected-by-law groups, e.g., ethnic minorities. We deploy privacy attack strategies as tools for discrimination discovery under hard assumptions which have rarely been tackled in the literature: indirect discrimination discovery, privacy-aware discrimination discovery, and discrimination data recovery. The intuition comes from the intriguing parallel between the role of the anti-discrimination authority in the three scenarios above and the role of an attacker in private data publishing. We design strategies and algorithms inspired by, or based on, Fréchet bounds attacks, attribute inference attacks, and minimality attacks for the purpose of unveiling hidden discriminatory practices. Experimental results show that they can be effective tools in the hands of anti-discrimination authorities.
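
    As a rough illustration of the kind of inference such attack strategies rest on, the sketch below computes classical Fréchet bounds on an unobserved joint count from its marginals. The scenario, variable names, and numbers are hypothetical and are not taken from the paper.

    def frechet_bounds(n_group, n_denied, n_total):
        """Bounds on how many denied applications can fall within the protected group."""
        lower = max(0, n_group + n_denied - n_total)
        upper = min(n_group, n_denied)
        return lower, upper

    # Hypothetical marginals: 300 minority applicants, 450 denials, 1000 applicants in total.
    lo, hi = frechet_bounds(n_group=300, n_denied=450, n_total=1000)
    print(f"denied applications within the protected group lie in [{lo}, {hi}]")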

  12. Universal programmable devices for unambiguous discrimination

    International Nuclear Information System (INIS)

    Zhang Chi; Ying Mingsheng; Qiao, Bo

    2006-01-01

    We discuss the problem of designing unambiguous programmable discriminators for any n unknown quantum states in an m-dimensional Hilbert space. The discriminator is a fixed measurement that has two kinds of input registers: the program registers and the data register. The quantum state in the data register is what users want to identify, which is confirmed to be among the n states in program registers. The task of the discriminator is to tell the users which state stored in the program registers is equivalent to that in the data register. First, we give a necessary and sufficient condition for judging an unambiguous programmable discriminator. Then, if m=n, we present an optimal unambiguous programmable discriminator for them, in the sense of maximizing the worst-case probability of success. Finally, we propose a universal unambiguous programmable discriminator for arbitrary n quantum states

  13. Using postmeasurement information in state discrimination

    International Nuclear Information System (INIS)

    Gopal, Deepthi; Wehner, Stephanie

    2010-01-01

    We consider a special form of state discrimination in which after the measurement we are given additional information that may help us identify the state. This task plays a central role in the analysis of quantum cryptographic protocols in the noisy-storage model, where the identity of the state corresponds to a certain bit string, and the additional information is typically a choice of encoding that is initially unknown to the cheating party. We first provide simple optimality conditions for measurements for any such problem and show upper and lower bounds on the success probability. For a certain class of problems, we furthermore provide tight bounds on how useful postmeasurement information can be. In particular, we show that for this class finding the optimal measurement for the task of state discrimination with postmeasurement information does in fact reduce to solving a different problem of state discrimination without such information. However, we show that for the corresponding classical state discrimination problems with postmeasurement information such a reduction is impossible, by relating the success probability to the violation of Bell inequalities. This suggests the usefulness of postmeasurement information as another feature that distinguishes the classical from a quantum world.

  14. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  15. Auditory phase and frequency discrimination: a comparison of nine procedures.

    Science.gov (United States)

    Creelman, C D; Macmillan, N A

    1979-02-01

    Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
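
    For readers unfamiliar with how such psychometric functions are summarized, the sketch below fits a cumulative-Gaussian function to hypothetical proportion-correct data from a two-interval forced-choice frequency-discrimination run. The stimulus levels and scores are invented, and the chance level is fixed at 0.5 as appropriate for that design.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def psychometric(delta_f, threshold, slope):
        # 2IFC: performance rises from 0.5 (guessing) toward 1.0
        return 0.5 + 0.5 * norm.cdf(delta_f, loc=threshold, scale=slope)

    delta_f = np.array([0.2, 0.5, 1.0, 2.0, 4.0])          # Hz above the 200-Hz standard
    p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.99])   # hypothetical scores

    params, _ = curve_fit(psychometric, delta_f, p_correct,
                          p0=[1.0, 1.0], bounds=([0.0, 1e-6], [10.0, 10.0]))
    threshold, slope = params
    print(f"frequency difference limen ~ {threshold:.2f} Hz (spread {slope:.2f} Hz)")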

  16. Food words distract the hungry: Evidence of involuntary semantic processing of task-irrelevant but biologically-relevant unexpected auditory words.

    Science.gov (United States)

    Parmentier, Fabrice B R; Pacheco-Unguetti, Antonia P; Valero, Sara

    2018-01-01

    Rare changes in a stream of otherwise repeated task-irrelevant sounds break through selective attention and disrupt performance in an unrelated visual task by triggering shifts of attention to and from the deviant sound (deviance distraction). Evidence indicates that the involuntary orientation of attention to unexpected sounds is followed by their semantic processing. However, past demonstrations relied on tasks in which the meaning of the deviant sounds overlapped with features of the primary task. Here we examine whether such processing is observed when no such overlap is present but sounds carry some relevance to the participants' biological need to eat when hungry. We report the results of an experiment in which hungry and satiated participants partook in a cross-modal oddball task in which they categorized visual digits (odd/even) while ignoring task-irrelevant sounds. On most trials the irrelevant sound was a sinewave tone (standard sound). On the remaining trials, deviant sounds consisted of spoken words related to food (food deviants) or control words (control deviants). Questionnaire data confirmed state (but not trait) differences between the two groups with respect to food craving, as well as a greater desire to eat the food corresponding to the food-related words in the hungry relative to the satiated participants. The results of the oddball task revealed that food deviants produced greater distraction (longer response times) than control deviants in hungry participants while the reverse effect was observed in satiated participants. This effect was observed in the first block of trials but disappeared thereafter, reflecting semantic saturation. Our results suggest that (1) the semantic content of deviant sounds is involuntarily processed even when sharing no feature with the primary task; and that (2) distraction by deviant sounds can be modulated by the participants' biological needs.

  17. Discrimination Learning in Children

    Science.gov (United States)

    Ochocki, Thomas E.; And Others

    1975-01-01

    Examined the learning performance of 192 fourth-, fifth-, and sixth-grade children on either a two or four choice simultaneous color discrimination task. Compared the use of verbal reinforcement and/or punishment, under conditions of either complete or incomplete instructions. (Author/SDH)

  18. Ethnical discrimination in Europe: Field evidence from the finance industry.

    Science.gov (United States)

    Stefan, Matthias; Holzmeister, Felix; Müllauer, Alexander; Kirchler, Michael

    2018-01-01

    The integration of ethnical minorities has been a hotly discussed topic in the political, societal, and economic debate. Persistent discrimination of ethnical minorities can hinder successful integration. Given that unequal access to investment and financing opportunities can cause social and economic disparities due to inferior economic prospects, we conducted a field experiment on ethnical discrimination in the finance sector with 1,218 banks in seven European countries. We contacted banks via e-mail, either with domestic or Arabic sounding names, asking for contact details only. We find pronounced discrimination in terms of a substantially lower response rate to e-mails from Arabic senders. Remarkably, the observed discrimination effect is robust for loan- and investment-related requests, across rural and urban locations of banks, and across countries.

  19. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians

    Directory of Open Access Journals (Sweden)

    Kumar, Prawin

    2015-12-01

    Full Text Available Introduction Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method The study participants included 15 vocal musicians with a minimum professional experience of 5 years of music exposure, within the age range of 20 to 30 years old, as the experimental group, while 15 age-matched non-musicians served as the control group. We used duration discrimination using pure-tones, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using Differential Limen of Frequency (DLF. All tasks were done using MATLab software installed in a personal computer at 40dBSL with maximum likelihood procedure. The collected data were analyzed using SPSS (version 17.0. Result Descriptive statistics showed better threshold for vocal musicians compared with non-musicians for all tasks. Further, independent t-test showed that vocal musicians performed significantly better compared with non-musicians on duration discrimination using pure tone, pulse train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion The present study showed enhanced temporal resolution ability and better (lower active discrimination threshold in vocal musicians in comparison to non-musicians.

  20. Auditive Discrimination of Equine Gaits by Parade Horses

    Directory of Open Access Journals (Sweden)

    Duilio Cruz-Becerra

    2009-06-01

    Full Text Available The purpose of this study was to examine parade horses’ auditory discriminationamong four types of equine gaits: paso-fino (“fine step”, trote-reunido(“two-beat trot”, trocha (“trot”, and galope-reunido (“gallop”. Two experimentallynaïve horses were trained to discriminate the sound of their owngait (paso-fino or fine step, through an experimental module that dispensedfood if the subject pressed a lever after hearing a sound reproduction of aparticular gait. Three experimental phases were developed, defined by theperiod of exposure to the sounds (20, 10, and 5 seconds, respectively. Thechoice between pairs of sounds including the horse’s own gait (fine stepand two-beat trot; fine step and gallop; and fine step and trot was reinforceddifferentially. The results indicate that the fine step horses are able todiscriminate their own gait from others, and that receptivity to their ownsounds could be included in their training regime.

  1. Cascaded Amplitude Modulations in Sound Texture Perception

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2017-01-01

    . In this study, we investigated the perception of sound textures that contain rhythmic structure, specifically second-order amplitude modulations that arise from the interaction of different modulation rates, previously described as "beating" in the envelope-frequency domain. We developed an auditory texture...... model that utilizes a cascade of modulation filterbanks that capture the structure of simple rhythmic patterns. The model was examined in a series of psychophysical listening experiments using synthetic sound textures-stimuli generated using time-averaged statistics measured from real-world textures....... In a texture identification task, our results indicated that second-order amplitude modulation sensitivity enhanced recognition. Next, we examined the contribution of the second-order modulation analysis in a preference task, where the proposed auditory texture model was preferred over a range of model...
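
    The cascade idea can be made concrete with a short sketch: take the waveform envelope, band-pass it around the first-order modulation rates, then take the envelope of that filtered envelope to expose the slower second-order ("beating") modulation. The carrier, modulation rates, and filter settings below are illustrative assumptions, not the parameters of the published texture model.

    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt

    fs = 16000
    t = np.arange(0, 4.0, 1 / fs)
    rng = np.random.default_rng(0)
    # Noise carrier whose envelope mixes 4-Hz and 5-Hz modulations, yielding a 1-Hz beat.
    x = rng.standard_normal(t.size) * (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
                                           + 0.5 * np.sin(2 * np.pi * 5 * t))

    env = np.abs(hilbert(x))[::160]                  # first-order envelope, resampled to 100 Hz
    fs_env = fs // 160
    sos = butter(2, [3.0, 6.0], btype="bandpass", fs=fs_env, output="sos")
    env_band = sosfiltfilt(sos, env)                 # isolate the 4-5 Hz modulation band
    env2 = np.abs(hilbert(env_band))                 # second-order envelope (the ~1-Hz beat)
    print("second-order modulation depth (std):", round(float(env2.std()), 3))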

  2. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction of the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modalities or from distinct ones. Some studies suggest that aging is especially sensitive to distraction within-modality while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials-the standard sound-or, on rare and unpredictable trials, a burst of white noise-the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or a reduction in cognitive control required to re-orient attention toward the target's modality.

  3. Context effects in a temporal discrimination task: further tests of the Scalar Expectancy Theory and Learning-to-Time models.

    Science.gov (United States)

    Arantes, Joana; Machado, Armando

    2008-07-01

    Pigeons were trained on two temporal bisection tasks, which alternated every two sessions. In the first task, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; in the second task, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. Then the pigeons were exposed to a series of test trials in order to contrast two timing models, Learning-to-Time (LeT) and Scalar Expectancy Theory (SET). The models made substantially different predictions particularly for the test trials in which the sample duration ranged from 1 s to 16 s and the choice keys were Green and Blue, the keys associated with the same 4-s samples: LeT predicted that preference for Green should increase with sample duration, a context effect, but SET predicted that preference for Green should not vary with sample duration. The results were consistent with LeT. The present study adds to the literature the finding that the context effect occurs even when the two basic discriminations are never combined in the same session.

  4. Comparison of RASS temperature profiles with other tropospheric soundings

    International Nuclear Information System (INIS)

    Bonino, G.; Lombardini, P.P.; Trivero, P.

    1980-01-01

    The vertical temperature profile of the lower troposphere can be measured with a radio-acoustic sounding system (RASS). A comparison of the thermal profiles measured with the RASS and with traditional methods shows a) the ability of RASS to produce vertical thermal profiles over an altitude range of 170 to 1000 m with temperature accuracy and height discrimination comparable with conventional soundings, b) the advantages of remote sensing offered by the new sounder, and c) the applicability of RASS both in assessing the evolution of thermodynamic conditions in the planetary boundary layer (PBL) and in sensing conditions conducive to high concentrations of air pollutants at ground level. (author)
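
    The physical relationship underlying such a retrieval is compact enough to sketch: the radar tracks the acoustic wavefront, the Doppler shift gives the local speed of sound, and the (virtual) air temperature follows from c = sqrt(gamma * R_d * T). The measured speeds and heights below are invented for illustration only.

    import numpy as np

    GAMMA, R_DRY = 1.4, 287.05             # adiabatic index, gas constant for dry air (J/kg/K)

    def sound_speed_to_temperature(c_mps):
        """Virtual temperature (K) from the acoustic propagation speed (m/s)."""
        return c_mps ** 2 / (GAMMA * R_DRY)

    heights_m = np.array([170, 400, 700, 1000])
    c_measured = np.array([343.0, 341.5, 340.0, 338.2])   # hypothetical Doppler-derived speeds
    for z, t_k in zip(heights_m, sound_speed_to_temperature(c_measured)):
        print(f"{z:5d} m: {t_k - 273.15:5.1f} °C")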

  5. Costs of suppressing emotional sound and countereffects of a mindfulness induction: an experimental analog of tinnitus impact.

    Directory of Open Access Journals (Sweden)

    Hugo Hesser

    Tinnitus is the experience of sounds without an appropriate external auditory source. These auditory sensations are intertwined with emotional and attentional processing. Drawing on theories of mental control, we predicted that suppressing an affectively negative sound mimicking the psychoacoustic features of tinnitus would result in decreased persistence in a mentally challenging task (mental arithmetic) that required participants to ignore the same sound, but that receiving a mindfulness exercise would reduce this effect. Normal hearing participants (N = 119) were instructed to suppress an affectively negative sound under cognitive load or were given no such instructions. Next, participants received either a mindfulness induction or an attention control task. Finally, all participants worked with mental arithmetic while exposed to the same sound. The length of time participants could persist in the second task served as the dependent variable. As hypothesized, results indicated that an auditory suppression rationale reduced time of persistence relative to no such rationale, and that a mindfulness induction counteracted this detrimental effect. The study may offer new insights into the mechanisms involved in the development of tinnitus interference. Implications are also discussed in the broader context of attention control strategies and the effects of emotional sound on task performance. The ironic processes of mental control may have an analog in the experience of sounds.

  6. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    Science.gov (United States)

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). However, only in the temporal tasks, the STRF was changed along the temporal dimension by sharpening temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.

  7. Rainforests as concert halls for birds: Are reverberations improving sound transmission of long song elements?

    DEFF Research Database (Denmark)

    Nemeth, Erwin; Dabelsteen, Torben; Pedersen, Simon Boel

    2006-01-01

    that longer sounds are less attenuated. The results indicate that higher sound pressure level is caused by superimposing reflections. It is suggested that this beneficial effect of reverberations explains interspecific birdsong differences in element length. Transmission paths with stronger reverberations......In forests reverberations have probably detrimental and beneficial effects on avian communication. They constrain signal discrimination by masking fast repetitive sounds and they improve signal detection by elongating sounds. This ambivalence of reflections for animal signals in forests is similar...... to the influence of reverberations on speech or music in indoor sound transmission. Since comparisons of sound fields of forests and concert halls have demonstrated that reflections can contribute in both environments a considerable part to the energy of a received sound, it is here assumed that reverberations...

  8. Classification of lung sounds using higher-order statistics: A divide-and-conquer approach.

    Science.gov (United States)

    Naves, Raphael; Barbosa, Bruno H G; Ferreira, Danton D

    2016-06-01

    Lung sound auscultation is one of the most commonly used methods to evaluate respiratory diseases. However, the effectiveness of this method depends on the physician's training. If the physician does not have the proper training, he/she will be unable to distinguish between normal and abnormal sounds generated by the human body. Thus, the aim of this study was to implement a pattern recognition system to classify lung sounds. We used a dataset composed of five types of lung sounds: normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. We used higher-order statistics (HOS) to extract features (second-, third- and fourth-order cumulants), Genetic Algorithms (GA) and Fisher's Discriminant Ratio (FDR) to reduce dimensionality, and k-Nearest Neighbors and Naive Bayes classifiers to recognize the lung sound events in a tree-based system. We used the cross-validation procedure to analyze the classifiers performance and the Tukey's Honestly Significant Difference criterion to compare the results. Our results showed that the Genetic Algorithms outperformed the Fisher's Discriminant Ratio for feature selection. Moreover, each lung class had a different signature pattern according to their cumulants showing that HOS is a promising feature extraction tool for lung sounds. Besides, the proposed divide-and-conquer approach can accurately classify different types of lung sounds. The best tree-based classifier obtained a classification accuracy of 98.1% on the training data and 94.6% on the validation data. The proposed approach achieved good results even using only one feature extraction tool (higher-order statistics). Additionally, the implementation of the proposed classifier in an embedded system is feasible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
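
    To make the feature-extraction idea concrete, the sketch below (not the authors' pipeline) describes each sound segment by statistics related to its second-, third-, and fourth-order cumulants and feeds them to a k-nearest-neighbours classifier with cross-validation. The synthetic "crackle-like" versus "normal" segments, and the omission of the genetic-algorithm selection and the tree-based stage, are simplifying assumptions.

    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    def make_segment(crackly, n=4000):
        x = rng.normal(size=n)
        if crackly:                                # sparse spikes -> non-Gaussian cumulants
            spikes = rng.random(n) < 0.01
            x[spikes] += rng.normal(scale=8.0, size=int(spikes.sum()))
        return x

    def hos_features(x):
        # variance, skewness and excess kurtosis relate to 2nd-, 3rd- and 4th-order cumulants
        return [np.var(x), skew(x), kurtosis(x)]

    X = np.array([hos_features(make_segment(crackly=(i % 2 == 0))) for i in range(200)])
    y = np.array([i % 2 for i in range(200)])

    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
    print("5-fold cross-validated accuracy:", round(float(scores.mean()), 3))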

  9. A quantitative method for determining spatial discriminative capacity

    Directory of Open Access Journals (Sweden)

    Dennis Robert G

    2008-03-01

    Background The traditional two-point discrimination (TPD) test, a widely used tactile spatial acuity measure, has been criticized as being imprecise because it is based on subjective criteria and involves a number of non-spatial cues. The results of a recent study showed that as two stimuli were delivered simultaneously, vibrotactile amplitude discrimination became worse when the two stimuli were positioned relatively close together and was significantly degraded when the probes were within a subject's two-point limen. The impairment of amplitude discrimination with decreasing inter-probe distance suggested that the metric of amplitude discrimination could possibly provide a means of objective and quantitative measurement of spatial discrimination capacity. Methods A two-alternative forced-choice (2AFC) tracking procedure was used to assess a subject's ability to discriminate the amplitude difference between two stimuli positioned at near-adjacent skin sites. Two 25 Hz flutter stimuli, identical except for a constant difference in amplitude, were delivered simultaneously to the hand dorsum. The stimuli were initially spaced 30 mm apart, and the inter-stimulus distance was modified on a trial-by-trial basis based on the subject's performance of discriminating the stimulus with higher intensity. The experiment was repeated via sequential, rather than simultaneous, delivery of the same vibrotactile stimuli. Results Results obtained from this study showed that the performance of the amplitude discrimination task was significantly degraded when the stimuli were delivered simultaneously and were near a subject's two-point limen. In contrast, subjects were able to correctly discriminate between the amplitudes of the two stimuli when they were sequentially delivered at all inter-probe distances (including those within the two-point limen), and improved when an adapting stimulus was delivered prior to simultaneously delivered stimuli. Conclusion
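
    The tracking logic can be sketched as a simple adaptive staircase: the inter-probe distance shrinks after two consecutive correct amplitude judgements and grows after an error, converging near the distance at which performance breaks down. The simulated observer and all step sizes below are assumptions for illustration, not the published protocol.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulated_observer(distance_mm):
        """True if the observer picks the higher-amplitude probe on this trial."""
        p_correct = 0.5 + 0.48 / (1.0 + np.exp(-(distance_mm - 8.0)))   # accuracy drops as probes get closer
        return rng.random() < p_correct

    distance, step = 30.0, 2.0              # probes start 30 mm apart; 2-mm steps
    consecutive_correct, track = 0, []
    for _ in range(80):
        track.append(distance)
        if simulated_observer(distance):
            consecutive_correct += 1
            if consecutive_correct == 2:    # 2-down/1-up rule tracks ~71% correct
                distance = max(1.0, distance - step)
                consecutive_correct = 0
        else:
            distance = min(30.0, distance + step)
            consecutive_correct = 0

    print("estimated limiting inter-probe distance:", round(np.mean(track[-20:]), 1), "mm")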

  10. Validation of the Mnemonic Similarity Task – Context Version

    Directory of Open Access Journals (Sweden)

    Giulia A. Aldi

    2018-02-01

    Objective: Pattern separation (PS) is the ability to represent similar experiences as separate, non-overlapping representations. It is usually assessed via the Mnemonic Similarity Task – Object Version (MST-O), which, however, assesses PS performance without taking behavioral context discrimination into account, since it is based on pictures of everyday simple objects on a white background. We here present a validation study for a new task, the Mnemonic Similarity Task – Context Version (MST-C), which is designed to measure PS while taking behavioral context discrimination into account by using real-life context photographs. Methods: Fifty healthy subjects underwent the two MST tasks to assess convergent evidence. Instruments assessing memory and attention were also administered to study discriminant evidence. The test-retest reliability of MST-C was analyzed. Results: Weak evidence supports convergent validity between the MST-C task and the MST-O as measures of PS (rs = 0.464; p < 0.01); PS performance assessed via the MST-C did not correlate with memory or attention; a moderate test-retest reliability was found (rs = 0.595; p < 0.01). Conclusion: The MST-C seems useful for assessing PS performance conceptualized as the ability to discriminate complex and realistic spatial contexts. Future studies are welcome to evaluate the validity of the MST-C task as a measure of PS in clinical populations.

  11. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  12. Purposeful Goal-Directed Movements Give Rise to Higher Tactile Discrimination Performance

    Directory of Open Access Journals (Sweden)

    Georgiana Juravle

    2011-10-01

    Tactile perception is inhibited during goal-directed reaching movements (sensory suppression). Here, participants performed simple reaching or exploratory movements (where contact with the table surface was maintained). We measured tactile discrimination thresholds for vibratory stimuli delivered to participants' wrists while executing the movement, and while at rest. Moreover, we measured discrimination performance (in a same vs. different task) for the materials covering the table surface, during the execution of the different movements. The threshold and discrimination tasks could be performed either singly or together, both under active movement and passive conditions (i.e., no movement required, but with tactile stimulation). Thresholds measured at rest were significantly lower than thresholds measured during both active movements and passive touches. This provides a clear indication of sensory suppression during movement execution. Moreover, the discrimination data revealed main effects of task (single vs. dual), movement execution type (passive vs. active), and movement type (reach vs. exploration): discrimination performance was significantly higher under conditions of single-tasking, active movements, as well as exploratory movements. Therefore, active movement of the hand with the purpose of gaining tactual information about the surface of the table gives rise to enhanced performance, thus suggesting that we feel more when we need to; it would appear that tactual information is prioritized when relevant for the movement being executed.

  13. What is a melody? On the relationship between pitch and brightness of timbre.

    Science.gov (United States)

    Cousineau, Marion; Carcagno, Samuele; Demany, Laurent; Pressnitzer, Daniel

    2013-01-01

    Previous studies showed that the perceptual processing of sound sequences is more efficient when the sounds vary in pitch than when they vary in loudness. We show here that sequences of sounds varying in brightness of timbre are processed with the same efficiency as pitch sequences. The sounds used consisted of two simultaneous pure tones one octave apart, and the listeners' task was to make same/different judgments on pairs of sequences varying in length (one, two, or four sounds). In one condition, brightness of timbre was varied within the sequences by changing the relative level of the two pure tones. In other conditions, pitch was varied by changing fundamental frequency, or loudness was varied by changing the overall level. In all conditions, only two possible sounds could be used in a given sequence, and these two sounds were equally discriminable. When sequence length increased from one to four, discrimination performance decreased substantially for loudness sequences, but to a smaller extent for brightness sequences and pitch sequences. In the latter two conditions, sequence length had a similar effect on performance. These results suggest that the processes dedicated to pitch and brightness analysis, when probed with a sequence-discrimination task, share unexpected similarities.

  14. The role of task interference and exposure duration in judging noise annoyance

    Science.gov (United States)

    Zimmer, Karin; Ghani, Jody; Ellermeier, Wolfgang

    2008-04-01

    To determine whether the amount of performance disruption by a noise has an effect on the annoyance that noise evokes, a laboratory situation was created in which the participants rated a number of sounds before, after, and while performing a cognitively demanding memory task. The task consisted of memorizing, and later reproducing, a visually presented sequence of digits while being exposed to irrelevant sound chosen to produce different degrees of disruption. In two experiments, participants assessed these background sounds (frequency-modulated tones, broadband noise and speech) on a rating scale consisting of thirteen categories ranging from 'not annoying at all' to 'extremely annoying.' The judgments were collected immediately before, after, and concomitant to, the memory task. The results of the first experiment (N=24) showed that the annoyance assessments were indeed altered by the experience of disruption, most strongly during, and to a lesser extent after task completion, whereas ratings of the non-disruptive sounds remained largely unaffected. In the second experiment (N=25), participants were exposed to the same sounds, but for longer intervals at a time: 10 min as opposed to 14 s in the first experiment. The longer exposure resulted in increased annoyance in all noise conditions, but did not alter the differential effect of disruption on annoyance, which was replicated. The results of these laboratory experiments support the notion that annoyance cannot be conceived of as a purely perceptual sound property; rather, it is influenced by the degree of interference with the task at hand.

  15. Don't demotivate, discriminate

    NARCIS (Netherlands)

    J.J.A. Kamphorst (Jurjen); O.H. Swank (Otto)

    2013-01-01

    This paper offers a new theory of discrimination in the workplace. We consider a manager who has to assign two tasks to two employees. The manager has superior information about the employees' abilities. We show that besides an equilibrium where the manager does not

  16. Adults with dyslexia demonstrate large effects of crowding and detrimental effects of distractors in a visual tilt discrimination task.

    Directory of Open Access Journals (Sweden)

    Rizan Cassim

    Previous research has shown that adults with dyslexia (AwD) are disproportionately impacted by close spacing of stimuli and increased numbers of distractors in a visual search task compared to controls [1]. Using an orientation discrimination task, the present study extended these findings to show that even in conditions where target search was not required: (i) AwD had detrimental effects of both crowding and increased numbers of distractors; (ii) AwD had more pronounced difficulty with distractor exclusion in the left visual field; and (iii) measures of crowding and distractor exclusion correlated significantly with literacy measures. Furthermore, such difficulties were not accounted for by the presence of covarying symptoms of ADHD in the participant groups. These findings provide further evidence to suggest that the ability to exclude distracting stimuli likely contributes to the reported visual attention difficulties in AwD and to the aetiology of literacy difficulties. The pattern of results is consistent with weaker and asymmetric attention in AwD.

  17. A neurally inspired musical instrument classification system based upon the sound onset.

    Science.gov (United States)

    Newton, Michael J; Smith, Leslie S

    2012-06-01

    Physiological evidence suggests that sound onset detection in the auditory system may be performed by specialized neurons as early as the cochlear nucleus. Psychoacoustic evidence shows that the sound onset can be important for the recognition of musical sounds. Here the sound onset is used in isolation to form tone descriptors for a musical instrument classification task. The task involves 2085 isolated musical tones from the McGill dataset across five instrument categories. A neurally inspired tone descriptor is created using a model of the auditory system's response to sound onset. A gammatone filterbank and spiking onset detectors, built from dynamic synapses and leaky integrate-and-fire neurons, create parallel spike trains that emphasize the sound onset. These are coded as a descriptor called the onset fingerprint. Classification uses a time-domain neural network, the echo state network. Reference strategies, based upon mel-frequency cepstral coefficients, evaluated either over the whole tone or only during the sound onset, provide context to the method. Classification success rates for the neurally-inspired method are around 75%. The cepstral methods perform between 73% and 76%. Further testing with tones from the Iowa MIS collection shows that the neurally inspired method is considerably more robust when tested with data from an unrelated dataset.
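
    A stripped-down version of such an onset-emphasising front end can be sketched as follows: a frame-based envelope, its half-wave-rectified rate of change as the drive, and a discrete-time leaky integrate-and-fire unit that spikes near the sound onset. The gammatone filterbank, dynamic synapses, and echo state network of the published system are omitted, and all constants are illustrative assumptions.

    import numpy as np

    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    # A 440-Hz tone starting at 0.1 s and then decaying, standing in for one filter channel.
    tone = np.sin(2 * np.pi * 440 * t) * (t > 0.1) * np.exp(-6 * np.clip(t - 0.1, 0, None))

    frame = int(0.005 * fs)                                 # 5-ms analysis frames
    env = np.sqrt(np.mean(tone[: tone.size // frame * frame]
                          .reshape(-1, frame) ** 2, axis=1))    # frame-RMS envelope
    drive = np.maximum(np.diff(env, prepend=0.0), 0.0)          # onset-sensitive input

    # Discrete-time leaky integrate-and-fire unit: leak, accumulate, spike, reset.
    v, leak, threshold = 0.0, 0.5, 0.15
    spike_frames = []
    for i, inp in enumerate(drive):
        v = leak * v + inp
        if v >= threshold:
            spike_frames.append(i)
            v = 0.0

    print("onset spike times (s):", [round(i * frame / fs, 3) for i in spike_frames])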

  18. Ethnical discrimination in Europe: Field evidence from the finance industry

    Science.gov (United States)

    Stefan, Matthias; Holzmeister, Felix; Müllauer, Alexander

    2018-01-01

    The integration of ethnical minorities has been a hotly discussed topic in the political, societal, and economic debate. Persistent discrimination of ethnical minorities can hinder successful integration. Given that unequal access to investment and financing opportunities can cause social and economic disparities due to inferior economic prospects, we conducted a field experiment on ethnical discrimination in the finance sector with 1,218 banks in seven European countries. We contacted banks via e-mail, either with domestic or Arabic sounding names, asking for contact details only. We find pronounced discrimination in terms of a substantially lower response rate to e-mails from Arabic senders. Remarkably, the observed discrimination effect is robust for loan- and investment-related requests, across rural and urban locations of banks, and across countries. PMID:29377964

  19. Importance of the left auditory areas in chord discrimination in music experts as demonstrated by MEG.

    Science.gov (United States)

    Tervaniemi, Mari; Sannemann, Christian; Noyranen, Maiju; Salonen, Johanna; Pihko, Elina

    2011-08-01

    The brain basis behind musical competence in its various forms is not yet known. To determine the pattern of hemispheric lateralization during sound-change discrimination, we recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) responses in professional musicians, musical participants (with high scores in the musicality tests but without professional training in music) and non-musicians. While watching a silenced video, they were presented with short sounds with frequency and duration deviants and C major chords with C minor chords as deviants. MMNm to chord deviants was stronger in both musicians and musical participants than in non-musicians, particularly in their left hemisphere. No group differences were obtained in the MMNm strength in the right hemisphere in any of the conditions or in the left hemisphere in the case of frequency or duration deviants. Thus, in addition to professional training in music, musical aptitude (combined with lower-level musical training) is also reflected in brain functioning related to sound discrimination. The present magnetoencephalographic evidence therefore indicates that the sound discrimination abilities may be differentially distributed in the brain in musically competent and naïve participants, especially in a musical context established by chord stimuli: the higher forms of musical competence engage both auditory cortices in an integrative manner. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  20. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    Directory of Open Access Journals (Sweden)

    Mari Tervaniemi

    2014-07-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related MMN and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  1. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.

    Science.gov (United States)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.

  2. Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding

    DEFF Research Database (Denmark)

    Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira

    2014-01-01

    Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory......-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish Folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In Folk musicians, the MMN was enlarged for mistuned sounds when compared...... with non-musicians. This is taken to reflect their familiarity with pitch information which is in key position in Finnish folk music when compared with e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies...

  3. Attentional Capture by Deviant Sounds: A Noncontingent Form of Auditory Distraction?

    Science.gov (United States)

    Vachon, François; Labonté, Katherine; Marsh, John E.

    2017-01-01

    The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…

  4. Dynamic Assessment of Phonological Awareness for Children with Speech Sound Disorders

    Science.gov (United States)

    Gillam, Sandra Laing; Ford, Mikenzi Bentley

    2012-01-01

    The current study was designed to examine the relationships between performance on a nonverbal phoneme deletion task administered in a dynamic assessment format with performance on measures of phoneme deletion, word-level reading, and speech sound production that required verbal responses for school-age children with speech sound disorders (SSDs).…

  5. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data which is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances. PMID:27627768

  6. Semi-Supervised Active Learning for Sound Classification in Hybrid Learning Environments.

    Science.gov (United States)

    Han, Wenjing; Coutinho, Eduardo; Ruan, Huabin; Li, Haifeng; Schuller, Björn; Yu, Xiaojie; Zhu, Xuan

    2016-01-01

    Coping with scarcity of labeled data is a common problem in sound classification tasks. Approaches for classifying sounds are commonly based on supervised learning algorithms, which require labeled data which is often scarce and leads to models that do not generalize well. In this paper, we make an efficient combination of confidence-based Active Learning and Self-Training with the aim of minimizing the need for human annotation for sound classification model training. The proposed method pre-processes the instances that are ready for labeling by calculating their classifier confidence scores, and then delivers the candidates with lower scores to human annotators, and those with high scores are automatically labeled by the machine. We demonstrate the feasibility and efficacy of this method in two practical scenarios: pool-based and stream-based processing. Extensive experimental results indicate that our approach requires significantly less labeled instances to reach the same performance in both scenarios compared to Passive Learning, Active Learning and Self-Training. A reduction of 52.2% in human labeled instances is achieved in both of the pool-based and stream-based scenarios on a sound classification task considering 16,930 sound instances.
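
    The confidence-splitting loop described above can be sketched as follows (this is not the authors' implementation; the toy data, the thresholds, and the oracle standing in for the human annotator are assumptions): train on the labelled pool, score the unlabelled pool, auto-label the high-confidence items, and send the lowest-confidence items to the annotator.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    labelled = rng.choice(len(X), size=50, replace=False)
    unlabelled = np.setdiff1d(np.arange(len(X)), labelled)
    X_lab, y_lab = X[labelled], y[labelled]

    for round_ in range(5):
        if unlabelled.size == 0:
            break
        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        conf = clf.predict_proba(X[unlabelled]).max(axis=1)

        high_mask = conf > 0.95                    # machine-labelled (self-training)
        high = unlabelled[high_mask]
        low = unlabelled[~high_mask][np.argsort(conf[~high_mask])[:20]]   # queried from the oracle
        auto_labels = clf.predict(X[high]) if high.size else np.empty(0, dtype=y.dtype)

        new_idx = np.concatenate([high, low])
        new_lab = np.concatenate([auto_labels, y[low]])   # oracle returns the true labels
        X_lab = np.concatenate([X_lab, X[new_idx]])
        y_lab = np.concatenate([y_lab, new_lab])
        unlabelled = np.setdiff1d(unlabelled, new_idx)
        print(f"round {round_}: labelled pool size = {len(y_lab)}")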

  7. Effect of gap detection threshold on consistency of speech in children with speech sound disorder.

    Science.gov (United States)

    Sayyahi, Fateme; Soleymani, Zahra; Akbari, Mohammad; Bijankhan, Mahmood; Dolatshahi, Behrooz

    2017-02-01

    The present study examined the relationship between gap detection threshold and speech error consistency in children with speech sound disorder. The participants were children five to six years of age who were categorized into three groups of typical speech, consistent speech disorder (CSD), and inconsistent speech disorder (ISD). The phonetic gap detection threshold test, a validated test comprising six syllables with inter-stimulus intervals between 20 and 300 ms, was used for this study. The participants were asked to listen to the recorded stimuli three times and indicate whether they heard one or two sounds. There was no significant difference between the typical and CSD groups (p=0.55), but there were significant differences in performance between the ISD and CSD groups and the ISD and typical groups (p=0.00). The ISD group discriminated between speech sounds at a higher threshold. Children with inconsistent speech errors could not distinguish speech sounds during time-limited phonetic discrimination. It is suggested that inconsistency in speech is a representation of inconsistency in auditory perception, which is caused by a high gap detection threshold. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Measuring Multi-tasking Ability

    Science.gov (United States)

    2003-07-01

    sociological factors pertaining to social structures and values. For example, telecommuting, job-sharing, and families' attempts to decrease the amount...achievement strivings (actively working hard to achieve goals), and polychronicity (the preference for working on more than one task at a time) with MT...Joslyn note (2000), this description of ADM makes it sound exceedingly easy. However, nothing could be farther from the truth. The task qualifies as an MT

  9. Towards the standardisation of lung sound nomenclature

    NARCIS (Netherlands)

    Pasterkamp, Hans; Brand, Paul L. P.; Everard, Mark; Garcia-Marcos, Luis; Melbye, Hasse; Priftis, Kostas N.

    Auscultation of the lung remains an essential part of physical examination even though its limitations, particularly with regard to communicating subjective findings, are well recognised. The European Respiratory Society (ERS) Task Force on Respiratory Sounds was established to build a reference

  10. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds' Search Performance in Spatial Rotation Tasks.

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds' success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  11. The effects of visual discriminability and rotation angle on 30-month-olds’ search performance in spatial rotation tasks

    Directory of Open Access Journals (Sweden)

    Mirjam Ebersbach

    2016-10-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds' success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children.

  12. The Effects of Visual Discriminability and Rotation Angle on 30-Month-Olds’ Search Performance in Spatial Rotation Tasks

    Science.gov (United States)

    Ebersbach, Mirjam; Nawroth, Christian

    2016-01-01

    Tracking objects that are hidden and then moved is a crucial ability related to object permanence, which develops across several stages in early childhood. In spatial rotation tasks, children observe a target object that is hidden in one of two or more containers before the containers are rotated around a fixed axis. Usually, 30-month-olds fail to find the hidden object after it was rotated by 180°. We examined whether visual discriminability of the containers improves 30-month-olds’ success in this task and whether children perform better after 90° than after 180° rotations. Two potential hiding containers with same or different colors were placed on a board that was rotated by 90° or 180° in a within-subjects design. Children (N = 29) performed above chance level in all four conditions. Their overall success in finding the object did not improve by differently colored containers. However, different colors prevented children from showing an inhibition bias in 90° rotations, that is, choosing the empty container more often when it was located close to them than when it was farther away: This bias emerged in the same colors condition but not in the different colors condition. Results are discussed in view of particular challenges that might facilitate or deteriorate spatial rotation tasks for young children. PMID:27812346

  13. Enhanced Memory Consolidation Via Automatic Sound Stimulation During Non-REM Sleep.

    Science.gov (United States)

    Leminen, Miika M; Virkkala, Jussi; Saure, Emma; Paajanen, Teemu; Zee, Phyllis C; Santostasi, Giovanni; Hublin, Christer; Müller, Kiti; Porkka-Heiskanen, Tarja; Huotilainen, Minna; Paunio, Tiina

    2017-03-01

    Slow-wave sleep (SWS) slow waves and sleep spindle activity have been shown to be crucial for memory consolidation. Recently, memory consolidation has been causally facilitated in human participants via auditory stimuli phase-locked to SWS slow waves. Here, we aimed to develop a new acoustic stimulus protocol to facilitate learning and to validate it using different memory tasks. Most importantly, the stimulation setup was automated to be applicable for ambulatory home use. Fifteen healthy participants slept 3 nights in the laboratory. Learning was tested with 4 memory tasks (word pairs, serial finger tapping, picture recognition, and face-name association). Additional questionnaires addressed subjective sleep quality and overnight changes in mood. During the stimulus night, auditory stimuli were adjusted and targeted by an unsupervised algorithm to be phase-locked to the negative peak of slow waves in SWS. During the control night no sounds were presented. Results showed that the sound stimulation increased both slow wave (p = .002) and sleep spindle activity. When memory performance was compared between stimulus and control nights, we found a significant effect in the word-pair task but not in the other memory tasks. The stimulation did not affect sleep structure or subjective sleep quality. We showed that the memory effect of the SWS-targeted individually triggered single-sound stimulation is specific to verbal associative memory. Moreover, the ambulatory and automated sound stimulus setup was promising and allows for a broad range of potential follow-up studies in the future. © Sleep Research Society 2017. Published by Oxford University Press [on behalf of the Sleep Research Society].
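
    The targeting step described above can be sketched offline in a few lines: band-pass the EEG into the slow-wave range, find negative peaks exceeding an amplitude criterion, and treat those samples as candidate moments for the sound trigger. The synthetic EEG, filter band, and the 40-µV criterion are assumptions, not the parameters of the published (online) algorithm.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, find_peaks

    fs = 250                                              # EEG sampling rate (Hz)
    t = np.arange(0, 30, 1 / fs)
    eeg = 80 * np.sin(2 * np.pi * 0.8 * t) + 15 * np.random.randn(t.size)   # synthetic trace, µV

    sos = butter(2, [0.5, 4.0], btype="bandpass", fs=fs, output="sos")
    slow = sosfiltfilt(sos, eeg)                          # slow-wave band

    # Negative peaks = positive peaks of the inverted trace exceeding a 40-µV depth,
    # kept at least 0.8 s apart; each would be a candidate time for the sound trigger.
    peaks, _ = find_peaks(-slow, height=40.0, distance=int(0.8 * fs))
    print("first candidate trigger times (s):", np.round(peaks[:5] / fs, 2))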

  14. Discrimination of fundamental frequency of synthesized vowel sounds in a noise background

    NARCIS (Netherlands)

    Scheffers, M.T.M.

    1984-01-01

    An experiment was carried out, investigating the relationship between the just noticeable difference of fundamental frequency (jndf0) of three stationary synthesized vowel sounds in noise and the signal-to-noise ratio. To this end the S/N ratios were measured at which listeners could just...

  15. Numerosity but not texture-density discrimination correlates with math ability in children.

    Science.gov (United States)

    Anobile, Giovanni; Castaldi, Elisa; Turi, Marco; Tinelli, Francesca; Burr, David C

    2016-08-01

    Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-year-old) children in 3 tasks: numerosity of patterns of relatively sparse, segregatable items (24 dots); numerosity of very dense textured patterns (250 dots); and discrimination of direction of motion. Thresholds in all tasks improved with age, but at different rates, implying the action of different mechanisms: In particular, in young children, thresholds were lower for sparse than textured patterns (the opposite of adults), suggesting earlier maturation of numerosity mechanisms. Importantly, numerosity thresholds for sparse stimuli correlated strongly with math skills, even after controlling for the influence of age, gender and nonverbal IQ. However, neither motion-direction discrimination nor numerosity discrimination of texture patterns showed a significant correlation with math abilities. These results provide further evidence that numerosity and texture-density are perceived by independent neural mechanisms, which develop at different rates; and importantly, only numerosity mechanisms are related to math. As developmental dyscalculia is characterized by a profound deficit in discriminating numerosity, it is fundamental to understand the mechanism behind the discrimination. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
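    The sensitivity measure d' reported above comes from standard signal detection theory. As a generic illustration only (not the authors' analysis code), the sketch below computes d' from hit and false-alarm counts, using one common correction to keep extreme rates finite.

```python
# Minimal sketch: computing perceptual sensitivity d' from hit and
# false-alarm rates, as in standard signal detection theory. Generic
# illustration, not the authors' analysis code.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' = z(hit rate) - z(false-alarm rate).

    A small log-linear correction keeps rates away from 0 and 1 so the
    z-transform stays finite (one common convention among several).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 45 hits / 5 misses, 12 false alarms / 38 correct rejections.
print(round(d_prime(45, 5, 12, 38), 2))
```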

  17. Dynamic functional brain networks involved in simple visual discrimination learning.

    Science.gov (United States)

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was engaged throughout the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Improved discrimination of visual stimuli following repetitive transcranial magnetic stimulation.

    Directory of Open Access Journals (Sweden)

    Michael L Waterston

    Full Text Available BACKGROUND: Repetitive transcranial magnetic stimulation (rTMS at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a "virtual lesion" in stimulated brain regions, with correspondingly diminished behavioral performance. METHODOLOGY/PRINCIPAL FINDINGS: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. CONCLUSIONS/SIGNIFICANCE: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception.

  19. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a small bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  20. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.

  1. Avatar Weight Estimates based on Footstep Sounds in Three Presentation Formats

    DEFF Research Database (Denmark)

    Sikström, Erik; Götzen, Amalia De; Serafin, Stefania

    2015-01-01

    When evaluating a sound design for a virtual environment, the context in which it is to be implemented may influence how it is perceived. In this paper we perform an experiment comparing three presentation formats (audio only, video with audio and an interactive immersive VR format......) and their influences on a sound design evaluation task concerning footstep sounds. The evaluation involved estimating the perceived weight of a virtual avatar seen from a first person perspective, as well as the suitability of the sound effect relative to the context. The results show significant differences for three...

  2. Attention-related modulation of auditory brainstem responses during contralateral noise exposure.

    Science.gov (United States)

    Ikeda, Kazunari; Sekiguchi, Takahiro; Hayashi, Akiko

    2008-10-29

    As determinants facilitating attention-related modulation of the auditory brainstem response (ABR), two experimental factors were examined: (i) auditory discrimination; and (ii) contralateral masking intensity. Tone pips at 80 dB sound pressure level were presented to the left ear via either single-tone exposures or oddball exposures, whereas white noise was delivered continuously to the right ear at variable intensities (none to 80 dB sound pressure level). Participants each performed two tasks during stimulation, either reading a book (ignoring task) or detecting target tones (attentive task). Task-related modulation within the ABR range was found only during oddball exposures at contralateral masking intensities greater than or equal to 60 dB. Attention-related modulation of ABR can thus be detected reliably during auditory discrimination under contralateral masking of sufficient intensity.

  3. Deficits in discrimination after experimental frontal brain injury are mediated by motivation and can be improved by nicotinamide administration.

    Science.gov (United States)

    Vonder Haar, Cole; Maass, William R; Jacobs, Eric A; Hoane, Michael R

    2014-10-15

    One of the largest challenges in experimental neurotrauma work is the development of models relevant to the human condition. This includes both creating similar pathophysiology and generating relevant behavioral deficits. Recent studies have shown that there is a large potential for the use of discrimination tasks in rats to detect injury-induced deficits. The literature on discrimination and TBI is still limited, however. The current study investigated motivational and motor factors that could potentially contribute to deficits in discrimination. In addition, the efficacy of a neuroprotective agent, nicotinamide, was assessed. Rats were trained on a discrimination task and a motivation task, given a bilateral frontal controlled cortical impact TBI (+3.0 AP, 0.0 ML from bregma), and then reassessed. They were also assessed on motor ability and Morris water maze (MWM) performance. Experiment 1 showed that TBI resulted in large deficits in discrimination and motivation. No deficits were observed on gross motor measures; however, the vehicle group showed impairments in fine motor control. Both injured groups were impaired on the reference memory MWM, but only nicotinamide-treated rats were impaired on the working memory MWM. Nicotinamide administration improved performance on discrimination and motivation measures. Experiment 2 evaluated retraining on the discrimination task and suggested that motivation may be a large factor underlying discrimination deficits. Retrained rats improved considerably on the discrimination task. The tasks evaluated in this study demonstrate robust deficits and may improve the detection of pharmaceutical effects by being very sensitive to the pervasive cognitive deficits that occur after frontal TBI.

  4. Learning for pitch and melody discrimination in congenital amusia.

    Science.gov (United States)

    Whiteford, Kelly L; Oxenham, Andrew J

    2018-03-23

    Congenital amusia is currently thought to be a life-long neurogenetic disorder in music perception, impervious to training in pitch or melody discrimination. This study provides an explicit test of whether amusic deficits can be reduced with training. Twenty amusics and 20 matched controls participated in four sessions of psychophysical training involving either pure-tone (500 Hz) pitch discrimination or a control task of lateralization (interaural level differences for bandpass white noise). Pure-tone pitch discrimination at low, medium, and high frequencies (500, 2000, and 8000 Hz) was measured before and after training (pretest and posttest) to determine the specificity of learning. Melody discrimination was also assessed before and after training using the full Montreal Battery of Evaluation of Amusia, the most widely used standardized test to diagnose amusia. Amusics performed more poorly than controls in pitch but not localization discrimination, but both groups improved with practice on the trained stimuli. Learning was broad, occurring across all three frequencies and melody discrimination for all groups, including those who trained on the non-pitch control task. Following training, 11 of 20 amusics no longer met the global diagnostic criteria for amusia. A separate group of untrained controls (n = 20), who also completed melody discrimination and pretest, improved by an equal amount as trained controls on all measures, suggesting that the bulk of learning for the control group occurred very rapidly from the pretest. Thirty-one trained participants (13 amusics) returned one year later to assess long-term maintenance of pitch and melody discrimination. On average, there was no change in performance between posttest and one-year follow-up, demonstrating that improvements on pitch- and melody-related tasks in amusics and controls can be maintained. The findings indicate that amusia is not always a life-long deficit when using the current standard

  5. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    Science.gov (United States)

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  6. Hemispheric processing of vocal emblem sounds.

    Science.gov (United States)

    Neumann-Werth, Yael; Levy, Erika S; Obler, Loraine K

    2013-01-01

    Vocal emblems, such as shh and brr, are speech sounds that have linguistic and nonlinguistic features; thus, it is unclear how they are processed in the brain. Five adult dextral individuals with left-brain damage and moderate-severe Wernicke's aphasia, five adult dextral individuals with right-brain damage, and five Controls participated in two tasks: (1) matching vocal emblems to photographs ('picture task') and (2) matching vocal emblems to verbal translations ('phrase task'). Cross-group statistical analyses on items on which the Controls performed at ceiling revealed lower accuracy by the group with left-brain damage (than by Controls) on both tasks, and lower accuracy by the group with right-brain damage (than by Controls) on the picture task. Additionally, the group with left-brain damage performed significantly less accurately than the group with right-brain damage on the phrase task only. Findings suggest that comprehension of vocal emblems recruits more left- than right-hemisphere processing.

  7. Dynamic, continuous multitasking training leads to task-specific improvements but does not transfer across action selection tasks

    Science.gov (United States)

    Bender, Angela D.; Filmer, Hannah L.; Naughtin, Claire K.; Dux, Paul E.

    2017-12-01

    The ability to perform multiple tasks concurrently is an ever-increasing requirement in our information-rich world. Despite this, multitasking typically compromises performance due to the processing limitations associated with cognitive control and decision-making. While intensive dual-task training is known to improve multitasking performance, only limited evidence suggests that training-related performance benefits can transfer to untrained tasks that share overlapping processes. In the real world, however, coordinating and selecting several responses within close temporal proximity will often occur in high-interference environments. Over the last decade, there have been notable reports that training on video action games that require dynamic multitasking in a demanding environment can lead to transfer effects on aspects of cognition such as attention and working memory. Here, we asked whether continuous and dynamic multitasking training extends benefits to tasks that are theoretically related to the trained tasks. To examine this issue, we asked a group of participants to train on a combined continuous visuomotor tracking task and a perceptual discrimination task for six sessions, while an active control group practiced the component tasks in isolation. A battery of tests measuring response selection, response inhibition, and spatial attention was administered before and immediately after training to investigate transfer. Multitasking training resulted in substantial, task-specific gains in dual-task ability, but there was no evidence that these benefits generalized to other action control tasks. The findings suggest that training on a combined visuomotor tracking and discrimination task results in task-specific benefits but provides no additional value for untrained action selection tasks.

  8. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the world around us, and the current findings that affective sounds can influence visual attention provide evidence that we make use of affective information during perceptual processing.

  9. Social and nonsocial category discriminations in a chimpanzee (Pan troglodytes) and American black bears (Ursus americanus).

    Science.gov (United States)

    Vonk, Jennifer; Johnson-Ulrich, Zoe

    2014-09-01

    One captive adult chimpanzee and 3 adult American black bears were presented with a series of natural category discrimination tasks on a touch-screen computer. This is the first explicit comparison of bear and primate abilities using identical tasks, and the first test of a social concept in a carnivore. The discriminations involved a social relationship category (mother/offspring) and a nonsocial category involving food items. The social category discrimination could be made using knowledge of the overarching mother/offspring concept, whereas the nonsocial category discriminations could be made only by using perceptual rules, such as "choose images that show larger and smaller items of the same type." The bears failed to show above-chance transfer on either the social or nonsocial discriminations, indicating that they did not use either the perceptual rule or knowledge of the overarching concept of mother/offspring to guide their choices in these tasks. However, at least 1 bear remembered previously reinforced stimuli when these stimuli were later recombined. The chimpanzee showed transfer on a control task and did not consistently apply a perceptual rule to solve the nonsocial task, so it is possible that he eventually acquired the social concept. Further comparisons between species on identical tasks assessing social knowledge will help illuminate the selective pressures responsible for a range of social cognitive skills.

  10. Tinnitus (Phantom Sound): Risk coming for future

    Directory of Open Access Journals (Sweden)

    Suresh Rewar

    2015-01-01

    Full Text Available The word 'tinnitus' comes from the Latin word tinnire, meaning “to ring” or “a ringing.” Tinnitus is the perception of sound in the absence of any corresponding external sound. It can take the form of continuous buzzing, hissing, or ringing, or a combination of these or other characteristics, and affects 10% to 25% of the adult population. Tinnitus is classified into objective and subjective categories. Subjective tinnitus consists of meaningless sounds that are not associated with a physical sound; only the person who has the tinnitus can hear it. Objective tinnitus is the result of a sound that can be heard by the physician. Tinnitus is not a disease in itself but a common symptom, and because it involves the perception of sound or sounds, it is commonly associated with the hearing system. In fact, various parts of the hearing system, including the inner ear, are often responsible for this symptom. Tinnitus can severely distress patients, leading to sleep disturbances, concentration problems, fatigue, depression, anxiety disorders, and sometimes even suicide. The evaluation of tinnitus always begins with a thorough history and physical examination, with further testing performed when indicated. Diagnostic testing should include audiography and speech discrimination testing; when indicated, computed tomography angiography or magnetic resonance angiography should be performed. All patients with tinnitus can benefit from patient education and preventive measures, and oftentimes the physician's reassurance and assistance with the psychologic aftereffects of tinnitus can be the therapy most valuable to the patient. There are no specific medications for the treatment of tinnitus; sedatives and some other medications may prove helpful in the early stages. The ultimate goal of neuro-imaging is to identify subtypes of tinnitus in order to better inform treatment strategies.

  11. [Music therapy in adults with cochlear implants: Effects on music perception and subjective sound quality].

    Science.gov (United States)

    Hutter, E; Grapp, M; Argstatter, H

    2016-12-01

    People with severe hearing impairments and deafness can achieve good speech comprehension using a cochlear implant (CI), although music perception often remains impaired. A novel concept of music therapy for adults with CI was developed and evaluated in this study. This study included 30 adults with a unilateral CI following postlingual deafness. The subjective sound quality of the CI was rated using the hearing implant sound quality index (HISQUI), and musical tests for pitch discrimination, melody recognition and timbre identification were applied. As a control, 55 normally hearing persons also completed the musical tests. In comparison to normally hearing subjects, CI users showed deficits in the perception of pitch, melody and timbre. Specific effects of therapy were observed for the subjective sound quality of the CI, for pitch discrimination in the high and low pitch ranges and for timbre identification, while general learning effects were found in melody recognition. Music perception shows deficits in CI users compared to normally hearing persons. After individual music therapy in the rehabilitation process, improvements in this difficult area could be achieved.

  12. Spectral envelope sensitivity of musical instrument sounds.

    Science.gov (United States)

    Gunawan, David; Sen, D

    2008-01-01

    It is well known that the spectral envelope is a perceptually salient attribute in musical instrument timbre perception. While a number of studies have explored discrimination thresholds for changes to the spectral envelope, the question of how sensitivity varies as a function of center frequency and bandwidth for musical instruments has yet to be addressed. In this paper a two-alternative forced-choice experiment was conducted to observe perceptual sensitivity to modifications made on trumpet, clarinet and viola sounds. The experiment involved attenuating 14 frequency bands for each instrument in order to determine discrimination thresholds as a function of center frequency and bandwidth. The results indicate that perceptual sensitivity is governed by the first few harmonics and sensitivity does not improve when extending the bandwidth any higher. However, sensitivity was found to decrease if changes were made only to the higher frequencies and continued to decrease as the distorted bandwidth was widened. The results are analyzed and discussed with respect to two other spectral envelope discrimination studies in the literature as well as what is predicted from a psychoacoustic model.
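    As a generic illustration of the kind of band-attenuation manipulation described above (not the authors' stimulus-generation code), the sketch below attenuates one frequency band of a synthetic harmonic tone and resynthesizes it; the band edges and attenuation depth are arbitrary placeholders.

```python
# Hedged sketch of a band-attenuation manipulation: attenuate one frequency
# band of a (here, synthetic) instrument-like tone and resynthesize it.
# Band edges and attenuation depth are placeholders, not the study's values.
import numpy as np

def attenuate_band(signal, fs, f_lo, f_hi, attenuation_db):
    """Scale all spectral components between f_lo and f_hi by -attenuation_db."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    gain = 10.0 ** (-attenuation_db / 20.0)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    spectrum[band] *= gain
    return np.fft.irfft(spectrum, n=signal.size)

# Toy "harmonic complex": 440 Hz fundamental plus 9 harmonics.
fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
tone = sum(np.sin(2 * np.pi * 440 * k * t) / k for k in range(1, 11))

# Attenuate the band containing harmonics 2-3 by 6 dB.
modified = attenuate_band(tone, fs, f_lo=800.0, f_hi=1400.0, attenuation_db=6.0)
print(np.max(np.abs(tone - modified)))  # non-zero difference confirms the change
```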

  13. Hand proximity facilitates spatial discrimination of auditory tones

    Directory of Open Access Journals (Sweden)

    Philip eTseng

    2014-06-01

    Full Text Available The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination (Exp 1: left or right side), pitch discrimination (Exp 2: high, med, or low tone), and spatial-plus-pitch (Exp 3: left or right; high, med, or low) discrimination task. In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in left hand’s reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.

  14. Context effects on processing widely deviant sounds in newborn infants

    Directory of Open Access Journals (Sweden)

    Gábor Péter Háden

    2013-09-01

    Full Text Available Detecting and orienting towards sounds carrying new information is a crucial feature of the human brain that supports adaptation to the environment. Rare, acoustically widely deviant sounds presented amongst frequent tones elicit large event-related brain potentials (ERPs) in neonates. Here we tested whether these discriminative ERP responses reflect only the activation of fresh afferent neuronal populations (i.e., neuronal circuits not affected by the tones) or whether they also index the processing of contextual mismatch between the rare and the frequent sounds. In two separate experiments, we presented sleeping newborns with 150 different environmental sounds and the same number of white noise bursts. Both sounds served either as deviants in an oddball paradigm with the frequent standard stimulus a tone (Novel/Noise deviant), or as the standard stimulus with the tone as deviant (Novel/Noise standard), or they were delivered alone with the same timing as the deviants in the oddball condition (Novel/Noise alone). Whereas the noise deviants elicited responses similar to those for the same sounds presented alone, the responses elicited by environmental sounds in the corresponding conditions differed morphologically from each other. Thus, whereas the ERP response to the noise sounds can be explained by the different refractory state of stimulus-specific neuronal populations, the ERP response to environmental sounds indicated context-sensitive processing. These results provide evidence for an innate tendency toward context-dependent auditory processing as well as a basis for the different developmental trajectories of processing acoustical deviance and contextual novelty.

  15. Pitch Discrimination Learning: Specificity for Pitch and Harmonic Resolvability, and Electrophysiological Correlates

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.

    2011-01-01

    Multiple-hour training on a pitch discrimination task dramatically decreases the threshold for detecting a pitch difference between two harmonic complexes. Here, we investigated the specificity of this perceptual learning with respect to the pitch and the resolvability of the trained harmonic complex, as well as its cortical electrophysiological correlates. We trained 24 participants for 12 h on a pitch discrimination task using one of four different harmonic complexes. The complexes differed...

  16. Optimizing sound quality with AACHENHEAD™ technology; Optimierung der Geraeuschqualitaet mit der AACHENHEAD™-Technologie

    Energy Technology Data Exchange (ETDEWEB)

    Krohn, C. [HEAD acoustics GmbH (Germany)]

    1997-08-01

    The sound quality of interior and exterior vehicle noise is gaining more and more importance. Not only do modern vehicles show a considerably lower noise level; the characteristic sound of a vehicle also influences the buyer's choice of a car. The acoustic engineer is faced with a complex task. In addition to reducing the noise level he has to make sure that the sound design goes along with the product. This is where aurally-adequate sound diagnosis is needed. Conventional methods, e.g. measuring the noise level or third-octave spectrum, alone cannot solve this specific task. HEAD acoustics offers the full range of aurally-adequate measurement and analysis systems, starting with the Artificial Head Measurement System and making the recorded data available for further analysis with the Binaural Analysis System BAS or the Mobile Sound Quality Laboratory SQlab. This full range of systems helps the acoustic engineer to solve his tasks for improving sound quality in a fast and cost-effective way. (orig.) [German abstract: Hardly anyone would judge a sound with their ears closed, yet in practice this is still quite common. Conventional analysis methods in acoustics and vibration measurement rely exclusively on visual inspection. This paper presents new approaches by HEAD acoustics in this area of measurement technology. (orig.)]

  17. View-invariant object recognition ability develops after discrimination, not mere exposure, at several viewing angles.

    Science.gov (United States)

    Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji

    2010-01-01

    One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degree changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not the first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.

  18. Gender and ethnic discrimination in the rental housing market

    DEFF Research Database (Denmark)

    Bengtsson, Ragnar; Iverman, Elis; Hinnerich, Bjørn Tyrefors

    2012-01-01

    We use a field experiment to measure discrimination in the housing market in Stockholm. Four fictitious persons, of different gender, with distinct-sounding Arabic or Swedish names, are randomly assigned to vacant apartments. We extend...... the study by Ahmed and Hammarstedt (2008). There are two new results. First, we provide evidence that there is no or little gender premium for the female with the Arabic name, which suggests that ethnic discrimination dominates the effects of gender. Secondly, discriminatory behaviour is only found...... in the suburbs or satellite cities/towns of Stockholm County, not in the densely populated, affluent city center. Moreover, we can replicate that there is a gender premium for females with Swedish names. However, we are not able to confirm that males with Arabic names face discrimination....

  19. Blast noise classification with common sound level meter metrics.

    Science.gov (United States)

    Cvengros, Robert M; Valente, Dan; Nykaza, Edward T; Vipperman, Jeffrey S

    2012-08-01

    A common set of signal features measurable by a basic sound level meter is analyzed, and the quality of information carried in subsets of these features is examined with respect to their ability to discriminate military blast and non-blast sounds. The analysis is based on over 120 000 human classified signals compiled from seven different datasets. The study implements linear and Gaussian radial basis function (RBF) support vector machines (SVM) to classify blast sounds. Using the orthogonal centroid dimension reduction technique, intuition is developed about the distribution of blast and non-blast feature vectors in high dimensional space. Recursive feature elimination (SVM-RFE) is then used to eliminate features containing redundant information and rank features according to their ability to separate blasts from non-blasts. Finally, the accuracy of the linear and RBF SVM classifiers is listed for each of the experiments in the dataset, and the weights are given for the linear SVM classifier.
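    The record names the main ingredients of the pipeline: a linear SVM classifier and SVM-based recursive feature elimination over sound-level-meter features. The sketch below shows that general pipeline on placeholder data using scikit-learn; the feature count, labels, and hyperparameters are assumptions, not the study's.

```python
# Hedged sketch of the general pipeline described above (linear SVM with
# recursive feature elimination). Data, feature names, and hyperparameters
# are placeholders, not the study's.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))          # 12 hypothetical level-meter features
y = rng.integers(0, 2, size=500)        # 1 = blast, 0 = non-blast (toy labels)

# Rank features by repeatedly dropping the least useful one (SVM-RFE).
ranker = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000), n_features_to_select=4)
ranker.fit(StandardScaler().fit_transform(X), y)
print("feature ranking:", ranker.ranking_)

# Cross-validated accuracy of a linear SVM on the selected feature subset.
clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False, max_iter=5000))
print("CV accuracy:", cross_val_score(clf, X[:, ranker.support_], y, cv=5).mean())
```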

  20. Discovery of Sound in the Sea: Resources for Educators, Students, the Public, and Policymakers.

    Science.gov (United States)

    Vigness-Raposa, Kathleen J; Scowcroft, Gail; Miller, James H; Ketten, Darlene R; Popper, Arthur N

    2016-01-01

    There is increasing concern about the effects of underwater sound on marine life. However, the science of sound is challenging. The Discovery of Sound in the Sea (DOSITS) Web site ( http://www.dosits.org ) was designed to provide comprehensive scientific information on underwater sound for the public and educational and media professionals. It covers the physical science of underwater sound and its use by people and marine animals for a range of tasks. Celebrating 10 years of online resources, DOSITS continues to develop new material and improvements, providing the best resource for the most up-to-date information on underwater sound and its potential effects.

  1. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    Localizing sounds with different frequency and time domain characteristics in a dynamic listening environment is a challenging task that has not been explored in the field of robotics as much as other perceptual tasks...

  2. Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.

    Science.gov (United States)

    Gibson, Alison; Artemiadis, Panagiotis

    2014-01-01

    As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture in which haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. By representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use of the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use auditory feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map, containing only two frequencies, was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
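    The core mapping described above is simple: contact location selects a tone frequency and force magnitude sets its volume. The sketch below illustrates that mapping with a hypothetical two-frequency map; the frequencies, force range, and function names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the cross-modal mapping: contact location on the
# hand selects a tone frequency, force magnitude sets its volume. The
# two-frequency map and force range are assumptions, not the paper's values.
import numpy as np

LOCATION_TO_FREQ_HZ = {"fingers": 880.0, "palm": 220.0}  # hypothetical map
MAX_FORCE_N = 20.0   # assumed full-scale force for normalisation

def force_to_tone(location, force_newton, duration_s=0.2, fs=44100):
    """Return a mono audio buffer encoding one haptic sample as sound."""
    freq = LOCATION_TO_FREQ_HZ[location]
    volume = np.clip(force_newton / MAX_FORCE_N, 0.0, 1.0)   # force -> amplitude
    t = np.arange(int(duration_s * fs)) / fs
    return volume * np.sin(2 * np.pi * freq * t)

# Example: a firm grasp sensed at the fingers produces a loud, high tone.
buffer = force_to_tone("fingers", force_newton=15.0)
print(buffer.shape, float(buffer.max()))
```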

  3. Sound improves diminished visual temporal sensitivity in schizophrenia

    NARCIS (Netherlands)

    de Boer-Schellekens, L.; Stekelenburg, J.J.; Maes, J.P.; van Gool, A.R.; Vroomen, J.

    2014-01-01

    Visual temporal processing and multisensory integration (MSI) of sound and vision were examined in individuals with schizophrenia using a visual temporal order judgment (TOJ) task. Compared to a non-psychiatric control group, persons with schizophrenia were less sensitive judging the temporal order...

  4. The task-relevant attribute representation can mediate the Simon effect.

    Directory of Open Access Journals (Sweden)

    Dandan Tang

    Full Text Available Researchers have previously suggested a working memory (WM) account of spatial codes, and based on this suggestion, the present study carries out three experiments to investigate how the task-relevant attribute representation (verbal or visual) in the typical Simon task affects the Simon effect. Experiment 1 compared the Simon effect between the between- and within-category color conditions, which required subjects to discriminate between red and blue stimuli (presumed to be represented by verbal WM codes because it was easy and fast to name the colors verbally) and to discriminate between two similar green stimuli (presumed to be represented by visual WM codes because it was hard and time-consuming to name the colors verbally), respectively. The results revealed a reliable Simon effect that only occurs in the between-category condition. Experiment 2 assessed the Simon effect by requiring subjects to discriminate between two different isosceles trapezoids (within-category shapes) and to discriminate an isosceles trapezoid from a rectangle (between-category shapes), and the results replicated and expanded the findings of Experiment 1. In Experiment 3, subjects were required to perform both tasks from Experiment 1: in Experiment 3A, the between-category task preceded the within-category task; in Experiment 3B, the task order was reversed. The results showed a reliable Simon effect when subjects represented the task-relevant stimulus attributes by verbal WM encoding. In addition, the response time (RT) distribution analysis for both the between- and within-category conditions of Experiments 3A and 3B showed that the Simon effect decreased as RTs lengthened. Altogether, although the present results are consistent with the temporal coding account, we put forth that the Simon effect also depends on the verbal WM representation of the task-relevant stimulus attribute.

  5. Intonation processing in congenital amusia: discrimination, identification and imitation.

    Science.gov (United States)

    Liu, Fang; Patel, Aniruddh D; Fourcin, Adrian; Stewart, Lauren

    2010-06-01

    This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on discrimination, identification and imitation of statements and questions that were characterized primarily by pitch direction differences in the final word. This intonation-processing deficit in amusia was largely associated with a psychophysical pitch direction discrimination deficit. These findings suggest that amusia impacts upon one's language abilities in subtle ways, and support previous evidence that pitch processing in language and music involves shared mechanisms.

  6. Emotional prosody of task-irrelevant speech interferes with the retention of serial order.

    Science.gov (United States)

    Kattner, Florian; Ellermeier, Wolfgang

    2018-04-09

    Task-irrelevant speech and other temporally changing sounds are known to interfere with the short-term memorization of ordered verbal materials, as compared to silence or stationary sounds. It has been argued that this disruption of short-term memory (STM) may be due to (a) interference of automatically encoded acoustical fluctuations with the process of serial rehearsal or (b) attentional capture by salient task-irrelevant information. To disentangle the contributions of these 2 processes, the authors investigated whether the disruption of serial recall is due to the semantic or acoustical properties of task-irrelevant speech (Experiment 1). They found that performance was affected by the prosody (emotional intonation), but not by the semantics (word meaning), of irrelevant speech, suggesting that the disruption of serial recall is due to interference of precategorically encoded changing-state sound (with higher fluctuation strength of emotionally intonated speech). The authors further demonstrated a functional distinction between this form of distraction and attentional capture by contrasting the effect of (a) speech prosody and (b) sudden prosody deviations on both serial and nonserial STM tasks (Experiment 2). Although serial recall was again sensitive to the emotional prosody of irrelevant speech, performance on a nonserial missing-item task was unaffected by the presence of neutral or emotionally intonated speech sounds. In contrast, sudden prosody changes tended to impair performance on both tasks, suggesting an independent effect of attentional capture. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. In several audio and audio-video tests we compared Foley and real sounds originating from an identical action. The main purpose was to evaluate if sound effects...

  8. The Convergent, Discriminant, and Concurrent Validity of Scores on the Abbreviated Self-Leadership Questionnaire

    Directory of Open Access Journals (Sweden)

    Faruk Şahin

    2015-10-01

    Full Text Available The present study reports the psychometric properties of a short measure of self-leadership in the Turkish context: the Abbreviated Self-Leadership Questionnaire (ASLQ). The ASLQ was examined using two samples and showed sound psychometric properties. Confirmatory factor analysis showed that the nine-item ASLQ measured a single construct of self-leadership. The results supported the convergent and discriminant validity of the one-factor model of the ASLQ in relation to the 35-item Revised Self-Leadership Questionnaire and the General Self-Efficacy scale, respectively. With regard to internal consistency and test-retest reliability, the ASLQ showed acceptable results. Furthermore, the results provided evidence that scores on the ASLQ positively predicted individuals' self-reported task performance and that self-efficacy mediated this relationship. Taken together, these findings suggest that the Turkish version of the ASLQ is a reliable and valid measure that can be used to assess self-leadership as one variable of interest in future studies.

  9. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    Science.gov (United States)

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is then classified by a trained support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.
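    The record describes per-dimension base kernels over the cepstral representation, combined with learned weights and classified by an SVM. The sketch below shows only the kernel-combination idea on toy data; the fixed uniform weights are placeholders standing in for the weights that multiple kernel learning would actually optimize jointly with the classifier.

```python
# Rough sketch: one RBF base kernel per cepstral dimension, combined with
# weights and fed to an SVM with a precomputed kernel. Real multiple kernel
# learning optimises the weights jointly; here they are fixed placeholders.
import numpy as np
from sklearn.svm import SVC

def per_dimension_rbf_kernels(A, B, gamma=1.0):
    """K_d(a, b) = exp(-gamma * (a_d - b_d)^2), one kernel per dimension d."""
    diffs = A[:, None, :] - B[None, :, :]            # shape (nA, nB, D)
    return np.exp(-gamma * diffs ** 2)               # stack of D base kernels

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 13))                       # toy 13-dim cepstral features
y = rng.integers(0, 3, size=200)                     # 3 hypothetical talker positions

weights = np.full(X.shape[1], 1.0 / X.shape[1])      # placeholder for learned MKL weights
K_train = np.tensordot(per_dimension_rbf_kernels(X, X), weights, axes=([2], [0]))

clf = SVC(kernel="precomputed").fit(K_train, y)
print("training accuracy:", clf.score(K_train, y))
```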

  10. Spatial Frequency Discrimination: Effects of Age, Reward, and Practice.

    Directory of Open Access Journals (Sweden)

    Carlijn van den Boomen

    Full Text Available Social interaction starts with perception of the world around you. This study investigated two fundamental issues regarding the development of discrimination of higher spatial frequencies, which are important building blocks of perception. Firstly, it mapped the typical developmental trajectory of higher spatial frequency discrimination. Secondly, it developed and validated a novel design that could be applied to improve atypically developed vision. Specifically, this study examined the effect of age and reward on task performance, practice effects, and motivation (i.e., number of trials completed) in a higher spatial frequency (reference frequency: 6 cycles per degree) discrimination task. We measured discrimination thresholds in children aged between 7 to 12 years and adults (N = 135). Reward was manipulated by presenting either positive reinforcement or punishment. Results showed a decrease in discrimination thresholds with age, thus revealing that higher spatial frequency discrimination continues to develop after 12 years of age. This development continues longer than previously shown for discrimination of lower spatial frequencies. Moreover, thresholds decreased during the run, indicating that discrimination abilities improved. Reward did not affect performance or improvement. However, in an additional group of 5-6 year-olds (N = 28) punishments resulted in the completion of fewer trials compared to reinforcements. In both reward conditions children aged 5-6 years completed only a fourth or half of the run (64 to 128 out of 254 trials) and were not motivated to continue. The design thus needs further adaptation before it can be applied to this age group. Children aged 7-12 years and adults completed the run, suggesting that the design is successful and motivating for children aged 7-12 years. This study thus presents developmental differences in higher spatial frequency discrimination thresholds. Furthermore, it presents a design that can be...

  11. Spatial Frequency Discrimination: Effects of Age, Reward, and Practice.

    Science.gov (United States)

    van den Boomen, Carlijn; Peters, Judith Carolien

    2017-01-01

    Social interaction starts with perception of the world around you. This study investigated two fundamental issues regarding the development of discrimination of higher spatial frequencies, which are important building blocks of perception. Firstly, it mapped the typical developmental trajectory of higher spatial frequency discrimination. Secondly, it developed and validated a novel design that could be applied to improve atypically developed vision. Specifically, this study examined the effect of age and reward on task performance, practice effects, and motivation (i.e., number of trials completed) in a higher spatial frequency (reference frequency: 6 cycles per degree) discrimination task. We measured discrimination thresholds in children aged between 7 to 12 years and adults (N = 135). Reward was manipulated by presenting either positive reinforcement or punishment. Results showed a decrease in discrimination thresholds with age, thus revealing that higher spatial frequency discrimination continues to develop after 12 years of age. This development continues longer than previously shown for discrimination of lower spatial frequencies. Moreover, thresholds decreased during the run, indicating that discrimination abilities improved. Reward did not affect performance or improvement. However, in an additional group of 5-6 year-olds (N = 28) punishments resulted in the completion of fewer trials compared to reinforcements. In both reward conditions children aged 5-6 years completed only a fourth or half of the run (64 to 128 out of 254 trials) and were not motivated to continue. The design thus needs further adaptation before it can be applied to this age group. Children aged 7-12 years and adults completed the run, suggesting that the design is successful and motivating for children aged 7-12 years. This study thus presents developmental differences in higher spatial frequency discrimination thresholds. Furthermore, it presents a design that can be used in future

  12. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  13. Visual discrimination following partial telencephalic ablations in nurse sharks (Ginglymostoma cirratum).

    Science.gov (United States)

    Graeber, R C; Schroeder, D M; Jane, J A; Ebbesson, S O

    1978-07-15

    An instrumental conditioning task was used to examine the role of the nurse shark telencephalon in black-white (BW) and horizontal-vertical stripes (HV) discrimination performance. In the first experiment, subjects initially received either bilateral anterior telencephalic control lesions or bilateral posterior telencephalic lesions aimed at destroying the central telencephalic nuclei (CN), which are known to receive direct input from the thalamic visual area. Postoperatively, the sharks were trained first on BW and then on HV. Those with anterior lesions learned both tasks as rapidly as unoperated subjects. Those with posterior lesions exhibited visual discrimination deficits related to the amount of damage to the CN and its connecting pathways. Severe damage resulted in an inability to learn either task but caused no impairments in motivation or general learning ability. In the second experiment, the sharks were first trained on BW and HV and then operated. Suction ablations were used to remove various portions of the CN. Sharks with 10% or less damage to the CN retained the preoperatively acquired discriminations almost perfectly. Those with 11-50% damage had to be retrained on both tasks. Almost total removal of the CN produced behavioral indications of blindness along with an inability to perform above the chance level on BW despite excellent retention of both discriminations over a 28-day period before surgery. It appears, however, that such sharks can still detect light. These results implicate the central telencephalic nuclei in the control of visually guided behavior in sharks.

  14. Validity and reliability of acoustic analysis of respiratory sounds in infants

    Science.gov (United States)

    Elphick, H; Lancaster, G; Solis, A; Majumdar, A; Gupta, R; Smyth, R

    2004-01-01

    Objective: To investigate the validity and reliability of computerised acoustic analysis in the detection of abnormal respiratory noises in infants. Methods: Blinded, prospective comparison of acoustic analysis with stethoscope examination. Validity and reliability of acoustic analysis were assessed by calculating the degree of observer agreement using the κ statistic with 95% confidence intervals (CI). Results: 102 infants under 18 months were recruited. Convergent validity for agreement between stethoscope examination and acoustic analysis was poor for wheeze (κ = 0.07 (95% CI, –0.13 to 0.26)) and rattles (κ = 0.11 (–0.05 to 0.27)) and fair for crackles (κ = 0.36 (0.18 to 0.54)). Both the stethoscope and acoustic analysis distinguished well between sounds (discriminant validity). Agreement between observers for the presence of wheeze was poor for both stethoscope examination and acoustic analysis. Agreement for rattles was moderate for the stethoscope but poor for acoustic analysis. Agreement for crackles was moderate using both techniques. Within-observer reliability for all sounds using acoustic analysis was moderate to good. Conclusions: The stethoscope is unreliable for assessing respiratory sounds in infants. This has important implications for its use as a diagnostic tool for lung disorders in infants, and confirms that it cannot be used as a gold standard. Because of the unreliability of the stethoscope, the validity of acoustic analysis could not be demonstrated, although it could discriminate between sounds well and showed good within-observer reliability. For acoustic analysis, targeted training and the development of computerised pattern recognition systems may improve reliability so that it can be used in clinical practice. PMID:15499065
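
    The agreement figures above are Cohen's kappa values with 95% confidence intervals. As a reference for how such numbers are obtained, the following is a minimal sketch (not the authors' code) that computes kappa and an approximate large-sample confidence interval from a 2 x 2 agreement table; the counts used here are hypothetical.

```python
"""Cohen's kappa with an approximate 95% CI for two raters scoring a binary sign.

Minimal sketch with hypothetical counts; the study used kappa to compare
stethoscope examination against computerised acoustic analysis.
"""
import numpy as np

def cohens_kappa(table):
    """table: 2x2 array of counts; rows = rater A (present/absent), cols = rater B."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                                    # observed agreement
    pe = (table.sum(axis=0) * table.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))           # large-sample approximation
    return kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# Hypothetical 2x2 table for "wheeze present / absent" in 102 infants
table = [[12, 18],
         [20, 52]]
k, ci = cohens_kappa(table)
print(f"kappa = {k:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    Values near 0 indicate agreement no better than chance, which is how the poor stethoscope-versus-acoustic-analysis agreement for wheeze (kappa = 0.07) should be read.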

  15. Influence of background noise on the performance in the odor sensitivity task: effects of noise type and extraversion.

    Science.gov (United States)

    Seo, Han-Seok; Hähner, Antje; Gudziol, Volker; Scheibe, Mandy; Hummel, Thomas

    2012-10-01

    Recent research demonstrated that background noise relative to silence impaired subjects' performance in a cognitively driven odor discrimination test. The current study aimed to investigate whether the background noise can also modulate performance in an odor sensitivity task that is less cognitively loaded. Previous studies have shown that the effect of background noise on task performance can be different in relation to degree of extraversion and/or type of noise. Accordingly, we wanted to examine whether the influence of background noise on the odor sensitivity task can be altered as a function of the type of background noise (i.e., nonverbal vs. verbal noise) and the degree of extraversion (i.e., introvert vs. extrovert group). Subjects were asked to conduct an odor sensitivity task in the presence of either nonverbal noise (e.g., party sound) or verbal noise (e.g., audio book), or silence. Overall, the subjects' mean performance in the odor sensitivity task was not significantly different across three auditory conditions. However, with regard to the odor sensitivity task, a significant interaction emerged between the type of background noise and the degree of extraversion. Specifically, verbal noise relative to silence significantly impaired or improved the performance of the odor sensitivity task in the introvert or extrovert group, respectively; the differential effect of introversion/extraversion was not observed in the nonverbal noise-induced task performance. In conclusion, our findings provide new empirical evidence that type of background noise and degree of extraversion play an important role in modulating the effect of background noise on subjects' performance in an odor sensitivity task.

  16. Left occipitotemporal cortex contributes to the discrimination of tool-associated hand actions: fMRI and TMS evidence.

    Science.gov (United States)

    Perini, Francesca; Caramazza, Alfonso; Peelen, Marius V

    2014-01-01

    Functional neuroimaging studies have implicated the left lateral occipitotemporal cortex (LOTC) in both tool and hand perception but the functional role of this region is not fully known. Here, by using a task manipulation, we tested whether tool-/hand-selective LOTC contributes to the discrimination of tool-associated hand actions. Participants viewed briefly presented pictures of kitchen and garage tools while they performed one of two tasks: in the action task, they judged whether the tool is associated with a hand rotation action (e.g., screwdriver) or a hand squeeze action (e.g., garlic press), while in the location task they judged whether the tool is typically found in the kitchen (e.g., garlic press) or in the garage (e.g., screwdriver). Both tasks were performed on the same stimulus set and were matched for difficulty. Contrasting fMRI responses between these tasks showed stronger activity during the action task than the location task in both tool- and hand-selective LOTC regions, which closely overlapped. No differences were found in nearby object- and motion-selective control regions. Importantly, these findings were confirmed by a TMS study, which showed that effective TMS over the tool-/hand-selective LOTC region significantly slowed responses for tool action discriminations relative to tool location discriminations, with no such difference during sham TMS. We conclude that left LOTC contributes to the discrimination of tool-associated hand actions.

  17. Sound preference test in animal models of addicts and phobias.

    Science.gov (United States)

    Soga, Ryo; Shiramatsu, Tomoyo I; Kanzaki, Ryohei; Takahashi, Hirokazu

    2016-08-01

    Biased or excessively strong preference for a particular object is often problematic, resulting in addiction or phobia. In animal models, alternative forced-choice tasks have routinely been used, but such preference tests are far from the daily situations that addicts or phobics face. In the present study, we developed a behavioral assay to evaluate sound preference in rodents. In the assay, several sounds were presented according to the position of freely moving rats, and sound preference was quantified from their behavior. A particular tone was paired with microstimulation of the ventral tegmental area (VTA), which plays a central role in reward processing, to increase sound preference. The behavior of the rats was logged during classical conditioning for six days. Several behavioral indices suggested that the rats searched for the conditioned sound. Thus, our data demonstrate that quantitative evaluation of preference with this behavioral assay is feasible.

  18. Infants' Discrimination of Consonants: Interplay between Word Position and Acoustic Saliency

    Science.gov (United States)

    Archer, Stephanie L.; Zamuner, Tania; Engel, Kathleen; Fais, Laurel; Curtin, Suzanne

    2016-01-01

    Research has shown that young infants use contrasting acoustic information to distinguish consonants. This has been used to argue that by 12 months, infants have homed in on their native language sound categories. However, this ability seems to be positionally constrained, with contrasts at the beginning of words (onsets) discriminated earlier.…

  19. Auditory short-term memory trace formation for nonspeech and speech in SLI and dyslexia as indexed by the N100 and mismatch negativity electrophysiological responses.

    Science.gov (United States)

    Tuomainen, Outi T

    2015-04-15

    This study investigates nonspeech and speech processing in specific language impairment (SLI) and dyslexia. We used a passive mismatch negativity (MMN) task to tap automatic brain responses and an active behavioural task to tap attended discrimination of nonspeech and speech sounds. Using the roving standard MMN paradigm, we varied the number of standards ('few' vs. 'many') to investigate the effect of sound repetition on N100 and MMN responses. The results revealed that the SLI group needed more repetitions than dyslexics and controls to create a strong enough sensory trace to elicit MMN. In contrast, in the behavioural task, we observed good discrimination of speech and nonspeech in all groups. The findings indicate that auditory processing deficits in SLI and dyslexia are dissociable and that memory trace formation may be implicated in SLI.

  20. Contextual Advantage for State Discrimination

    Science.gov (United States)

    Schmid, David; Spekkens, Robert W.

    2018-02-01

    Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.
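
    For background on the quantum side of the comparison, the optimal quantum success probability for minimum-error discrimination of two states is given by the Helstrom bound. The sketch below is standard reference material rather than code from this paper (which instead derives limits obeyed by noncontextual models), and evaluates that bound for a pair of nonorthogonal qubit states.

```python
"""Helstrom bound: optimal success probability for minimum-error discrimination
of two quantum states. Standard reference material, not code from the paper,
which derives bounds satisfied by noncontextual ontological models instead.
"""
import numpy as np

def helstrom_success(rho0, rho1, p0=0.5):
    """Optimal probability of correctly discriminating rho0 (prior p0) from rho1."""
    delta = p0 * rho0 - (1 - p0) * rho1
    trace_norm = np.abs(np.linalg.eigvalsh(delta)).sum()
    return 0.5 * (1.0 + trace_norm)

def pure(ket):
    """Density matrix of a (normalised) pure state."""
    ket = np.asarray(ket, dtype=complex).reshape(-1, 1)
    ket = ket / np.linalg.norm(ket)
    return ket @ ket.conj().T

# Two nonorthogonal qubit states: |0> and cos(t)|0> + sin(t)|1>
t = np.pi / 8
print(f"P_success = {helstrom_success(pure([1, 0]), pure([np.cos(t), np.sin(t)])):.4f}")
# For equal priors this equals 0.5 * (1 + sqrt(1 - |<psi0|psi1>|^2)):
print(0.5 * (1 + np.sqrt(1 - np.cos(t) ** 2)))
```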

  1. Contextual Advantage for State Discrimination

    Directory of Open Access Journals (Sweden)

    David Schmid

    2018-02-01

    Full Text Available Finding quantitative aspects of quantum phenomena which cannot be explained by any classical model has foundational importance for understanding the boundary between classical and quantum theory. It also has practical significance for identifying information processing tasks for which those phenomena provide a quantum advantage. Using the framework of generalized noncontextuality as our notion of classicality, we find one such nonclassical feature within the phenomenology of quantum minimum-error state discrimination. Namely, we identify quantitative limits on the success probability for minimum-error state discrimination in any experiment described by a noncontextual ontological model. These constraints constitute noncontextuality inequalities that are violated by quantum theory, and this violation implies a quantum advantage for state discrimination relative to noncontextual models. Furthermore, our noncontextuality inequalities are robust to noise and are operationally formulated, so that any experimental violation of the inequalities is a witness of contextuality, independently of the validity of quantum theory. Along the way, we introduce new methods for analyzing noncontextuality scenarios and demonstrate a tight connection between our minimum-error state discrimination scenario and a Bell scenario.

  2. Discrimination of optical coherent states using a photon number resolving detector

    DEFF Research Database (Denmark)

    Wittmann, C.; Andersen, Ulrik Lund; Leuchs, G.

    2010-01-01

    The discrimination of non-orthogonal quantum states with reduced or without errors is a fundamental task in quantum measurement theory. In this work, we investigate a quantum measurement strategy capable of discriminating two coherent states probabilistically with significantly smaller error...... probabilities than can be obtained using non-probabilistic state discrimination. We find that appropriate postselection of the measurement data of a photon number resolving detector can be used to discriminate two coherent states with small error probability. We compare our new receiver to an optimal...

  3. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  4. Automatic Sound Generation for Spherical Objects Hitting Straight Beams Based on Physical Models.

    Science.gov (United States)

    Rauterberg, M.; And Others

    Sounds are the result of one or several interactions between one or several objects at a certain place and in a certain environment; the attributes of every interaction influence the generated sound. The following factors influence users in human/computer interaction: the organization of the learning environment, the content of the learning tasks,…

  5. Pitch discrimination learning: specificity for pitch and harmonic resolvability, and electrophysiological correlates.

    Science.gov (United States)

    Carcagno, Samuele; Plack, Christopher J

    2011-08-01

    Multiple-hour training on a pitch discrimination task dramatically decreases the threshold for detecting a pitch difference between two harmonic complexes. Here, we investigated the specificity of this perceptual learning with respect to the pitch and the resolvability of the trained harmonic complex, as well as its cortical electrophysiological correlates. We trained 24 participants for 12 h on a pitch discrimination task using one of four different harmonic complexes. The complexes differed in pitch and/or spectral resolvability of their components by the cochlea, but were filtered into the same spectral region. Cortical-evoked potentials and a behavioral measure of pitch discrimination were assessed before and after training for all the four complexes. The change in these measures was compared to that of two control groups: one trained on a level discrimination task and one without any training. The behavioral results showed that learning was partly specific to both pitch and resolvability. Training with a resolved-harmonic complex improved pitch discrimination for resolved complexes more than training with an unresolved complex. However, we did not find evidence that training with an unresolved complex leads to specific learning for unresolved complexes. Training affected the P2 component of the cortical-evoked potentials, as well as a later component (250-400 ms). No significant changes were found on the mismatch negativity (MMN) component, although a separate experiment showed that this measure was sensitive to pitch changes equivalent to the pitch discriminability changes induced by training. This result suggests that pitch discrimination training affects processes not measured by the MMN, for example, processes higher in level or parallel to those involved in MMN generation.

  6. Discriminative power of visual attributes in dermatology.

    Science.gov (United States)

    Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai

    2013-02-01

    Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is proven to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process. © 2012 John Wiley & Sons A/S.
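
    One way to read the measure described above is as one minus the normalised Shannon entropy of the attribute's occurrence distribution across diseases. The sketch below follows that reading; the paper's exact normalisation may differ, and the probabilities used are hypothetical.

```python
"""Entropy-based discriminative power of a visual attribute.

Sketch under one reading of the description above: normalise the per-disease
occurrence probabilities of the attribute, take the Shannon entropy of that
distribution, and map low entropy to high discriminative power. The paper's
exact normalisation may differ; the probabilities below are hypothetical.
"""
import numpy as np

def discriminative_power(p_attr_given_disease):
    """p_attr_given_disease: occurrence probability of the attribute for each disease."""
    p = np.asarray(p_attr_given_disease, dtype=float)
    q = p / p.sum()                     # distribution of the attribute across diseases
    q = q[q > 0]
    h = -(q * np.log2(q)).sum()         # Shannon entropy of that distribution
    h_max = np.log2(len(p))             # entropy when the attribute is equally likely everywhere
    return 1.0 - h / h_max              # 1 = concentrated on one disease, 0 = uninformative

# Hypothetical occurrence probabilities across five diseases
print(discriminative_power([0.9, 0.05, 0.05, 0.0, 0.0]))  # high: mostly one disease
print(discriminative_power([0.2, 0.2, 0.2, 0.2, 0.2]))    # 0.0: equally likely in all
```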

  7. Variability and reduced performance of preschool- and early school-aged children on psychoacoustic tasks: What are the relevant factors?

    Science.gov (United States)

    Allen, Prudence

    2003-04-01

    Young children typically perform more poorly on psychoacoustic tasks than do adults, with large individual differences. When performance is averaged across children within age groups, the data suggest a gradual change in performance with increasing age. However, an examination of individual data suggests that the performance matures more rapidly, although at different times for different children. The mechanisms of development responsible for these changes are likely very complex, involving both sensory and cognitive processes. This paper will discuss some previously suggested mechanisms including attention and cue weighting, as well as possibilities suggested from more recent studies in which learning effects were examined. In one task, a simple frequency discrimination was required, while in another the listener was required to extract regularities in complex sequences of sounds that varied from trial to trial. Results suggested that the ability to select and consistently employ an effective listening strategy was especially important in the performance of the more complex task, while simple stimulus exposure and motivation contributed to the simpler task. These factors are important for understanding the perceptual development and for the subsequent application of psychoacoustic findings to clinical populations. [Work supported by the NSERC and the Canadian Language and Literacy Research Network.]

  8. Increased intensity discrimination thresholds in tinnitus subjects with a normal audiogram

    DEFF Research Database (Denmark)

    Epp, Bastian; Hots, J.; Verhey, J. L.

    2012-01-01

    Recent auditory brain stem response measurements in tinnitus subjects with normal audiograms indicate the presence of hidden hearing loss that manifests as reduced neural output from the cochlea at high sound intensities, and results from mice suggest a link to deafferentation of auditory nerve...... fibers. As deafferentation would lead to deficits in hearing performance, the present study investigates whether tinnitus patients with normal hearing thresholds show impairment in intensity discrimination compared to an audiometrically matched control group. Intensity discrimination thresholds were...... significantly increased in the tinnitus frequency range, consistent with the hypothesis that auditory nerve fiber deafferentation is associated with tinnitus....

  9. Braille character discrimination in blindfolded human subjects.

    Science.gov (United States)

    Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro

    2002-04-16

    Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.

  10. Creating and Exploring Huge Parameter Spaces: Interactive Evolution as a Tool for Sound Generation

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2001-01-01

    In this paper, a program is presented that applies interactive evolution to sound generation, i.e., preferred individuals are repeatedly selected from a population of genetically bred sound objects, created with various synthesis and pattern generation algorithms. This simplifies aural exploration...... applications. It is also shown how this technique can be used to simplify sound design in standard hardware synthesizers, a task normally avoided by most musicians, due to the required amount of technical understanding....

  11. Discrimination of Communication Vocalizations by Single Neurons and Groups of Neurons in the Auditory Midbrain

    OpenAIRE

    Schneider, David M.; Woolley, Sarah M. N.

    2010-01-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic...

  12. Effects of capacity limits, memory loss, and sound type in change deafness.

    Science.gov (United States)

    Gregg, Melissa K; Irsik, Vanessa C; Snyder, Joel S

    2017-11-01

    Change deafness, the inability to notice changes to auditory scenes, has the potential to provide insights about sound perception in busy situations typical of everyday life. We determined the extent to which change deafness to sounds is due to the capacity of processing multiple sounds and the loss of memory for sounds over time. We also determined whether these processing limitations work differently for varying types of sounds within a scene. Auditory scenes composed of naturalistic sounds, spectrally dynamic unrecognizable sounds, tones, and noise rhythms were presented in a change-detection task. On each trial, two scenes were presented that were same or different. We manipulated the number of sounds within each scene to measure memory capacity and the silent interval between scenes to measure memory loss. For all sounds, change detection was worse as scene size increased, demonstrating the importance of capacity limits. Change detection to the natural sounds did not deteriorate much as the interval between scenes increased up to 2,000 ms, but it did deteriorate substantially with longer intervals. For artificial sounds, in contrast, change-detection performance suffered even for very short intervals. The results suggest that change detection is generally limited by capacity, regardless of sound type, but that auditory memory is more enduring for sounds with naturalistic acoustic structures.

  13. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  14. Eliciting the Dutch loan phoneme /g/ with the Menu Task

    NARCIS (Netherlands)

    Hamann, S.; de Jonge, A.

    2015-01-01

    This article introduces the menu task, which can be used to elicit infrequent sounds such as loan phonemes that only occur in a restricted set of words. The menu task is similar to the well-known map task and involves the interaction of two participants to create a menu on the basis of a list of

  15. A dual-task investigation of automaticity in visual word processing

    Science.gov (United States)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  16. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination

    OpenAIRE

    Zhe Charles Zhou; Chunxiu Yu; Kristin K. Sellers; Flavio Fröhlich

    2016-01-01

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contr...

  17. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    Science.gov (United States)

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

  18. Binocular contrast discrimination needs monocular multiplicative noise

    Science.gov (United States)

    Ding, Jian; Levi, Dennis M.

    2016-01-01

    The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination at low contrasts. To account for the data, we considered two putative contrast-discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370

  19. Effects of 3D sound on visual scanning

    NARCIS (Netherlands)

    Veltman, J.A.; Bronkhorst, A.W.; Oving, A.B.

    2000-01-01

    An experiment was conducted in a flight simulator to explore the effectiveness of a 3D sound display as support to visual information from a head down display (HDD). Pilots had to perform two main tasks in separate conditions: intercepting and following a target jet. Performance was measured for

  20. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
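
    The model's first layer separates the amplitude and phase of the left- and right-ear signals, and its second layer codes amplitude jointly with interaural phase difference. As a rough, much simpler illustration of those two binaural cues (not the paper's sparse-coding model), the sketch below extracts an interaural level difference and an interaural phase difference from a synthetic binaural snippet with an FFT; the signal parameters are invented.

```python
"""Interaural amplitude and phase cues from a synthetic binaural snippet.

Rough illustration only: the study learns complex-valued basis functions by
sparse coding, whereas this sketch uses a plain FFT to separate amplitude and
phase per frequency and to form the interaural phase difference (IPD).
"""
import numpy as np

fs = 44100
t = np.arange(0, 0.02, 1 / fs)

# Hypothetical binaural snippet: the right ear receives the 500 Hz component
# delayed by 0.3 ms and slightly attenuated, mimicking a source on the left.
left = np.sin(2 * np.pi * 500 * t) + 0.3 * np.sin(2 * np.pi * 2000 * t)
right = 0.8 * np.sin(2 * np.pi * 500 * (t - 0.0003)) + 0.3 * np.sin(2 * np.pi * 2000 * t)

spec_l, spec_r = np.fft.rfft(left), np.fft.rfft(right)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

k = np.argmin(np.abs(freqs - 500))                            # bin nearest 500 Hz
ild = 20 * np.log10(np.abs(spec_l[k]) / np.abs(spec_r[k]))    # interaural level difference (dB)
ipd = np.angle(spec_l[k] * np.conj(spec_r[k]))                # interaural phase difference (rad)
print(f"ILD ~ {ild:.1f} dB, IPD ~ {ipd:.2f} rad")
```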

  1. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion, and whether it can also be seen on other ERP components such as P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms followed by a later effect appearing at ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Left occipitotemporal cortex contributes to the discrimination of tool-associated hand actions: fMRI and TMS evidence

    Directory of Open Access Journals (Sweden)

    Francesca ePerini

    2014-08-01

    Full Text Available Functional neuroimaging studies have implicated the left lateral occipitotemporal cortex (LOTC) in both tool and hand perception but the functional role of this region is not fully known. Here, by using a task manipulation, we tested whether tool-/hand-selective LOTC contributes to the discrimination of tool-associated hand actions. Participants viewed briefly presented pictures of kitchen and garage tools while they performed one of two tasks: in the action task, they judged whether the tool is associated with a hand rotation action (e.g., screwdriver) or a hand squeeze action (e.g., garlic press), while in the location task they judged whether the tool is typically found in the kitchen (e.g., garlic press) or in the garage (e.g., screwdriver). Both tasks were performed on the same stimulus set and were matched for difficulty. Contrasting fMRI responses between these tasks showed stronger activity during the action task than the location task in both tool- and hand-selective LOTC regions, which closely overlapped. No differences were found in nearby object- and motion-selective control regions. Importantly, these findings were confirmed by a TMS study, which showed that effective TMS over the tool-/hand-selective LOTC region significantly slowed responses for tool action discriminations relative to tool location discriminations, with no such difference during sham TMS. We conclude that left LOTC contributes to the discrimination of tool-associated hand actions.

  3. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    Science.gov (United States)

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    , the FIT distribution was, overall, more anterior and inferior than that of the SAME task. A detailed analysis of the counts and spatial distributions of activated pixels was carried out for 15 brain areas (all in the cerebral cortex) in which a consistent activation (in ≥ 3 subjects) was observed (n = 323 activated pixels). We found the following. Except for the inferior temporal gyrus, which was activated exclusively in the FIT task, all other areas showed activation in both tasks but to different extents. Based on the extent of activation, areas fell within two distinct groups (FIT or SAME) depending on which pixel count (i.e., FIT or SAME) was greater. The FIT group consisted of the following areas, in decreasing FIT/SAME order (brackets indicate ties): GTi, GTs, GC, GFi, GFd, [GTm, GF], GO. The SAME group consisted of the following areas, in decreasing SAME/FIT order: GOi, LPs, Sca, GPrC, GPoC, [GFs, GFm]. These results indicate that there are distributed, graded, and partially overlapping patterns of activation during performance of the two tasks. We attribute these overlapping patterns of activation to the engagement of partially shared processes. Activated pixels clustered into three types of clusters: FIT-only (111 pixels), SAME-only (97 pixels), and FIT + SAME (115 pixels). Pixels contained in FIT-only and SAME-only clusters were distributed approximately equally between the left and right hemispheres, whereas pixels in the SAME + FIT clusters were located mostly in the left hemisphere. With respect to gender, the left-right distribution of activated pixels was very similar in women and men for the SAME-only and FIT + SAME clusters but differed for the FIT-only case in which there was a prominent left side preponderance for women, in contrast to a right side preponderance for men. We conclude that (a) cortical mechanisms common for processing visual object construction and discrimination involve mostly the left hemisphere, (b) cortical mechanisms

  4. Diagnostic validity of methods for assessment of swallowing sounds: a systematic review.

    Science.gov (United States)

    Taveira, Karinna Veríssimo Meira; Santos, Rosane Sampaio; Leão, Bianca Lopes Cavalcante de; Neto, José Stechman; Pernambuco, Leandro; Silva, Letícia Korb da; De Luca Canto, Graziela; Porporatti, André Luís

    2018-02-03

    Oropharyngeal dysphagia is a highly prevalent comorbidity in neurological patients and presents a serious health threat: it may lead to aspiration pneumonia, with outcomes ranging from hospitalization to death. This assessment proposes a non-invasive, acoustic-based method to differentiate between individuals with and without signals of penetration and aspiration. This systematic review evaluated the diagnostic validity of different methods for assessment of swallowing sounds, when compared to the Videofluoroscopic Swallowing Study (VFSS), to detect oropharyngeal dysphagia. Articles in which the primary objective was to evaluate the accuracy of swallowing sounds were searched in five electronic databases with no language or time limitations. Accuracy measurements described in the studies were transformed to construct receiver operating characteristic curves and forest plots with the aid of Review Manager v. 5.2 (The Nordic Cochrane Centre, Copenhagen, Denmark). The methodology of the selected studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies-2. The final electronic search revealed 554 records; however, only 3 studies met the inclusion criteria. The accuracy values (area under the curve) were 0.94 for the microphone, 0.80 for Doppler, and 0.60 for the stethoscope. Based on limited evidence of low methodological quality (few studies were included, with small sample sizes), among the index tests found for this systematic review Doppler showed excellent diagnostic accuracy for the discrimination of swallowing sounds, the microphone showed good accuracy in discriminating the swallowing sounds of dysphagic patients, and the stethoscope performed best as a screening test. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  5. Discrimination and identification of long vowels in children with typical language development and specific language impairment

    Science.gov (United States)

    Datta, Hia; Shafer, Valerie; Kurtzberg, Diane

    2004-05-01

    Researchers have claimed that children with specific language impairment (SLI) have particular difficulties in discriminating and identifying phonetically similar and brief speech sounds (Stark and Heinz, 1966; Studdert-Kennedy and Bradley, 1997; Sussman, 1993). In a recent study (Shafer et al., 2004), children with SLI were reported to have difficulty in processing brief (50 ms), phonetically similar vowels (/I-E/). The current study investigated perception of long (250 ms), phonetically similar vowels (/I-E/) in 8- to 10-year-old children with SLI and typical language development (TLD). The purpose was to examine whether phonetic similarity in vowels leads to poorer speech perception in the SLI group. Behavioral and electrophysiological methods were employed to examine discrimination and identification of a nine-step vowel continuum from /I/ to /E/. Similar performances in discrimination were found for both groups, indicating that lengthening vowel duration indeed improves discrimination of phonetically similar vowels. However, the children with SLI showed poor behavioral identification, demonstrating that phonetic similarity of speech sounds, irrespective of their duration, contributes to the speech perception difficulty observed in the SLI population. These findings suggest that the deficit in these children with SLI is at the level of working memory or long-term memory representation of speech.

  6. Quantum state discrimination and its applications

    International Nuclear Information System (INIS)

    Bae, Joonwoo; Kwek, Leong-Chuan

    2015-01-01

    Quantum state discrimination underlies various applications in quantum information processing tasks. It essentially describes the distinguishability of quantum systems in different states, and the general process of extracting classical information from quantum systems. It is also useful in quantum information applications, such as the characterization of mutual information in cryptographic protocols, or as a technique for deriving fundamental theorems on quantum foundations. It has deep connections to physical principles such as relativistic causality. Quantum state discrimination traces a long history of several decades, starting with the early attempts to formalize information processing of physical systems such as optical communication with photons. Nevertheless, in most cases, the problems of finding optimal strategies of quantum state discrimination remain unsolved, and related applications are valid in some limited cases only. The present review aims to provide an overview on quantum state discrimination, covering some recent progress, and addressing applications in some selected areas. This review serves to strengthen the link between results in quantum state discrimination and quantum information applications, by showing the ways in which the fundamental results are exploited in applications and vice versa. (topical review)

  7. The Voice of the Heart: Vowel-Like Sound in Pulmonary Artery Hypertension

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-04-01

    Full Text Available Increased blood pressure in the pulmonary artery is referred to as pulmonary hypertension and often is linked to loud pulmonic valve closures. For the purpose of this paper, it was hypothesized that pulmonary circulation vibrations will create sounds similar to sounds created by vocal cords during speech and that subjects with pulmonary artery hypertension (PAH) could have unique sound signatures across four auscultatory sites. Using a digital stethoscope, heart sounds were recorded at the cardiac apex, 2nd left intercostal space (2LICS), 2nd right intercostal space (2RICS), and 4th left intercostal space (4LICS) during simultaneous cardiac catheterization. From the collected heart sounds, relative power of the frequency band, energy of the sinusoid formants, and entropy were extracted. PAH subjects were differentiated by applying linear discriminant analysis with leave-one-out cross-validation. The entropy of the first sinusoid formant decreased significantly in subjects with a mean pulmonary artery pressure (mPAp) ≥ 25 mmHg versus subjects with a mPAp < 25 mmHg, with a sensitivity of 84% and specificity of 88.57%, within a 10-s optimized window length for heart sounds recorded at the 2LICS. First sinusoid formant entropy reduction of heart sounds in PAH subjects suggests the existence of a vowel-like pattern. Pattern analysis revealed a unique sound signature, which could be used in non-invasive screening tools.
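
    Classification in the study relied on linear discriminant analysis evaluated with leave-one-out cross-validation over per-subject acoustic features. The sketch below reproduces that pipeline shape with scikit-learn on randomly generated stand-in features; the feature values, group sizes, and resulting sensitivity and specificity are hypothetical, not those of the paper.

```python
"""Leave-one-out LDA classification of PAH vs. non-PAH from heart-sound features.

Sketch only: feature values and labels are randomly generated stand-ins for the
relative band power, sinusoid-formant energy and entropy features in the paper.
"""
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_per_class = 20
# Hypothetical 3-feature vectors: [band power, formant energy, formant entropy]
x_pah = rng.normal([0.6, 1.2, 2.1], 0.3, size=(n_per_class, 3))
x_ctrl = rng.normal([0.4, 1.0, 2.6], 0.3, size=(n_per_class, 3))
X = np.vstack([x_pah, x_ctrl])
y = np.array([1] * n_per_class + [0] * n_per_class)   # 1 = mPAP >= 25 mmHg

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
sens = (pred[y == 1] == 1).mean()
spec = (pred[y == 0] == 0).mean()
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```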

  8. Performance Enhancements Under Dual-task Conditions

    Science.gov (United States)

    Kramer, A. F.; Wickens, C. D.; Donchin, E.

    1984-01-01

    Research on dual-task performance has been concerned with delineating the antecedent conditions which lead to dual-task decrements. Capacity models of attention, which propose that a hypothetical resource structure underlies performance, have been employed as predictive devices. These models predict that tasks which require different processing resources can be more successfully time shared than tasks which require common resources. The conditions under which such dual-task integrality can be fostered were assessed in a study in which three factors likely to influence the integrality between tasks were manipulated: inter-task redundancy, the physical proximity of tasks and the task relevant objects. Twelve subjects participated in three experimental sessions in which they performed both single and dual-tasks. The primary task was a pursuit step tracking task. The secondary tasks required the discrimination between different intensities or different spatial positions of a stimulus. The results are discussed in terms of a model of dual-task integrality.

  9. Tactile-dependant corticomotor facilitation is influenced by discrimination performance in seniors

    Directory of Open Access Journals (Sweden)

    Tremblay François

    2010-03-01

    Full Text Available Abstract Background Active contraction leads to facilitation of motor responses evoked by transcranial magnetic stimulation (TMS). In small hand muscles, motor facilitation is known to be also influenced by the nature of the task. Recently, we showed that corticomotor facilitation was selectively enhanced when young participants actively discriminated tactile symbols with the tip of their index or little finger. This tactile-dependant motor facilitation reflected, for the large part, attentional influences associated with performing tactile discrimination, since execution of a concomitant distraction task abolished facilitation. In the present report, we extend these observations to examine the influence of age on the ability to produce extra motor facilitation when the hand is used for sensory exploration. Methods Corticomotor excitability was tested in 16 healthy seniors (58-83 years) while they actively moved their right index finger over a surface under two task conditions. In the tactile discrimination (TD) condition, participants attended to the spatial location of two tactile symbols on the explored surface, while in the non discrimination (ND) condition, participants simply moved their finger over a blank surface. Changes in amplitude, in latency and in the silent period (SP) duration were measured from recordings of motor evoked potentials (MEP) in the right first dorsal interosseous muscle in response to TMS of the left motor cortex. Results Healthy seniors exhibited widely varying levels of performance with the TD task, older age being associated with lower accuracy and vice-versa. Large inter-individual variations were also observed in terms of tactile-specific corticomotor facilitation. Regrouping seniors into higher (n = 6) and lower performance groups (n = 10) revealed a significant task by performance interaction. This latter interaction reflected differences between higher and lower performance groups; tactile-related facilitation being

  10. Conditioning procedure and color discrimination in the honeybee Apis mellifera

    Science.gov (United States)

    Giurfa, Martin

    We studied the influence of the conditioning procedure on color discrimination by free-flying honeybees. We asked whether absolute and differential conditioning result in different discrimination capabilities for the same pairs of colored targets. In absolute conditioning, bees were rewarded on a single color; in differential conditioning, bees were rewarded on the same color but an alternative, non-rewarding, similar color was also visible. In both conditioning procedures, bees learned their respective task and could also discriminate the training stimulus from a novel stimulus that was perceptually different from the trained one. Discrimination between perceptually closer stimuli was possible after differential conditioning but not after absolute conditioning. Differences in attention inculcated by these training procedures may underlie the different discrimination performances of the bees.

  11. Activation in the Right Inferior Parietal Lobule Reflects the Representation of Musical Structure beyond Simple Pitch Discrimination

    Science.gov (United States)

    Royal, Isabelle; Vuvan, Dominique T.; Zendel, Benjamin Rich; Robitaille, Nicolas; Schönwiesner, Marc; Peretz, Isabelle

    2016-01-01

    Pitch discrimination tasks typically engage the superior temporal gyrus and the right inferior frontal gyrus. It is currently unclear whether these regions are equally involved in the processing of incongruous notes in melodies, which requires the representation of musical structure (tonality) in addition to pitch discrimination. To this aim, 14 participants completed two tasks while undergoing functional magnetic resonance imaging, one in which they had to identify a pitch change in a series of non-melodic repeating tones and a second in which they had to identify an incongruous note in a tonal melody. In both tasks, the deviants activated the right superior temporal gyrus. A contrast between deviants in the melodic task and deviants in the non-melodic task (melodic > non-melodic) revealed additional activity in the right inferior parietal lobule. Activation in the inferior parietal lobule likely represents processes related to the maintenance of tonal pitch structure in working memory during pitch discrimination. PMID:27195523

  12. Impaired somatosensory discrimination of shape in Parkinson's disease : Association with caudate nucleus dopaminergic function

    NARCIS (Netherlands)

    Weder, BJ; Leenders, KL; Vontobel, P; Nienhusmeier, M; Keel, A; Zaunbauer, W; Vonesch, T; Ludin, HP

    1999-01-01

    Tactile discrimination of macrogeometric objects in a two-alternative forced-choice procedure represents a demanding task involving somatosensory pathways and higher cognitive processing. The objects for somatosensory discrimination, i.e., rectangular parallelepipeds differing only in oblongness,

  13. Discrimination of complex human behavior by pigeons (Columba livia) and humans.

    Directory of Open Access Journals (Sweden)

    Muhammad A J Qadri

    Full Text Available The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors ("martial arts" vs. "Indian dance"). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species.

  14. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, home-schoolers seeking necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  15. The pigeon's discrimination of visual entropy: a logarithmic function.

    Science.gov (United States)

    Young, Michael E; Wasserman, Edward A

    2002-11-01

    We taught 8 pigeons to discriminate 16-icon arrays that differed in their visual variability or "entropy" to see whether the relationship between entropy and discriminative behavior is linear (in which equivalent differences in entropy should produce equivalent changes in behavior) or logarithmic (in which higher entropy values should be less discriminable from one another than lower entropy values). Pigeons received a go/no-go task in which the lower entropy arrays were reinforced for one group and the higher entropy arrays were reinforced for a second group. The superior discrimination of the second group was predicted by a theoretical analysis in which excitatory and inhibitory stimulus generalization gradients fall along a logarithmic, but not a linear scale. Reanalysis of previously published data also yielded results consistent with a logarithmic relationship between entropy and discriminative behavior.
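
    The entropy of an icon array here is the Shannon entropy of the distribution of icon types it contains, ranging from 0 bits (16 identical icons) to 4 bits (16 distinct icons). A minimal sketch of that computation follows; the example arrays are illustrative, not the trained stimuli.

```python
"""Shannon entropy of a 16-icon array, as used in entropy-discrimination work.

Sketch of the standard computation; the icon lists below are hypothetical
examples, not the arrays used in the experiment.
"""
import numpy as np
from collections import Counter

def array_entropy(icons):
    """Entropy (bits) of the distribution of icon types in one array."""
    counts = np.array(list(Counter(icons).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# 16 identical icons -> 0 bits; 16 different icons -> 4 bits
print(array_entropy(["A"] * 16))                        # 0.0
print(array_entropy([chr(65 + i) for i in range(16)]))  # 4.0
print(array_entropy(["A"] * 8 + ["B"] * 8))             # 1.0
```

    Under the logarithmic account favored by the data, equal steps in this entropy value are not equally discriminable: a given entropy difference is harder to detect between two high-entropy arrays than between two low-entropy arrays.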

  16. Intelligent Systems Approaches to Product Sound Quality Analysis

    Science.gov (United States)

    Pietila, Glenn M.

    As a product market becomes more competitive, consumers become more discriminating in the way in which they differentiate between engineered products. The consumer often makes a purchasing decision based on the sound emitted from the product during operation by using the sound to judge quality or annoyance. Therefore, in recent years, many sound quality analysis tools have been developed to evaluate the consumer preference as it relates to a product sound and to quantify this preference based on objective measurements. This understanding can be used to direct a product design process in order to help differentiate the product from competitive products or to establish an impression on consumers regarding a product's quality or robustness. The sound quality process is typically a statistical tool that is used to model subjective preference, or merit score, based on objective measurements, or metrics. In this way, new product developments can be evaluated in an objective manner without the laborious process of gathering a sample population of consumers for subjective studies each time. The most common model used today is the Multiple Linear Regression (MLR), although recently non-linear Artificial Neural Network (ANN) approaches are gaining popularity. This dissertation will review publicly available published literature and present additional intelligent systems approaches that can be used to improve on the current sound quality process. The focus of this work is to address shortcomings in the current paired comparison approach to sound quality analysis. This research will propose a framework for an adaptive jury analysis approach as an alternative to the current Bradley-Terry model. The adaptive jury framework uses statistical hypothesis testing to focus on sound pairings that are most interesting and is expected to address some of the restrictions required by the Bradley-Terry model. It will also provide a more amicable framework for an intelligent systems approach
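
    The paired-comparison approach mentioned above is commonly analysed with the Bradley-Terry model, in which each sound i has a merit score pi_i and the probability that a juror prefers sound i over sound j is pi_i / (pi_i + pi_j). The sketch below fits those merit scores with the standard minorization-maximization iteration on an invented win matrix; it illustrates the baseline model only, not the adaptive jury framework proposed in the dissertation.

```python
"""Bradley-Terry fit for paired-comparison sound-quality data (MM iteration).

Sketch with an invented win matrix; wins[i, j] = number of jurors preferring
sound i over sound j. Merit scores pi are normalised to sum to 1.
"""
import numpy as np

def bradley_terry(wins, n_iter=200):
    n = wins.shape[0]
    totals = wins + wins.T              # comparisons made per pair
    pi = np.ones(n) / n
    for _ in range(n_iter):
        new = np.zeros(n)
        for i in range(n):
            num = wins[i].sum()                          # total wins of sound i
            den = sum(totals[i, j] / (pi[i] + pi[j])
                      for j in range(n) if j != i)
            new[i] = num / den
        pi = new / new.sum()
    return pi

# Hypothetical jury data for four product sounds
wins = np.array([[0, 7, 8, 9],
                 [3, 0, 6, 7],
                 [2, 4, 0, 6],
                 [1, 3, 4, 0]], dtype=float)
print(bradley_terry(wins).round(3))   # estimated merit (preference) scores
```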

  17. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  18. Perception of musical timbre in congenital amusia: categorization, discrimination and short-term memory.

    Science.gov (United States)

    Marin, Manuela M; Gingras, Bruno; Stewart, Lauren

    2012-02-01

    Congenital amusia is a neurodevelopmental disorder that is characterized primarily by difficulties in the pitch domain. The aim of the present study was to investigate the perception of musical timbre in a group of individuals with congenital amusia by probing discrimination and short-term memory for real-world timbral stimuli as well as examining the ability of these individuals to sort instrumental tones according to their timbral similarity. Thirteen amusic individuals were matched with thirteen non-amusic controls on a range of background variables. The discrimination task included stimuli of two different durations and pairings of instrumental tones that reflected varying distances in a perceptual timbre space. Performance in the discrimination task was at ceiling for both groups. In contrast, amusic individuals scored lower than controls on the short-term timbral memory task. Amusic individuals also performed worse than controls on the sorting task, suggesting differences in the higher-order representation of musical timbre. These findings add to the emerging picture of amusia as a disorder that has consequences for the perception and memory of musical timbre, as well as pitch. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Sex-specific asymmetries in communication sound perception are not related to hand preference in an early primate

    Directory of Open Access Journals (Sweden)

    Scheumann Marina

    2008-01-01

    Full Text Available Abstract Background Left hemispheric dominance of language processing and handedness, previously thought to be unique to humans, is currently under debate. To gain an insight into the origin of lateralization in primates, we have studied gray mouse lemurs, suggested to represent the most ancestral primate condition. We explored potential functional asymmetries on the behavioral level by applying a combined handedness and auditory perception task. For testing handedness, we used a forced food-grasping task. For testing auditory perception, we adapted the head turn paradigm, originally established for exploring hemispheric specializations in conspecific sound processing in Old World monkeys, and exposed 38 subjects to control sounds and conspecific communication sounds of positive and negative emotional valence. Results The tested mouse lemur population did not show an asymmetry in hand preference or in orientation towards conspecific communication sounds. However, males, but not females, exhibited a significant right ear-left hemisphere bias when exposed to conspecific communication sounds of negative emotional valence. Orientation asymmetries were not related to hand preference. Conclusion Our results provide the first evidence for sex-specific asymmetries for conspecific communication sound perception in non-human primates. Furthermore, they suggest that hemispheric dominance for communication sound processing evolved before handedness, and that the two evolved independently of each other.

  20. Sound effects: Multimodal input helps infants find displaced objects.

    Science.gov (United States)

    Shinskey, Jeanne L

    2017-09-01

    Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition. Statement of contribution What is already known on this subject Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion. This suggests they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more

  1. P3a from white noise.

    Science.gov (United States)

    Frank, David W; Yee, Ryan B; Polich, John

    2012-08-01

    P3a and P3b event-related brain potentials (ERPs) were elicited with an auditory three-stimulus (target, distracter, and standard) discrimination task in which subjects responded only to the target. Distracter stimuli consisted of white noise or novel sounds with stimulus characteristics perceptually matched. Target/standard discrimination difficulty was manipulated by varying target/standard pitch differences to produce relatively easy, medium, and hard tasks. Error rate and response time increased with increases in task difficulty. P3a was larger for white noise than for novel sounds, was maximal over the central/parietal recording sites, and did not differ in size across difficulty levels. P3b was unaffected by distracter type, decreased as task difficulty increased, and was maximal over the parietal recording sites. The findings indicate that P3a from white noise is robust and should be useful for applied studies as it removes stimulus novelty variability. Theoretical perspectives are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Faster native vowel discrimination learning in musicians is mediated by an optimization of mnemonic functions.

    Science.gov (United States)

    Elmer, Stefan; Greber, Marielle; Pushparaj, Arethy; Kühnis, Jürg; Jäncke, Lutz

    2017-09-01

    The ability to discriminate phonemes varying in spectral and temporal attributes constitutes one of the most basic intrinsic elements underlying language learning mechanisms. Since previous work has consistently shown that professional musicians are characterized by perceptual and cognitive advantages in a variety of language-related tasks, and since vowels can be considered musical sounds within the domain of speech, here we investigated the behavioral and electrophysiological correlates of native vowel discrimination learning in a sample of professional musicians and non-musicians. We evaluated the contribution of both the neurophysiological underpinnings of perceptual (i.e., N1/P2 complex) and mnemonic functions (i.e., N400 and P600 responses) while the participants were instructed to judge whether pairs of native consonant-vowel (CV) syllables manipulated in the first formant transition of the vowel (i.e., from /tu/ to /to/) were identical or not. Results clearly demonstrated faster learning in musicians, compared to non-musicians, as reflected by shorter reaction times and higher accuracy. Most notably, in terms of morphology, time course, and voltage strength, this steeper learning curve was accompanied by distinctive N400 and P600 manifestations between the two groups. In contrast, we did not reveal any group differences during the early stages of auditory processing (i.e., N1/P2 complex), suggesting that faster learning was mediated by an optimization of mnemonic but not perceptual functions. Based on a clear taxonomy of the mnemonic functions involved in the task, results are interpreted as pointing to a relationship between faster learning mechanisms in musicians and an optimization of echoic (i.e., N400 component) and working memory (i.e., P600 component) functions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    Science.gov (United States)

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
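
    The visual-tree step can be pictured with a small sketch: cluster per-class mean deep features so that visually similar classes share a group (and hence a node classifier). This is a simplified stand-in for the paper's procedure; the feature values, dimensionality, and cluster count are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch: group visually similar classes by clustering their mean deep-feature
# vectors, so each group can be given its own node classifier downstream.
rng = np.random.default_rng(1)
n_classes, feat_dim, n_groups = 100, 256, 10
class_means = rng.normal(size=(n_classes, feat_dim))  # placeholder mean deep feature per class

group_of_class = KMeans(n_clusters=n_groups, n_init=10, random_state=0).fit_predict(class_means)
for g in range(n_groups):
    members = np.flatnonzero(group_of_class == g)
    print(f"group {g}: {len(members)} classes")
```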

  4. Sound frequency affects speech emotion perception: Results from congenital amusia

    Directory of Open Access Journals (Sweden)

    Sydney eLolli

    2015-09-01

    Full Text Available Congenital amusics, or tone-deaf individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying band-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody (MBEP) were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under band-pass and unfiltered speech conditions. Results showed a significant correlation between pitch discrimination threshold and emotion identification accuracy for band-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold > 16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between band-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation.

  5. Mapping symbols to sounds: electrophysiological correlates of the impaired reading process in dyslexia

    Directory of Open Access Journals (Sweden)

    Andreas eWidmann

    2012-03-01

    Full Text Available Dyslexic and control first grade school children were compared in a Symbol-to-Sound matching test based on a nonlinguistic audiovisual training which is known to have a remediating effect on dyslexia. Visual symbol patterns had to be matched with predicted sound patterns. Sounds incongruent with the corresponding visual symbol (thus not matching the prediction) elicited the N2b and P3a event-related potential (ERP) components relative to congruent sounds in control children. Their ERPs resembled the ERP effects previously reported for healthy adults with this paradigm. In dyslexic children, N2b onset latency was delayed and its amplitude significantly reduced over the left hemisphere, whereas P3a was absent. Moreover, N2b amplitudes significantly correlated with the reading skills. ERPs to sound changes in a control condition were unaffected. In addition, correctly predicted sounds, that is, sounds that are congruent with the visual symbol, elicited an early induced auditory gamma band response (GBR) reflecting synchronization of brain activity in normal-reading children as previously observed in healthy adults. However, dyslexic children showed no GBR. This indicates that visual symbolic and auditory sensory information are not integrated into a unitary audiovisual object representation in these children. Finally, incongruent sounds were followed by a later desynchronization of brain activity in the gamma band in both groups. This desynchronization was significantly larger in dyslexic children. Although both groups accomplished the task successfully, remarkable group differences in brain responses suggest that normal-reading children and dyslexic children recruit (partly) different brain mechanisms when solving the task. We propose that abnormal ERPs and GBRs in dyslexic readers indicate a deficit resulting in a widespread impairment in processing and integrating auditory and visual information and contributing to the reading impairment in dyslexia.

  6. When Less Is More: Poor Discrimination but Good Colour Memory in Autism

    Science.gov (United States)

    Heaton, Pamela; Ludlow, Amanda; Roberson, Debi

    2008-01-01

    In two experiments children with autism and two groups of controls matched for either chronological or non-verbal mental age were tested on tasks of colour discrimination and memory. The results from experiment 1 showed significantly poorer colour discrimination in children with autism in comparison to typically developing chronological age…

  7. Role of Head Teachers in Ensuring Sound Climate

    Science.gov (United States)

    Kor, Jacob; Opare, James K.

    2017-01-01

    The school climate is outlined in the literature as one of the most important within-school factors required for effective teaching and learning. As leaders in any organisation are assigned the role of ensuring a sound climate for work, head teachers also have the task of creating and maintaining an environment conducive to effective academic work…

  8. Colour discrimination and categorisation in Williams syndrome.

    Science.gov (United States)

    Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna

    2013-10-01

    Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Temporal discrimination, a cervical dystonia endophenotype: penetrance and functional correlates.

    Science.gov (United States)

    Kimmich, Okka; Molloy, Anna; Whelan, Robert; Williams, Laura; Bradley, David; Balsters, Joshua; Molloy, Fiona; Lynch, Tim; Healy, Daniel G; Walsh, Cathal; O'Riordan, Seán; Reilly, Richard B; Hutchinson, Michael

    2014-05-01

    The pathogenesis of adult-onset primary dystonia remains poorly understood. There is variable age-related and gender-related expression of the phenotype, the commonest of which is cervical dystonia. Endophenotypes may provide insight into underlying genetic and pathophysiological mechanisms of dystonia. The temporal discrimination threshold (TDT)-the shortest time interval at which two separate stimuli can be detected as being asynchronous-is abnormal both in patients with cervical dystonia and in their unaffected first-degree relatives. Functional magnetic resonance imaging (fMRI) studies have shown that putaminal activation positively correlates with the ease of temporal discrimination between two stimuli in healthy individuals. We hypothesized that abnormal temporal discrimination would exhibit similar age-related and gender-related penetrance as cervical dystonia and that unaffected relatives with an abnormal TDT would have reduced putaminal activation during a temporal discrimination task. TDTs were examined in a group of 192 healthy controls and in 158 unaffected first-degree relatives of 84 patients with cervical dystonia. In 24 unaffected first-degree relatives, fMRI scanning was performed during a temporal discrimination task. The prevalence of abnormal TDTs in unaffected female relatives reached 50% after age 48 years; whereas, in male relatives, penetrance of the endophenotype was reduced. By fMRI, relatives who had abnormal TDTs, compared with relatives who had normal TDTs, had significantly less activation in the putamina and in the middle frontal and precentral gyri. Only the degree of reduction of putaminal activity correlated significantly with worsening of temporal discrimination. These findings further support abnormal temporal discrimination as an endophenotype of cervical dystonia involving disordered basal ganglia circuits. © 2014 International Parkinson and Movement Disorder Society.

  10. From bird to sparrow: Learning-induced modulations in fine-grained semantic discrimination.

    Science.gov (United States)

    De Meo, Rosanna; Bourquin, Nathalie M-P; Knebel, Jean-François; Murray, Micah M; Clarke, Stephanie

    2015-09-01

    Recognition of environmental sounds is believed to proceed through discrimination steps from broad to more narrow categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control species (C), which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre vs. post changes in AEPs were significantly different between T and C species: i) at 206-232 ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human vs. animal vocalizations. Moreover, the training-induced plasticity is reflected by the sharpening of a left lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs, however, also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from lower-level feature analysis such as apperception. We therefore suggest that the access to objects within an auditory semantic category is different and depends on the subject's level of expertise. More specifically, correct intra

  11. Is 9 louder than 1? Audiovisual cross-modal interactions between number magnitude and judged sound loudness.

    Science.gov (United States)

    Alards-Tomalin, Doug; Walker, Alexander C; Shaw, Joshua D M; Leboe-McGowan, Launa C

    2015-09-01

    The cross-modal impact of number magnitude (i.e. Arabic digits) on perceived sound loudness was examined. Participants compared a target sound's intensity level against a previously heard reference sound (which they judged as quieter or louder). Paired with each target sound was a task irrelevant Arabic digit that varied in magnitude, being either small (1, 2, 3) or large (7, 8, 9). The degree to which the sound and the digit were synchronized was manipulated, with the digit and sound occurring simultaneously in Experiment 1, and the digit preceding the sound in Experiment 2. Firstly, when target sounds and digits occurred simultaneously, sounds paired with large digits were categorized as loud more frequently than sounds paired with small digits. Secondly, when the events were separated, number magnitude ceased to bias sound intensity judgments. In Experiment 3, the events were still separated, however the participants held the number in short-term memory. In this instance the bias returned. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. The dispersion-focalization theory of sound systems

    Science.gov (United States)

    Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie

    2005-04-01

    The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)] and focalization driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectra distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks, easier to process and memorize in the auditory system. Evidence for increased stability of focal vowels in short-term memory was provided in a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data on children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, in relation to available databases of human language inventories.
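
    The dispersion term can be written out concretely. The sketch below computes the sum of inverse squared inter-vowel distances for a hypothetical three-vowel system, collapsing the theory's (F1, F2, F3, F4)-with-F2' space to a (F1, F2) Bark space purely for illustration; the formant values and the particular Bark conversion are assumptions, not the theory's full formulation.

```python
import math
from itertools import combinations

def hz_to_bark(f):
    """Traunmüller's Hz-to-Bark approximation (one common choice)."""
    return 26.81 * f / (1960.0 + f) - 0.53

# Hypothetical three-vowel system /i a u/ with rough (F1, F2) values in Hz.
vowels = {"i": (280, 2250), "a": (750, 1300), "u": (300, 800)}
bark = {v: tuple(hz_to_bark(f) for f in formants) for v, formants in vowels.items()}

# Dispersion "energy": sum of inverse squared inter-vowel distances.
# Lower energy corresponds to a better-dispersed vowel system.
energy = sum(
    1.0 / sum((a - b) ** 2 for a, b in zip(bark[v1], bark[v2]))
    for v1, v2 in combinations(vowels, 2)
)
print(f"dispersion energy: {energy:.4f}")
```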

  13. Background noise exerts diverse effects on the cortical encoding of foreground sounds.

    Science.gov (United States)

    Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E

    2017-08-01

    In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal in noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood. We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may

  14. Auditory Discrimination of Anisochrony: Influence of the Tempo and Musical Backgrounds of Listeners

    Science.gov (United States)

    Ehrle, N.; Samson, S.

    2005-01-01

    This study explored the influence of several factors, physical and human, on anisochrony thresholds measured with an adaptive two-alternative forced-choice paradigm. The effect of the number and duration of sounds on anisochrony discrimination was tested in the first experiment, as well as potential interactions between each of these factors and…

  15. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e. how close to real life can the electronically amplified sounds be perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...... prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative...... to the natural condition though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from...

  16. How Iconicity Helps People Learn New Words: Neural Correlates and Individual Differences in Sound-Symbolic Bootstrapping

    Directory of Open Access Journals (Sweden)

    Gwilym Lockwood

    2016-07-01

    Full Text Available Sound symbolism is increasingly understood as involving iconicity, or perceptual analogies and cross-modal correspondences between form and meaning, but the search for its functional and neural correlates is ongoing. Here we study how people learn sound-symbolic words, using behavioural, electrophysiological and individual difference measures. Dutch participants learned Japanese ideophones (lexical sound-symbolic words) with a translation of either the real meaning (in which form and meaning show cross-modal correspondences) or the opposite meaning (in which form and meaning show cross-modal clashes). Participants were significantly better at identifying the words they learned in the real condition, correctly remembering the real word pairing 86.7% of the time, but the opposite word pairing only 71.3% of the time. Analysing event-related potentials (ERPs) during the test round showed that ideophones in the real condition elicited a greater P3 component and late positive complex than ideophones in the opposite condition. In a subsequent forced choice task, participants were asked to guess the real translation from two alternatives. They did this with 73.0% accuracy, well above chance level even for words they had encountered in the opposite condition, showing that people are generally sensitive to the sound-symbolic cues in ideophones. Individual difference measures showed that the ERP effect in the test round of the learning task was greater for participants who were more sensitive to sound symbolism in the forced choice task. The main driver of the difference was a lower amplitude of the P3 component in response to ideophones in the opposite condition, suggesting that people who are more sensitive to sound symbolism may have more difficulty suppressing conflicting cross-modal information. The findings provide new evidence that cross-modal correspondences between sound and meaning facilitate word learning, while cross-modal clashes make word

  17. Music and Sound Elements in Time Estimation and Production of Children with Attention Deficit/Hyperactivity Disorder (ADHD)

    Directory of Open Access Journals (Sweden)

    Luiz Rogerio Jorgensen Carrer

    2015-09-01

    Full Text Available ADHD involves cognitive and behavioral aspects with impairments in many environments of children and their families' lives. Music, with its playful, spontaneous, affective, motivational, temporal and rhythmic dimensions, can be of great help for studying the aspects of time processing in ADHD. In this article we studied time processing with simple sounds and music in children with ADHD with the hypothesis that children with ADHD have a different performance when compared with children with normal development in tasks of time estimation and production. The main objective was to develop sound and musical tasks to evaluate and correlate the performance of children with ADHD, with and without methylphenidate, compared to a control group with typical development. The study involved 36 participants aged 6 to 14 years, recruited at NANI-Unifesp/SP, sub-divided into three groups with 12 children in each. Data was collected through a musical keyboard using Logic Audio Software 9.0 on the computer that recorded the participant's performance in the tasks. Tasks were divided into sections: spontaneous time production, time estimation with simple sounds and time estimation with music. Results: 1. Performance of ADHD groups in temporal estimation of simple sounds in short time intervals (30 ms) was statistically lower than that of the control group (p<0.05); 2. In the task comparing musical excerpts of the same duration (7 s), ADHD groups considered the tracks longer when the musical notes had longer durations, while in the control group, the duration was related to the density of musical notes in the track. The positive average performance observed in the three groups in most tasks perhaps indicates the possibility that music can, in some way, positively modulate the symptoms of inattention in ADHD.

  18. Pigeons' Discrimination of Michotte's Launching Effect

    Science.gov (United States)

    Young, Michael E.; Beckmann, Joshua S.; Wasserman, Edward A.

    2006-01-01

    We trained four pigeons to discriminate a Michotte launching animation from three other animations using a go/no-go task. The pigeons received food for pecking at one of the animations, but not for pecking at the others. The four animations featured two types of interactions among objects: causal (direct launching) and noncausal (delayed, distal,…

  19. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians due to the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration between both modalities plays an essential role in the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, due to their extensive auditory–motor practice routine during musical training, have superior performance and learning capabilities when receiving auditory feedback during SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory-motor integration in musicians on a behavioral level.

  20. Demonstration of Einstein-Podolsky-Rosen steering with enhanced subchannel discrimination

    Science.gov (United States)

    Sun, Kai; Ye, Xiang-Jun; Xiao, Ya; Xu, Xiao-Ye; Wu, Yu-Chun; Xu, Jin-Shi; Chen, Jing-Ling; Li, Chuan-Feng; Guo, Guang-Can

    2018-03-01

    Einstein-Podolsky-Rosen (EPR) steering describes a quantum nonlocal phenomenon in which one party can nonlocally affect the other's state through local measurements. It reveals an additional concept of quantum nonlocality, which stands between quantum entanglement and Bell nonlocality. Recently, a quantum information task named subchannel discrimination (SD) was shown to provide a necessary and sufficient characterization of EPR steering. The success probability of SD using steerable states is higher than that using any unsteerable states, even when they are entangled. However, the detailed construction of such subchannels and the experimental realization of the corresponding task are still technologically challenging. In this work, we designed a feasible collection of subchannels for a quantum channel and experimentally demonstrated the corresponding SD task, in which the probabilities of correct discrimination are clearly enhanced by exploiting steerable states. Our results provide a concrete example to operationally demonstrate EPR steering and shed new light on the potential application of EPR steering.

  1. Psychophysical Estimates of Frequency Discrimination: More than Just Limitations of Auditory Processing

    Directory of Open Access Journals (Sweden)

    Beate Sabisch

    2013-07-01

    Full Text Available Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as the 2I_6A_X and 3I_2AFC designs. The role of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES), and musical experience in predicting frequency discrimination thresholds on each task was assessed using multiple regression analyses. The 2I_6A_X task was more cognitively demanding and hence more susceptible to differences specifically in SES and musical training. Performance on this task did not, however, relate to nonword repetition ability (a measure of language learning capacity). The 3I_2AFC task, by contrast, was only susceptible to musical training. Moreover, thresholds measured using it predicted some variance in nonword repetition performance. This design thus seems suitable for use in studies addressing questions regarding the role of auditory processing in supporting language and literacy development.
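
    For readers unfamiliar with adaptive forced-choice designs, the sketch below simulates a generic 2-down-1-up staircase converging on a frequency-discrimination threshold. The staircase rule, step sizes, and the simulated listener are illustrative assumptions; they do not reproduce the 2I_6A_X or 3I_2AFC procedures used in the study.

```python
import math
import random

# Toy 2-down-1-up staircase estimating a frequency-discrimination threshold
# from a simulated listener (all parameters are assumptions for illustration).
random.seed(0)
true_limen_hz = 8.0       # simulated listener's discrimination limen
delta = 64.0              # current frequency difference (Hz)
step = 2.0                # multiplicative step factor
streak, reversals, last_dir = 0, [], None

while len(reversals) < 8:
    p_correct = 0.5 + 0.5 * (1.0 - math.exp(-delta / true_limen_hz))
    if random.random() < p_correct:
        streak += 1
        if streak < 2:
            continue
        direction, streak, delta = "down", 0, delta / step   # two correct -> harder
    else:
        direction, streak, delta = "up", 0, delta * step      # one error -> easier
    if last_dir and direction != last_dir:
        reversals.append(delta)                               # record a reversal
    last_dir = direction

print(f"threshold estimate: {sum(reversals[-6:]) / 6:.1f} Hz")
```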

  2. The Effect of Task Duration on Event-Based Prospective Memory: A Multinomial Modeling Approach

    Directory of Open Access Journals (Sweden)

    Hongxia Zhang

    2017-11-01

    Full Text Available Remembering to perform an action when a specific event occurs is referred to as Event-Based Prospective Memory (EBPM). This study investigated how EBPM performance is affected by task duration by having university students (n = 223) perform an EBPM task that was embedded within an ongoing computer-based color-matching task. For this experiment, we separated the overall task's duration into the filler task duration and the ongoing task duration. The filler task duration is the length of time between the intention and the beginning of the ongoing task, and the ongoing task duration is the length of time between the beginning of the ongoing task and the appearance of the first Prospective Memory (PM) cue. The filler task duration and ongoing task duration were further divided into three levels: 3, 6, and 9 min. Two factors were then orthogonally manipulated between-subjects using a multinomial processing tree model to separate the effects of different task durations on the two EBPM components. A mediation model was then created to verify whether task duration influences EBPM via self-reminding or discrimination. The results reveal three points. (1) Lengthening the duration of ongoing tasks had a negative effect on EBPM performance, while lengthening the duration of the filler task had no significant effect on it. (2) As the filler task was lengthened, both the prospective and retrospective components showed a decreasing and then increasing trend. Also, when the ongoing task duration was lengthened, the prospective component decreased while the retrospective component significantly increased. (3) The mediating effect of discrimination between the task duration and EBPM performance was significant. We concluded that different task durations influence EBPM performance through different components, with discrimination being the mediator between task duration and EBPM performance.

  3. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    Science.gov (United States)

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination, and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting on the N170 from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Multisensory interaction in vibrotactile detection and discrimination of amplitude modulation

    DEFF Research Database (Denmark)

    Teodorescu, Kinneret; Bouchigny, Sylvain; Hoffmann, Pablo F.

    2011-01-01

    Perception of vibration during drilling demands integration of haptic and auditory information with force information. In this study we explored the ability to detect and discriminate changes in vibrotactile stimulus amplitude based either on purely haptic feedback or together with congruent...... skill of maxilla-facial surgery strongly relies on enhanced touch perception, as measured in reaction times and discrimination ability in bi-modal vibro-auditory conditions. These observations suggest that acquisition of mandibular surgery skill has led to an enhanced representation of vibro......-tactile modulations in relevant stimuli ranges. Altogether, our results provide a basis to assume that during acquisition of mandibular drilling skill, trainees may benefit from training of relevant basic aspects of touch perception - sensitivity to vibration and accompanying modulations of sound.

  5. Cross-modal selective attention: on the difficulty of ignoring sounds at the locus of visual attention.

    Science.gov (United States)

    Spence, C; Ranson, J; Driver, J

    2000-02-01

    In three experiments, we investigated whether the ease with which distracting sounds can be ignored depends on their distance from fixation and from attended visual events. In the first experiment, participants shadowed an auditory stream of words presented behind their heads, while simultaneously fixating visual lip-read information consistent with the relevant auditory stream, or meaningless "chewing" lip movements. An irrelevant auditory stream of words, which participants had to ignore, was presented either from the same side as the fixated visual stream or from the opposite side. Selective shadowing was less accurate in the former condition, implying that distracting sounds are harder to ignore when fixated. Furthermore, the impairment when fixating toward distractor sounds was greater when speaking lips were fixated than when chewing lips were fixated, suggesting that people find it particularly difficult to ignore sounds at locations that are actively attended for visual lipreading rather than merely passively fixated. Experiments 2 and 3 tested whether these results are specific to cross-modal links in speech perception by replacing the visual lip movements with a rapidly changing stream of meaningless visual shapes. The auditory task was again shadowing, but the active visual task was now monitoring for a specific visual shape at one location. A decrement in shadowing was again observed when participants passively fixated toward the irrelevant auditory stream. This decrement was larger when participants performed a difficult active visual task there versus fixating, but not for a less demanding visual task versus fixation. The implications for cross-modal links in spatial attention are discussed.

  6. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    Science.gov (United States)

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: A single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgery procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  7. Discriminant analysis of functional optical topography for schizophrenia diagnosis

    Science.gov (United States)

    Chuang, Ching-Cheng; Nakagome, Kazuyuki; Pu, Shenghong; Lan, Tsuo-Hung; Lee, Chia-Yen; Sun, Chia-Wei

    2014-01-01

    Abnormal prefrontal function plays a central role in the cognitive deficits of schizophrenic patients; however, the character of the relationship between discriminant analysis and prefrontal activation remains undetermined. Recently, evidence of low prefrontal cortex (PFC) activation in individuals with schizophrenia has also been found during verbal fluency tests (VFT) and other cognitive tests with several neuroimaging methods. The purpose of this study is to assess the hemodynamic changes of the PFC and discriminant analysis between schizophrenia patients and healthy controls during a VFT task by utilizing functional optical topography. A total of 99 subjects including 53 schizophrenic patients and 46 age- and gender-matched healthy controls were studied. The results showed that the healthy group had larger activation in the right and left PFC than in the middle PFC. In addition, the schizophrenic group showed weaker task performance and lower activation in the whole PFC than the healthy group. The result of the discriminant analysis showed a significant difference with P value <0.001 in six channels (CH 23, 29, 31, 40, 42, 52) between the schizophrenic and healthy groups. Finally, 68.69% and 71.72% of subjects were correctly classified as being schizophrenic or healthy using all 52 channels and the six significantly different channels, respectively.
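
    The classification step can be sketched as a standard linear discriminant analysis with cross-validation over per-channel features. The data below are synthetic stand-ins for the optical topography channel measures; the group sizes follow the abstract, everything else is assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Sketch: classify patients vs controls from per-channel hemodynamic features.
rng = np.random.default_rng(0)
n_patients, n_controls, n_channels = 53, 46, 6
X_pat = rng.normal(loc=-0.2, scale=1.0, size=(n_patients, n_channels))  # lower activation
X_con = rng.normal(loc=0.2, scale=1.0, size=(n_controls, n_channels))
X = np.vstack([X_pat, X_con])
y = np.array([1] * n_patients + [0] * n_controls)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2%}")
```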

  8. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  9. Instrument Identification in Polyphonic Music: Feature Weighting to Minimize Influence of Sound Overlaps

    Directory of Open Access Journals (Sweden)

    Goto Masataka

    2007-01-01

    Full Text Available We provide a new solution to the problem of feature variations caused by the overlapping of sounds in instrument identification in polyphonic music. When multiple instruments simultaneously play, partials (harmonic components) of their sounds overlap and interfere, which makes the acoustic features different from those of monophonic sounds. To cope with this, we weight features based on how much they are affected by overlapping. First, we quantitatively evaluate the influence of overlapping on each feature as the ratio of the within-class variance to the between-class variance in the distribution of training data obtained from polyphonic sounds. Then, we generate feature axes using a weighted mixture that minimizes the influence via linear discriminant analysis. In addition, we improve instrument identification using musical context. Experimental results showed that the recognition rates using both feature weighting and musical context were 84.1% for duo, 77.6% for trio, and 72.3% for quartet; those without using either were 53.4%, 49.6%, and 46.5%, respectively.
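
    A minimal version of the described feature weighting is sketched below: each feature's weight is derived from a between-class to within-class variance ratio estimated on polyphonic training data, so features disturbed by overlapping partials are down-weighted. The data and feature dimensions are placeholders, and the exact weighting formula is an illustrative stand-in for the paper's procedure.

```python
import numpy as np

def discriminability_weights(X, y):
    """Per-feature weight from the between-class / within-class variance ratio.

    A larger ratio means a feature stays class-separable despite overlapping
    partials, so it receives more weight. Illustrative only.
    """
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = sum((X[y == c].mean(axis=0) - overall_mean) ** 2 * (y == c).sum()
                  for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    ratio = between / (within + 1e-12)
    return ratio / ratio.sum()

# Synthetic example: 3 instruments, 5 acoustic features from polyphonic mixtures.
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 30)
X = rng.normal(size=(90, 5)) + np.outer(y, [1.5, 0.8, 0.0, 0.0, 0.2])
print(np.round(discriminability_weights(X, y), 3))
```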

  10. Ethnicity- and sex-based discrimination and the maintenance of self-esteem.

    Science.gov (United States)

    Lönnqvist, Jan-Erik; Hennig-Schmidt, Heike; Walkowitz, Gari

    2015-01-01

    The psychological underpinnings of labor market discrimination were investigated by having participants from Israel, the West Bank and Germany (N = 205) act as employers in a stylized employment task in which they ranked, set wages, and imposed a minimum effort level on applicants. State self-esteem was measured before and after the employment task, in which applicant ethnicity and sex were salient. The applicants were real people and all behavior was monetarily incentivized. Supporting the full self-esteem hypothesis of the social identity approach, low self-esteem in women was associated with assigning higher wages to women than to men, and such behavior was related to the maintenance of self-esteem. The narrower hypothesis that successful intergroup discrimination serves to protect self-esteem received broader support. Across all participants, both ethnicity- and sex-based discrimination of out-groups were associated with the maintenance of self-esteem, with the former showing a stronger association than the latter.

  11. Ethnicity- and Sex-Based Discrimination and the Maintenance of Self-Esteem

    Science.gov (United States)

    2015-01-01

    The psychological underpinnings of labor market discrimination were investigated by having participants from Israel, the West Bank and Germany (N = 205) act as employers in a stylized employment task in which they ranked, set wages, and imposed a minimum effort level on applicants. State self-esteem was measured before and after the employment task, in which applicant ethnicity and sex were salient. The applicants were real people and all behavior was monetarily incentivized. Supporting the full self-esteem hypothesis of the social identity approach, low self-esteem in women was associated with assigning higher wages to women than to men, and such behavior was related to the maintenance of self-esteem. The narrower hypothesis that successful intergroup discrimination serves to protect self-esteem received broader support. Across all participants, both ethnicity- and sex-based discrimination of out-groups were associated with the maintenance of self-esteem, with the former showing a stronger association than the latter. PMID:25978646

  12. Gender and the right to non-discrimination in international human rights law

    OpenAIRE

    Netkova, Bistra

    2016-01-01

    Discrimination against women based on the fact that they are women is a deeply rooted practice in all societies. However, the level of discrimination varies greatly with the level of development of the given society, and it both influences and is influenced by the status of women in that society. Addressing this gender-based discrimination is a difficult task because it is closely linked to the concept of equality, and to states' actions and inactions. The article establishes tha...

  13. Design and evaluation of nonverbal sound-based input for those with motor handicapped.

    Science.gov (United States)

    Punyabukkana, Proadpran; Chanjaradwichai, Supadaech; Suchato, Atiwong

    2013-03-01

    Most personal computing interfaces rely on the users' ability to use their hand and arm movements to interact with on-screen graphical widgets via mainstream devices, including keyboards and mice. Without proper assistive devices, this style of input poses difficulties for motor-handicapped users. We propose a sound-based input scheme enabling users to operate Windows' Graphical User Interface by producing hums and fricatives through regular microphones. Hierarchically arranged menus are utilized so that only minimal numbers of different actions are required at a time. The proposed scheme was found to be accurate and capable of responding promptly compared to other sound-based schemes. Being able to select from multiple item-selecting modes helped reduce the average time needed to complete tasks in the test scenarios to almost half the time needed when the tasks were performed solely through cursor movements. Still, helping users select the most appropriate modes for the desired tasks should further improve the overall usability of the proposed scheme.
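
    One way to picture this interaction style is a two-sound, hierarchical menu loop: one sound class cycles the highlight through the current menu level, the other selects the highlighted item. The sketch below assumes a hum/fricative mapping and menu contents of its own invention; it is not the authors' implementation.

```python
# Hypothetical hierarchical menu driven by two recognized sound classes:
# "hum" cycles the highlight, "fricative" activates the highlighted item.
MENU = {
    "root": ["mouse", "keyboard", "windows"],
    "mouse": ["move", "left click", "right click"],
    "keyboard": ["letters", "digits"],
    "windows": ["switch", "close"],
}

def navigate(events):
    level, index = "root", 0
    for sound in events:                      # sound is "hum" or "fricative"
        items = MENU[level]
        if sound == "hum":                    # cycle highlight
            index = (index + 1) % len(items)
        elif sound == "fricative":            # select highlighted item
            chosen = items[index]
            print(f"selected: {chosen}")
            level, index = (chosen, 0) if chosen in MENU else ("root", 0)
    return level

navigate(["hum", "fricative", "hum", "hum", "fricative"])
```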

  14. A Relational Database Model and Tools for Environmental Sound Recognition

    Directory of Open Access Journals (Sweden)

    Yuksel Arslan

    2017-12-01

    Full Text Available Environmental sound recognition (ESR) has become a hot topic in recent years. ESR is mainly based on machine learning (ML), and ML algorithms first require a training database. This database must comprise the sounds to be recognized and other related sounds. An ESR system needs the database during training, testing and in the production stage. In this paper, we present the design and pilot establishment of a database which will assist all researchers who want to establish an ESR system. This database employs a relational database model, which has not been used for this task before. We explain the design and implementation details of the database and the data collection and loading process. We also describe the tools and the graphical user interfaces developed for a desktop application and for the Web.
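
    A relational layout of the kind described could look like the following minimal sketch: a sound class table, a recording table, and a foreign key between them, built here with SQLite. The table and column names are hypothetical, not the paper's actual schema.

```python
import sqlite3

# Hypothetical minimal relational schema for an environmental-sound database.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sound_class (
    class_id    INTEGER PRIMARY KEY,
    name        TEXT NOT NULL UNIQUE          -- e.g. 'dog bark', 'siren'
);
CREATE TABLE recording (
    rec_id      INTEGER PRIMARY KEY,
    class_id    INTEGER NOT NULL REFERENCES sound_class(class_id),
    file_path   TEXT NOT NULL,
    sample_rate INTEGER,
    split       TEXT CHECK (split IN ('train', 'test'))
);
""")
con.execute("INSERT INTO sound_class (name) VALUES ('dog bark')")
con.execute(
    "INSERT INTO recording (class_id, file_path, sample_rate, split) "
    "VALUES (1, 'bark_001.wav', 44100, 'train')"
)
rows = con.execute(
    "SELECT r.file_path, c.name FROM recording r JOIN sound_class c USING(class_id)"
).fetchall()
print(rows)
```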

  15. Sound frequency affects speech emotion perception: results from congenital amusia.

    Science.gov (United States)

    Lolli, Sydney L; Lewenstein, Ari D; Basurto, Julian; Winnik, Sean; Loui, Psyche

    2015-01-01

    Congenital amusics, or "tone-deaf" individuals, show difficulty in perceiving and producing small pitch differences. While amusia has marked effects on music perception, its impact on speech perception is less clear. Here we test the hypothesis that individual differences in pitch perception affect judgment of emotion in speech, by applying low-pass filters to spoken statements of emotional speech. A norming study was first conducted on Mechanical Turk to ensure that the intended emotions from the Macquarie Battery for Evaluation of Prosody were reliably identifiable by US English speakers. The most reliably identified emotional speech samples were used in Experiment 1, in which subjects performed a psychophysical pitch discrimination task, and an emotion identification task under low-pass and unfiltered speech conditions. Results showed a significant correlation between pitch-discrimination threshold and emotion identification accuracy for low-pass filtered speech, with amusics (defined here as those with a pitch discrimination threshold >16 Hz) performing worse than controls. This relationship with pitch discrimination was not seen in unfiltered speech conditions. Given the dissociation between low-pass filtered and unfiltered speech conditions, we inferred that amusics may be compensating for poorer pitch perception by using speech cues that are filtered out in this manipulation. To assess this potential compensation, Experiment 2 was conducted using high-pass filtered speech samples intended to isolate non-pitch cues. No significant correlation was found between pitch discrimination and emotion identification accuracy for high-pass filtered speech. Results from these experiments suggest an influence of low frequency information in identifying emotional content of speech.
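
    For readers who want to reproduce the kind of low-pass manipulation used here, a minimal SciPy sketch follows; the 500 Hz cutoff, filter order and file names are placeholders rather than the study's actual settings.

      # Minimal sketch: low-pass filtering a speech recording to attenuate
      # cues above a chosen cutoff. Cutoff and file names are placeholders.
      import numpy as np
      from scipy.io import wavfile
      from scipy.signal import butter, filtfilt

      fs, speech = wavfile.read("statement.wav")   # hypothetical mono file
      speech = speech.astype(np.float64)

      cutoff_hz = 500.0                            # assumed, not the study's cutoff
      b, a = butter(N=4, Wn=cutoff_hz / (fs / 2), btype="low")
      filtered = filtfilt(b, a, speech)            # zero-phase low-pass filter

      wavfile.write("statement_lowpass.wav", fs, filtered.astype(np.int16))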

  16. Cross-Modal Correspondence between Brightness and Chinese Speech Sound with Aspiration

    Directory of Open Access Journals (Sweden)

    Sachiko Hirata

    2011-10-01

    Full Text Available Phonetic symbolism is the phenomenon of speech sounds evoking images based on sensory experiences; it is often discussed together with cross-modal correspondence. Using Garner's task, Hirata, Kita, and Ukita (2009) showed a cross-modal congruence between brightness and voiced/voiceless consonants in Japanese speech sounds, which is known as phonetic symbolism. In the present study, we examined the effect of the meaning of mimetics (lexical words whose sound reflects their meaning, like “ding-dong”) in the Japanese language on the cross-modal correspondence. We conducted an experiment with Chinese speech sounds with or without aspiration, using Chinese participants. Chinese vocabulary also contains mimetics, but the presence of aspiration is unrelated to the meaning of Chinese mimetics. As a result, Chinese speech sounds with aspiration, which resemble voiceless consonants, were matched with white, whereas those without aspiration were matched with black. This pattern is identical to that found in Japanese participants and consequently suggests that cross-modal correspondence occurs without any effect of the meaning of mimetics. Whether these cross-modal correspondences are based purely on the physical properties of the speech sound or are affected by phonetic properties remains a question for further study.

  17. Abilities in tactile discrimination of textures in adult rats exposed to enriched or impoverished environments.

    Science.gov (United States)

    Bourgeon, Stéphanie; Xerri, Christian; Coq, Jacques-Olivier

    2004-08-12

    In previous studies, we have shown that housing in enriched environment for about 3 months after weaning improved the topographic organization and decreased the size of the receptive fields (RFs) located on the glabrous skin surfaces in the forepaw maps of the primary somatosensory cortex (SI) in rats [Exp. Brain Res. 121 (1998) 191]. In contrast, housing in impoverished environment induced a degradation of the SI forepaw representation, characterized by topographic disruptions, a reduction of the cutaneous forepaw area and an enlargement of the glabrous RFs [Exp. Brain Res. 129 (1999) 518]. Based on these two studies, we postulated that these representational alterations could underlie changes in haptic perception. Therefore, the present study was aimed at determining the influence of housing conditions on the rat's abilities in tactile texture discrimination. After a 2-month exposure to enriched or impoverished environments, rats were trained to perform a discrimination task during locomotion on floorboards of different roughness. At the end of every daily behavioral session, rats were replaced in their respective housing environment. Rats had to discriminate homogeneous (low roughness) from heterogeneous floorboards (combination of two different roughness levels). To determine the maximum performance in texture discrimination, the roughness contrast of the heterogeneous texture was gradually reduced, so that homogeneous and heterogeneous floorboards became harder to differentiate. We found that the enriched rats learned the first steps of the behavioral task faster than the impoverished rats, whereas both groups exhibited similar performances in texture discrimination. An individual "predilection" for either homogeneous or heterogeneous floorboards, presumably reflecting a behavioral strategy, seemed to account for the absence of differences in haptic discrimination between groups. The sensory experience depending on the rewarded texture discrimination task

  18. Abstract numerical discrimination learning in rats.

    Science.gov (United States)

    Taniuchi, Tohru; Sugihara, Junko; Wakashima, Mariko; Kamijo, Makiko

    2016-06-01

    In this study, we examined rats' discrimination learning of the numerical ordering positions of objects. In Experiments 1 and 2, five out of seven rats successfully learned to respond to the third of six identical objects in a row and showed reliable transfer of this discrimination to novel stimuli after being trained with three different training stimuli. In Experiment 3, the three rats from Experiment 2 continued to be trained to respond to the third object in an object array, which included an odd object that needed to be excluded when identifying the target third object. All three rats acquired this selective-counting task of specific stimuli, and two rats showed reliable transfer of this selective-counting performance to test sets of novel stimuli. In Experiment 4, the three rats from Experiment 3 quickly learned to respond to the third stimulus in object rows consisting of either six identical or six different objects. These results offer strong evidence for abstract numerical discrimination learning in rats.

  19. Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition.

    Science.gov (United States)

    Popp, Margot; Trumpp, Natalie M; Kiefer, Markus

    2016-01-01

    Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while research on feature-specific conceptual category differences in verbs has mainly focused on body-part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity-reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action and sound related nouns. In line with grounded cognition theories, our ERP study provides evidence for a differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.

  20. Teaching Letter Sounds in Preschool, Kindergarten, and Special Education: Five Strategies to Ease the Memory Burden

    Science.gov (United States)

    Gordon, Lynn

    2010-01-01

    Teaching students the most frequent sounds of the alphabet letters is the first crucial step in good phonics instruction. But beginning letter and sound lessons, especially if poorly taught or too rapidly paced, can be overwhelming and confusing for some young children and struggling readers. How can we simplify the cognitive task for such…

  1. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
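
    The sensitivity index d' reported above is conventionally computed from hit and false-alarm rates; a minimal sketch with made-up counts is shown below.

      # Minimal sketch: d' for an old/new recognition task. Counts are made up.
      from scipy.stats import norm

      hits, misses = 42, 8                  # responses to repeated ("old") sounds
      false_alarms, correct_rej = 12, 38    # responses to initial ("new") sounds

      def rate(x, n):
          # Mild correction so that rates of 0 or 1 do not give infinite z-scores.
          return (x + 0.5) / (n + 1.0)

      hit_rate = rate(hits, hits + misses)
      fa_rate = rate(false_alarms, false_alarms + correct_rej)
      d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
      print(f"d' = {d_prime:.2f}")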

  2. A new semantic vigilance task: vigilance decrement, workload, and sensitivity to dual-task costs.

    Science.gov (United States)

    Epling, Samantha L; Russell, Paul N; Helton, William S

    2016-01-01

    Cognitive resource theory is a common explanation for both the performance decline in vigilance tasks, known as the vigilance decrement, and the limited ability to perform multiple tasks simultaneously. The limited supply of cognitive resources may be utilized faster than they are replenished resulting in a performance decrement, or may need to be allocated among multiple tasks with some performance cost. Researchers have proposed both domain-specific, for example spatial versus verbal processing resources, and domain general cognitive resources. One challenge in testing the domain specificity of cognitive resources in vigilance is the current lack of difficult semantic vigilance tasks which reliably produce a decrement. In the present research, we investigated whether the vigilance decrement was found in a new abbreviated semantic discrimination vigilance task, and whether there was a performance decrement in said vigilance task when paired with a word recall task, as opposed to performed individually. As hypothesized, a vigilance decrement in the semantic vigilance task was found in both the single-task and dual-task conditions, along with reduced vigilance performance in the dual-task condition and reduced word recall in the dual-task condition. This is consistent with cognitive resource theory. The abbreviated semantic vigilance task will be a useful tool for researchers interested in determining the specificity of cognitive resources utilized in vigilance tasks.

  3. Demands on attention and the role of response priming in visual discrimination of feature conjunctions.

    Science.gov (United States)

    Fournier, Lisa R; Herbert, Rhonda J; Farris, Carrie

    2004-10-01

    This study examined how response mapping of features within single- and multiple-feature targets affects decision-based processing and attentional capacity demands. Observers judged the presence or absence of 1 or 2 target features within an object either presented alone or with distractors. Judging the presence of 2 features relative to the less discriminable of these features alone was faster (conjunction benefits) when the task-relevant features differed in discriminability and were consistently mapped to responses. Conjunction benefits were attributed to asynchronous decision priming across attended, task-relevant dimensions. A failure to find conjunction benefits for disjunctive conjunctions was attributed to increased memory demands and variable feature-response mapping for 2- versus single-feature targets. Further, attentional demands were similar between single- and 2-feature targets when response mapping, memory demands, and discriminability of the task-relevant features were equated between targets. Implications of the findings for recent attention models are discussed. (c) 2004 APA, all rights reserved

  4. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  5. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    Science.gov (United States)

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation-two consecutive intervals of streams of visual letters-and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower-that is, auditory sensitivity was improved-for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  6. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  7. Bilateral lesions of nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS) selectively impair figure-ground discrimination in pigeons.

    Science.gov (United States)

    Scully, Erin N; Acerbo, Martin J; Lazareva, Olga F

    2014-01-01

    Earlier, we reported that nucleus rotundus (Rt) together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), had significantly higher activity in pigeons performing figure-ground discrimination than in the control group that did not perform any visual discriminations. In contrast, color discrimination produced significantly higher activity than control in the Rt but not in the SP/IPS. Finally, shape discrimination produced significantly lower activity than control in both the Rt and the SP/IPS. In this study, we trained pigeons to simultaneously perform three visual discriminations (figure-ground, color, and shape) using the same stimulus displays. When birds learned to perform all three tasks concurrently at high levels of accuracy, we conducted bilateral chemical lesions of the SP/IPS. After a period of recovery, the birds were retrained on the same tasks to evaluate the effect of lesions on maintenance of these discriminations. We found that the lesions of the SP/IPS had no effect on color or shape discrimination and that they significantly impaired figure-ground discrimination. Together with our earlier data, these results suggest that the nucleus Rt and the SP/IPS are the key structures involved in figure-ground discrimination. These results also imply that thalamic processing is critical for figure-ground segregation in avian brain.

  8. Analysis of financial soundness of manufacturing companies in Indonesia Stock Exchange

    Directory of Open Access Journals (Sweden)

    Widi Hidayat

    2016-07-01

    Full Text Available This study aims to provide the issuers, Bapepam, and the Indonesian Institute of Accountants with additional important information on the content of ratings and on indicators of financial soundness that do not harm investors. It is an explanatory and descriptive study of causality using quantitative methods, with all companies listed on the Indonesia Stock Exchange (ISE) taken as the sample. The data were analyzed using discriminant statistical analysis tools processed with SPSS. The results showed that the level of financial soundness of the manufacturing companies listed on the ISE is low on several indicators: Current Asset Growth (CAG) was low in 23 companies (62%), Fixed Asset Growth (FAG) was still low in 28 companies (76%), Equity Growth (EqG) in 27 companies (73%), Revenue Growth (RG) in 27 companies (65%), and Net Income Growth (NIG) in 35 companies (95%). Two manufacturing companies had a very high NIG; thus, the average NIG is very high. Seven models of financial soundness were tested based on measures of corporate financial growth: CAG, FAG, LG, EqG, RG, ExG and NIG. Only one model, the RG model, was not significant; the other models were significant, with a significant difference between the growth rates of the sound and unsound corporate finance industry groups.

  9. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach.

    Science.gov (United States)

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods considering the total variance of muscle patterns (VAF based metrics), our approach focuses on variance discriminating execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task decoding based metric evaluates quantitatively the mapping between synergy recruitment and task identification and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space.
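
    As a rough companion to the VAF-based evaluation that the authors contrast with their decoding metric, the sketch below extracts putative synergies from a simulated EMG matrix with non-negative matrix factorization and computes the VAF; the array sizes and the choice of NMF are illustrative and are not the authors' pipeline.

      # Minimal sketch: extract muscle synergies from an EMG matrix with NMF
      # and compute the Variance Accounted For (VAF). Sizes are illustrative.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      emg = np.abs(rng.normal(size=(200, 12)))   # 200 time samples x 12 muscles

      n_synergies = 4
      model = NMF(n_components=n_synergies, init="nndsvda", max_iter=1000)
      activations = model.fit_transform(emg)     # time-varying synergy activations
      synergies = model.components_              # muscle weightings of each synergy

      reconstruction = activations @ synergies
      vaf = 1.0 - np.sum((emg - reconstruction) ** 2) / np.sum(emg ** 2)
      print(f"VAF with {n_synergies} synergies: {vaf:.3f}")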

  10. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    Science.gov (United States)

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  11. Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.

    Science.gov (United States)

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-03

    The linear-synthesis-model-based dictionary learning framework has achieved remarkable performance in image classification in the last decade. Behaving as a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM) with which a unique feature vector can be extracted much more efficiently. Additionally, we show that NACM is capable of simultaneously learning a task-adapted feature transformation and regularization to encode our preferences, domain prior knowledge and task-oriented supervised information into the features. The proposed NACM is devoted to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). The theoretical analysis and experimental results clearly demonstrate that DNAOL not only achieves better, or at least competitive, classification accuracies compared with state-of-the-art algorithms but also dramatically reduces the time complexity of both the training and testing phases.

  12. Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome.

    Science.gov (United States)

    Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T

    2006-10-01

    Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.

  13. Fatigue sensation induced by the sounds associated with mental fatigue and its related neural activities: revealed by magnetoencephalography.

    Science.gov (United States)

    Ishii, Akira; Tanaka, Masaaki; Iwamae, Masayoshi; Kim, Chongsoo; Yamano, Emi; Watanabe, Yasuyoshi

    2013-06-13

    It has been proposed that an inappropriately conditioned fatigue sensation could be one cause of chronic fatigue. Although classical conditioning of the fatigue sensation has been reported in rats, there have been no reports in humans. Our aim was to examine whether classical conditioning of the mental fatigue sensation can take place in humans and to clarify the neural mechanisms of fatigue sensation using magnetoencephalography (MEG). Ten and 9 healthy volunteers participated in a conditioning and a control experiment, respectively. In the conditioning experiment, we used metronome sounds as conditioned stimuli and two-back task trials as unconditioned stimuli to cause fatigue sensation. Participants underwent MEG measurement while listening to the metronome sounds for 6 min. Thereafter, fatigue-inducing mental task trials (two-back task trials), which are demanding working-memory task trials, were performed for 60 min; metronome sounds were started 30 min after the start of the task trials (conditioning session). The next day, neural activities while listening to the metronome for 6 min were measured. Levels of fatigue sensation were also assessed using a visual analogue scale. In the control experiment, participants listened to the metronome on the first and second days, but they did not perform conditioning session. MEG was not recorded in the control experiment. The level of fatigue sensation caused by listening to the metronome on the second day was significantly higher relative to that on the first day only when participants performed the conditioning session on the first day. Equivalent current dipoles (ECDs) in the insular cortex, with mean latencies of approximately 190 ms, were observed in six of eight participants after the conditioning session, although ECDs were not identified in any participant before the conditioning session. We demonstrated that the metronome sounds can cause mental fatigue sensation as a result of repeated pairings of the sounds

  14. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study.

    Directory of Open Access Journals (Sweden)

    Ming Chang

    Full Text Available When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

  15. A scheme of quantum state discrimination over specified states via weak-value measurement

    Science.gov (United States)

    Chen, Xi; Dai, Hong-Yi; Liu, Bo-Yang; Zhang, Ming

    2018-04-01

    The commonly adopted projective measurements are invalid in the specified task of quantum state discrimination when the discriminated states are superposition of planar-position basis states whose complex-number probability amplitudes have the same magnitude but different phases. Therefore we propose a corresponding scheme via weak-value measurement and examine the feasibility of this scheme. Furthermore, the role of the weak-value measurement in quantum state discrimination is analyzed and compared with one in quantum state tomography in this Letter.

  16. Heart sounds analysis using probability assessment.

    Science.gov (United States)

    Plesinger, F; Viscor, I; Halamek, J; Jurco, J; Jurak, P

    2017-07-31

    This paper describes a method for automated discrimination of heart sounds recordings according to the Physionet Challenge 2016. The goal was to decide if the recording refers to normal or abnormal heart sounds or if it is not possible to decide (i.e. 'unsure' recordings). Heart sounds S1 and S2 are detected using amplitude envelopes in the band 15-90 Hz. The averaged shape of the S1/S2 pair is computed from amplitude envelopes in five different bands (15-90 Hz; 55-150 Hz; 100-250 Hz; 200-450 Hz; 400-800 Hz). A total of 53 features are extracted from the data. The largest group of features is extracted from the statistical properties of the averaged shapes; other features are extracted from the symmetry of averaged shapes, and the last group of features is independent of S1 and S2 detection. Generated features are processed using logical rules and probability assessment, a prototype of a new machine-learning method. The method was trained using 3155 records and tested on 1277 hidden records. It resulted in a training score of 0.903 (sensitivity 0.869, specificity 0.937) and a testing score of 0.841 (sensitivity 0.770, specificity 0.913). The revised method led to a test score of 0.853 in the follow-up phase of the challenge. The presented solution achieved 7th place out of 48 competing entries in the Physionet Challenge 2016 (official phase). In addition, the PROBAfind software for probability assessment was introduced.
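
    A minimal sketch of the band-limited amplitude envelope used above for S1/S2 detection is given below; the 15-90 Hz band comes from the abstract, while the peak-picking settings and file name are placeholders.

      # Minimal sketch: 15-90 Hz amplitude envelope of a heart-sound recording,
      # with simple peak picking as a stand-in for S1/S2 detection.
      import numpy as np
      from scipy.io import wavfile
      from scipy.signal import butter, filtfilt, hilbert, find_peaks

      fs, pcg = wavfile.read("heart_sound.wav")        # hypothetical recording
      pcg = pcg.astype(np.float64)

      b, a = butter(N=4, Wn=[15 / (fs / 2), 90 / (fs / 2)], btype="bandpass")
      band = filtfilt(b, a, pcg)
      envelope = np.abs(hilbert(band))                 # amplitude envelope

      # Placeholder peak picking; real S1/S2 detection needs tuned thresholds.
      peaks, _ = find_peaks(envelope, distance=int(0.2 * fs),
                            height=0.3 * envelope.max())
      print(f"candidate S1/S2 peaks: {len(peaks)}")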

  17. Methamphetamine functions as a positive and negative drug feature in a Pavlovian appetitive discrimination task.

    Science.gov (United States)

    Reichel, Carmela M; Wilkinson, Jamie L; Bevins, Rick A

    2007-12-01

    This research determined the ability of methamphetamine to serve as a positive or negative feature, and assessed the ability of bupropion, cocaine, and naloxone to substitute for the methamphetamine features. Rats received methamphetamine (0.5 mg/kg, intraperitoneally) or saline 15 min before a conditioning session. For the feature positive (FP) group, offset of 15-s cue lights was followed by access to sucrose on methamphetamine sessions; sucrose was withheld during saline sessions. For the feature negative (FN) group, the light offset was followed by sucrose on saline sessions; sucrose was withheld during methamphetamine sessions. During acquisition, the FP group had higher responding on methamphetamine sessions than on saline sessions. For the FN group, responding was higher on saline sessions than on methamphetamine sessions. Conditioned responding was sensitive to methamphetamine dose. For the FP group, bupropion and cocaine fully and partially substituted for methamphetamine, respectively. In contrast, both drugs fully substituted for methamphetamine in the FN group. Naloxone did not substitute in either set of rats. FP-trained rats were more sensitive to the locomotor stimulating effects of the test drugs than FN-trained rats. This research demonstrates that the pharmacological effects of methamphetamine function as a FP or FN in this Pavlovian discrimination task and that training history can affect conditioned responding and locomotor effects evoked by a drug.

  18. Brain activations during bimodal dual tasks depend on the nature and combination of component tasks

    Directory of Open Access Journals (Sweden)

    Emma eSalo

    2015-02-01

    Full Text Available We used functional magnetic resonance imaging to investigate brain activations during nine different dual tasks in which the participants were required to simultaneously attend to concurrent streams of spoken syllables and written letters. They performed a phonological, spatial or simple (speaker-gender or font-shade) discrimination task within each modality. We expected to find activations associated specifically with dual tasking especially in the frontal and parietal cortices. However, no brain areas showed systematic dual task enhancements common for all dual tasks. Further analysis revealed that dual tasks including component tasks that were, according to Baddeley’s model, modality atypical, that is, the auditory spatial task or the visual phonological task, were not associated with enhanced frontal activity. In contrast, for other dual tasks, activity specifically associated with dual tasking was found in the left or bilateral frontal cortices. Enhanced activation in parietal areas, however, appeared not to be specifically associated with dual tasking per se, but rather with intermodal attention switching. We also expected effects of dual tasking in left frontal supramodal phonological processing areas when both component tasks required phonological processing and in right parietal supramodal spatial processing areas when both tasks required spatial processing. However, no such effects were found during these dual tasks compared with their component tasks performed separately. Taken together, the current results indicate that activations during dual tasks depend in a complex manner on specific demands of component tasks.

  19. Earlier saccades to task-relevant targets irrespective of relative gain between peripheral and foveal information.

    Science.gov (United States)

    Wolf, Christian; Schütz, Alexander C

    2017-06-01

    Saccades bring objects of interest onto the fovea for high-acuity processing. Saccades to rewarded targets show shorter latencies that correlate negatively with expected motivational value. Shorter latencies are also observed when the saccade target is relevant for a perceptual discrimination task. Here we tested whether saccade preparation is equally influenced by informational value as it is by motivational value. We defined informational value as the probability that information is task-relevant times the ratio between postsaccadic foveal and presaccadic peripheral discriminability. Using a gaze-contingent display, we independently manipulated peripheral and foveal discriminability of the saccade target. Latencies of saccades with perceptual task were reduced by 36 ms in general, but they were not modulated by the information saccades provide (Experiments 1 and 2). However, latencies showed a clear negative linear correlation with the probability that the target is task-relevant (Experiment 3). We replicated that the facilitation by a perceptual task is spatially specific and not due to generally heightened arousal (Experiment 4). Finally, the facilitation only emerged when the perceptual task is in the visual but not in the auditory modality (Experiment 5). Taken together, these results suggest that saccade latencies are not equally modulated by informational value as by motivational value. The facilitation by a perceptual task only arises when task-relevant visual information is foveated, irrespective of whether the foveation is useful or not.

  20. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    Directory of Open Access Journals (Sweden)

    Olympia eColizoli

    2013-10-01

    Full Text Available Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of nonlinguistic sounds induce the experience of taste, smell and physical sensations for SC. SC’s lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC’s performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC’s synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (tasty vs. tasteless words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC’s synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC’s synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity.

  1. Selective attention to sound location or pitch studied with fMRI.

    Science.gov (United States)

    Degerman, Alexander; Rinne, Teemu; Salmi, Juha; Salonen, Oili; Alho, Kimmo

    2006-03-10

    We used 3-T functional magnetic resonance imaging to compare the brain mechanisms underlying selective attention to sound location and pitch. In different tasks, the subjects (N = 10) attended to a designated sound location or pitch or to pictures presented on the screen. In the Attend Location conditions, the sound location varied randomly (left or right), while the pitch was kept constant (high or low). In the Attend Pitch conditions, sounds of randomly varying pitch (high or low) were presented at a constant location (left or right). Both attention to location and attention to pitch produced enhanced activity (in comparison with activation caused by the same sounds when attention was focused on the pictures) in widespread areas of the superior temporal cortex. Attention to either sound feature also activated prefrontal and inferior parietal cortical regions. These activations were stronger during attention to location than during attention to pitch. Attention to location but not to pitch produced a significant increase of activation in the premotor/supplementary motor cortices of both hemispheres and in the right prefrontal cortex, while no area showed activity specifically related to attention to pitch. The present results suggest some differences in the attentional selection of sounds on the basis of their location and pitch consistent with the suggested auditory "what" and "where" processing streams.

  2. Dyslexics' faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation.

    Science.gov (United States)

    Jaffe-Dax, Sagi; Frenkel, Or; Ahissar, Merav

    2017-01-24

    Dyslexia is a prevalent reading disability whose underlying mechanisms are still disputed. We studied the neural mechanisms underlying dyslexia using a simple frequency-discrimination task. Though participants were asked to compare the two tones in each trial, implicit memory of previous trials affected their responses. We hypothesized that implicit memory decays faster among dyslexics. We tested this by increasing the temporal intervals between consecutive trials, and by measuring the behavioral impact and ERP responses from the auditory cortex. Dyslexics showed a faster decay of implicit memory effects on both measures, with similar time constants. Finally, faster decay of implicit memory also characterized the impact of sound regularities in benefitting dyslexics' oral reading rate. Their benefit decreased faster as a function of the time interval from the previous reading of the same non-word. We propose that dyslexics' shorter neural adaptation paradoxically accounts for their longer reading times, since it reduces their temporal window of integration of past stimuli, resulting in noisier and less reliable predictions for both simple and complex stimuli. Less reliable predictions limit their acquisition of reading expertise.

  3. Psilocybin modulates functional connectivity of the amygdala during emotional face discrimination.

    Science.gov (United States)

    Grimm, O; Kraehenmann, R; Preller, K H; Seifritz, E; Vollenweider, F X

    2018-04-24

    Recent studies suggest that the antidepressant effects of the psychedelic 5-HT2A receptor agonist psilocybin are mediated through its modulatory properties on prefrontal and limbic brain regions including the amygdala. To further investigate the effects of psilocybin on emotion processing networks, we studied for the first-time psilocybin's acute effects on amygdala seed-to-voxel connectivity in an event-related face discrimination task in 18 healthy volunteers who received psilocybin and placebo in a double-blind balanced cross-over design. The amygdala has been implicated as a salience detector especially involved in the immediate response to emotional face content. We used beta-series amygdala seed-to-voxel connectivity during an emotional face discrimination task to elucidate the connectivity pattern of the amygdala over the entire brain. When we compared psilocybin to placebo, an increase in reaction time for all three categories of affective stimuli was found. Psilocybin decreased the connectivity between amygdala and the striatum during angry face discrimination. During happy face discrimination, the connectivity between the amygdala and the frontal pole was decreased. No effect was seen during discrimination of fearful faces. Thus, we show psilocybin's effect as a modulator of major connectivity hubs of the amygdala. Psilocybin decreases the connectivity between important nodes linked to emotion processing like the frontal pole or the striatum. Future studies are needed to clarify whether connectivity changes predict therapeutic effects in psychiatric patients. Copyright © 2018 Elsevier B.V. and ECNP. All rights reserved.

  4. The Diagnostic and Prognostic Value of a Dual-Tasking Paradigm in a Memory Clinic.

    Science.gov (United States)

    Nielsen, Malene Schjønning; Simonsen, Anja Hviid; Siersma, Volkert; Hasselbalch, Steen Gregers; Hoegh, Peter

    2018-01-01

    Daily living requires the ability to perform dual-tasking. As cognitive skills decrease in dementia, performing a cognitive and motor task simultaneously become increasingly challenging and subtle gait abnormalities may even be present in pre-dementia stages. Therefore, a dual-tasking paradigm, such as the Timed Up and Go-Dual Task (TUG-DT), may be useful in the diagnostic assessment of mild cognitive impairment (MCI). To investigate the diagnostic and prognostic ability of a dual-tasking paradigm in patients with MCI or mild Alzheimer's disease (AD) and to evaluate the association between the dual-tasking paradigm and cerebrospinal fluid (CSF) AD biomarkers. The study is a prospective cohort study conducted in a clinical setting in two memory clinics. Eighty-six patients were included (28 MCI, 17 AD, 41 healthy controls (HC)). The ability to perform dual-tasking was evaluated by the TUG-DT. Patients underwent a standardized diagnostic assessment and were evaluated to determine progression yearly. ROC curve analysis illustrated a high discriminative ability of the dual-tasking paradigm in separating MCI patients from HC (AUC: 0.78, AUC: 0.82) and a moderate discriminative ability in separating MCI from AD (AUC: 0.73, AUC: 0.55). Performance discriminated clearly between all groups (p paradigm for progression and rate of cognitive decline. A moderately strong correlation between the dual-tasking paradigm and CSF AD biomarkers was observed. In our study, we found that patients with MCI and mild AD have increasing difficulties in dual-tasking compared to healthy elderly. Hence, the dual-tasking paradigm may be a potential complement in the diagnostic assessment in a typical clinical setting.

  5. Single-trial effective brain connectivity patterns enhance discriminability of mental imagery tasks

    Science.gov (United States)

    Rathee, Dheeraj; Cecotti, Hubert; Prasad, Girijesh

    2017-10-01

    Objective. The majority of the current approaches of connectivity based brain-computer interface (BCI) systems focus on distinguishing between different motor imagery (MI) tasks. Brain regions associated with MI are anatomically close to each other, hence these BCI systems suffer from low performances. Our objective is to introduce single-trial connectivity feature based BCI system for cognition imagery (CI) based tasks wherein the associated brain regions are located relatively far away as compared to those for MI. Approach. We implemented time-domain partial Granger causality (PGC) for the estimation of the connectivity features in a BCI setting. The proposed hypothesis has been verified with two publically available datasets involving MI and CI tasks. Main results. The results support the conclusion that connectivity based features can provide a better performance than a classical signal processing framework based on bandpass features coupled with spatial filtering for CI tasks, including word generation, subtraction, and spatial navigation. These results show for the first time that connectivity features can provide a reliable performance for imagery-based BCI system. Significance. We show that single-trial connectivity features for mixed imagery tasks (i.e. combination of CI and MI) can outperform the features obtained by current state-of-the-art method and hence can be successfully applied for BCI applications.
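
    The connectivity features above come from time-domain partial Granger causality; as a loose illustration of the underlying idea only, the sketch below runs ordinary pairwise Granger-causality tests with statsmodels on two simulated channels (partial GC, which additionally conditions out common and latent inputs, is not implemented here).

      # Minimal sketch: pairwise Granger causality between two simulated channels.
      # This is plain bivariate GC, not the partial GC used in the paper.
      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(1)
      n = 500
      x = rng.normal(size=n)
      y = np.zeros(n)
      for t in range(1, n):                     # y is driven by past values of x
          y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + rng.normal(scale=0.5)

      # grangercausalitytests checks whether the second column Granger-causes
      # the first column.
      data = np.column_stack([y, x])
      results = grangercausalitytests(data, maxlag=3, verbose=False)
      p_value = results[1][0]["ssr_ftest"][1]   # p-value of the F-test at lag 1
      print(f"x -> y, lag-1 F-test p = {p_value:.4f}")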

  6. A shared system of representation governing quantity discrimination in canids

    Directory of Open Access Journals (Sweden)

    Joseph M Baker

    2012-10-01

    Full Text Available One way to investigate the evolution of cognition is to compare the abilities of phylogenetically related species. The domestic dog (Canis lupus familiaris), for example, still shares cognitive abilities with the coyote (C. latrans). Both of these canids possess the ability to make psychophysical less/more discriminations of food based on quantity. Like many other species including humans, this ability is mediated by Weber’s Law: discrimination of continuous quantities is dependent on the ratio between the two quantities. As two simultaneously presented quantities of food become more similar, choice of the large or small option becomes random in both dogs and coyotes. It remains unknown, however, whether these closely related species within the same family—one domesticated, and one wild—make such quantitative comparisons with comparable accuracy. Has domestication honed or diminished this quantitative ability? Might different selective and ecological pressures facing coyotes drive them to be more or less able to accurately represent and discriminate food quantity than domesticated dogs? This study is an effort to elucidate this question concerning the evolution of non-verbal quantitative cognition. Here, we tested the quantitative discrimination ability of 16 domesticated dogs. Each animal was given 9 trials in which two different quantities of food were simultaneously displayed to them. The domesticated dogs’ performance on this task was then compared directly to the data from 16 coyotes’ performance on this same task reported by Baker and colleagues (2011). The quantitative discrimination abilities between the two species were strikingly similar. Domesticated dogs demonstrated similar quantitative sensitivity as coyotes, suggesting that domestication may not have significantly altered the psychophysical discrimination abilities of canids. Instead, this study provides further evidence for similar nonverbal quantitative abilities across
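
    As a small numerical illustration of the Weber's-law point made above, the sketch below computes the ratio between the smaller and larger quantity for a few hypothetical food pairs; discrimination is expected to become harder as this ratio approaches 1.

      # Minimal sketch: Weber ratios for hypothetical quantity pairs.
      # Discrimination difficulty grows as small/large approaches 1.
      pairs = [(1, 4), (2, 4), (3, 4), (5, 6)]   # made-up food quantities
      for small, large in pairs:
          print(f"{small} vs {large}: ratio = {small / large:.2f}")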

  7. Sonar sound groups and increased terminal buzz duration reflect task complexity in hunting bats

    DEFF Research Database (Denmark)

    Hulgard, K.; Ratcliffe, J. M.

    2016-01-01

    to prey under presumably more difficult conditions. Specifically, we found Daubenton's bats, Myotis daubentonii, produced longer buzzes when aerial-hawking versus water-trawling prey, but that bats taking revolving air- and water-borne prey produced more sonar sound groups than did the bats when taking...

  8. Simultaneous and Sequential Feature Negative Discriminations: Elemental Learning and Occasion Setting in Human Pavlovian Conditioning

    Science.gov (United States)

    Baeyens, Frank; Vervliet, Bram; Vansteenwegen, Debora; Beckers, Tom; Hermans, Dirk; Eelen, Paul

    2004-01-01

    Using a conditioned suppression task, we investigated simultaneous (XA-/A+) vs. sequential (X → A-/A+) Feature Negative (FN) discrimination learning in humans. We expected the simultaneous discrimination to result in X (or alternatively the XA configuration) becoming an inhibitor acting directly on the US, and the sequential…

  9. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  10. Theta oscillation and neuronal activity in rat hippocampus are involved in temporal discrimination of time in seconds

    Directory of Open Access Journals (Sweden)

    Tomoaki eNakazono

    2015-06-01

    Full Text Available The discovery of time cells revealed that the rodent hippocampus carries information about time. Previous studies have suggested that a role of hippocampal time cells is to integrate temporally segregated events into a sequence using working memory together with time perception. However, it is unclear whether hippocampal cells contribute to time perception itself, because most previous studies employed delayed matching-to-sample tasks that did not evaluate time perception separately from working memory processes. Here, we investigated the function of the rat hippocampus in time perception using a temporal discrimination task. In the task, rats had to discriminate between durations of 1 and 3 sec to get a reward, and maintaining task-related information in working memory was not required. We found that some hippocampal neurons showed firing rate modulation similar to that of time cells. Moreover, theta oscillation of local field potentials (LFPs) showed a transient enhancement of power during time discrimination periods. However, there was little relationship between the neuronal activities and theta oscillations. These results suggest that both the individual neuronal activities and the theta oscillations of LFPs in the hippocampus may be engaged in seconds-order time perception; however, they participate in different ways.

  11. Effects of harmonic roving on pitch discrimination

    DEFF Research Database (Denmark)

    Santurette, Sébastien; de Kérangal, Mathilde le Gal; Joshi, Suyash Narendra

    2015-01-01

    Performance in pitch discrimination tasks is limited by variability intrinsic to listeners which may arise from peripheral auditory coding limitations or more central noise sources. The present study aimed at quantifying such “internal noise” by estimating the amount of harmonic roving required...... to impair pitch discrimination performance. Fundamental-frequency difference limens (F0DLs) were obtained in normal-hearing listeners with and without musical training for complex tones filtered between 1.5 and 3.5 kHz with F0s of 300 Hz (resolved harmonics) and 75 Hz (unresolved harmonics). The harmonicity...... that could be used to quantify the internal noise and provide strong constraints for physiologically inspired models of pitch perception....
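
    F0DLs of this kind are commonly measured with an adaptive two-down/one-up staircase; the sketch below simulates such a track against a hypothetical listener purely to illustrate the procedure (the step size, starting value and simulated threshold are made up and are not taken from the study).

      # Minimal sketch: simulated 2-down/1-up adaptive track for an F0 difference
      # limen. The simulated listener and tracking parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(2)
      true_dl = 1.5            # hypothetical listener threshold (% of F0)
      delta = 8.0              # starting F0 difference (% of F0)
      step = 1.5               # multiplicative step factor
      correct_in_row, reversals, directions = 0, [], []

      while len(reversals) < 8:
          # Simulated trial: more likely correct when delta exceeds threshold.
          p_correct = 1 - 0.5 * np.exp(-(delta / true_dl) ** 2)
          if rng.random() < p_correct:
              correct_in_row += 1
              if correct_in_row == 2:      # two correct in a row -> harder
                  correct_in_row = 0
                  delta /= step
                  directions.append(-1)
          else:
              correct_in_row = 0
              delta *= step                # one wrong -> easier
              directions.append(+1)
          if len(directions) >= 2 and directions[-1] != directions[-2]:
              reversals.append(delta)

      print(f"estimated F0DL ~ {np.mean(reversals[-6:]):.2f} % of F0")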

  12. The Contribution of Sound Intensity in Vocal Emotion Perception: Behavioral and Electrophysiological Evidence

    Science.gov (United States)

    Chen, Xuhai; Yang, Jianfeng; Gan, Shuzhen; Yang, Yufang

    2012-01-01

    Although its role is frequently stressed in acoustic profile for vocal emotion, sound intensity is frequently regarded as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgment in Experiment 2. It was found that sound intensity modification had significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in N2/P3 complex and smaller theta band synchronization. These findings suggest that though it cannot categorically affect emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and its unique role needs to be specified in vocal emotion studies. PMID:22291928

  13. The contribution of sound intensity in vocal emotion perception: behavioral and electrophysiological evidence.

    Directory of Open Access Journals (Sweden)

    Xuhai Chen

    Full Text Available Although its role in the acoustic profile of vocal emotion is frequently stressed, sound intensity is often treated as a control parameter in neurocognitive studies of vocal emotion, leaving its role and neural underpinnings unclear. To investigate these issues, we asked participants to rate the angry level of neutral and angry prosodies before and after sound intensity modification in Experiment 1, and recorded the electroencephalogram (EEG) for mismatching emotional prosodies with and without sound intensity modification and for matching emotional prosodies while participants performed emotional feature or sound intensity congruity judgments in Experiment 2. Sound intensity modification had a significant effect on the rating of angry level for angry prosodies, but not for neutral ones. Moreover, mismatching emotional prosodies, relative to matching ones, induced an enhanced N2/P3 complex and theta band synchronization irrespective of sound intensity modification and task demands. However, mismatching emotional prosodies with reduced sound intensity showed prolonged peak latency and decreased amplitude in the N2/P3 complex and smaller theta band synchronization. These findings suggest that although it cannot categorically affect the emotionality conveyed in emotional prosodies, sound intensity contributes to emotional significance quantitatively, implying that sound intensity should not simply be taken as a control parameter and that its unique role needs to be specified in vocal emotion studies.

  14. Problems in nonlinear acoustics: Scattering of sound by sound, parametric receiving arrays, nonlinear effects in asymmetric sound beams and pulsed finite amplitude sound beams

    Science.gov (United States)

    Hamilton, Mark F.

    1989-08-01

    Four projects are discussed in this annual summary report, all of which involve basic research in nonlinear acoustics: Scattering of Sound by Sound, a theoretical study of two noncollinear Gaussian beams that interact to produce sum and difference frequency sound; Parametric Receiving Arrays, a theoretical study of parametric reception in a reverberant environment; Nonlinear Effects in Asymmetric Sound Beams, a numerical study of two-dimensional finite amplitude sound fields; and Pulsed Finite Amplitude Sound Beams, a numerical time-domain solution of the KZK equation.

  15. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is ultimately directed at environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University organized the symposium. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  16. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  17. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    Science.gov (United States)

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues, rather than impaired learning abilities, can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better than more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome of visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    Science.gov (United States)

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Simple and conditional visual discrimination with wheel running as reinforcement in rats.

    Science.gov (United States)

    Iversen, I H

    1998-09-01

    Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.

  20. Aging effects on ERP correlates of emotional word discrimination

    NARCIS (Netherlands)

    Molnar, M.; Toth, B.; Boha, R.; Gaal, Z.A.; Kardos, Z.; File, B.; Stam, C.J.

    2013-01-01

    Objective: To explore age- and valence-specific ERP characteristics of word-discrimination processes. Methods: A group of young (mean age: 21.26 yrs) and elderly (mean age: 65.73 yrs) individuals participated. The task was to respond to a word (target) with valence (neutral, negative, positive) and

  1. Sound Transduction in the Auditory System of Bushcrickets

    Science.gov (United States)

    Nowotny, Manuela; Udayashankar, Arun Palghat; Weber, Melanie; Hummel, Jennifer; Kössl, Manfred

    2011-11-01

    Place-based frequency representation, called tonotopy, is a typical property of hearing organs for the discrimination of different frequencies. Due to its coiled structure and secure housing, it is difficult to access the mammalian cochlea. Hence, our knowledge about in vivo inner-ear mechanics is restricted to small regions. In this study, we present in vivo measurements that focus on the easily accessible, uncoiled auditory organs in bushcrickets, which are located in their foreleg tibiae. Sound enters the body via an opening at the lateral side of the thorax and passes through a horn-shaped acoustic trachea before reaching the high-frequency hearing organ called the crista acustica. In addition to the acoustic trachea as the structure that transmits incoming sound towards the hearing organ, bushcrickets also possess two tympana, specialized plate-like structures, on the anterior and posterior side of each tibia. They provide a secondary path of excitation for the sensory receptors at low frequencies. We investigated the mechanics of the crista acustica in the tropical bushcricket Mecopoda elongata. The frequency-dependent motion of the crista acustica was captured using a laser Doppler vibrometer system. Using pure tone stimulation of the crista acustica, we could elicit traveling waves along the length of the hearing organ that move from the distal high-frequency to the proximal low-frequency region. In addition, distinct maxima in the velocity response of the crista acustica could be measured at ~7 and ~17 kHz. The travelling-wave-based tonotopy provides the basis for mechanical frequency discrimination along the crista acustica and opens up new possibilities to investigate traveling wave mechanics in vivo.

  2. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.

    Science.gov (United States)

    Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong

    2015-01-01

    A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.

  3. Effect of task-related continuous auditory feedback during learning of tracking motion exercises

    Directory of Open Access Journals (Sweden)

    Rosati Giulio

    2012-10-01

    Full Text Available Abstract Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could, in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information onto a different sensory channel (the visual channel) yielded effects comparable to those obtained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel

  4. Hippocampal-cortical contributions to strategic exploration during perceptual discrimination.

    Science.gov (United States)

    Voss, Joel L; Cohen, Neal J

    2017-06-01

    The hippocampus is crucial for long-term memory; its involvement in short-term or immediate expressions of memory is more controversial. Rodent hippocampus has been implicated in an expression of memory that occurs on-line during exploration termed "vicarious trial-and-error" (VTE) behavior. VTE occurs when rodents iteratively explore options during perceptual discrimination or at choice points. It is strategic in that it accelerates learning and improves later memory. VTE has been associated with activity of rodent hippocampal neurons, and lesions of hippocampus disrupt VTE and associated learning and memory advantages. Analogous findings of VTE in humans would support the role of hippocampus in active use of short-term memory to guide strategic behavior. We therefore measured VTE using eye-movement tracking during perceptual discrimination and identified relevant neural correlates with functional magnetic resonance imaging. A difficult perceptual-discrimination task was used that required visual information to be maintained during a several second trial, but with no long-term memory component. VTE accelerated discrimination. Neural correlates of VTE included robust activity of hippocampus and activity of a network of medial prefrontal and lateral parietal regions involved in memory-guided behavior. This VTE-related activity was distinct from activity associated with simply viewing visual stimuli and making eye movements during the discrimination task, which occurred in regions frequently associated with visual processing and eye-movement control. Subjects were mostly unaware of performing VTE, thus further distancing VTE from explicit long-term memory processing. These findings bridge the rodent and human literatures on neural substrates of memory-guided behavior, and provide further support for the role of hippocampus and a hippocampal-centered network of cortical regions in the immediate use of memory in on-line processing and the guidance of behavior. © 2017

  5. How discriminating are discriminative instruments?

    Science.gov (United States)

    Hankins, Matthew

    2008-05-27

    The McMaster framework introduced by Kirshner & Guyatt is the dominant paradigm for the development of measures of health status and health-related quality of life (HRQL). The framework defines the functions of such instruments as evaluative, predictive or discriminative. Evaluative instruments are required to be sensitive to change (responsiveness), but there is no corresponding index of the degree to which discriminative instruments are sensitive to cross-sectional differences. This paper argues that indices of validity and reliability are not sufficient to demonstrate that a discriminative instrument performs its function of discriminating between individuals, and that the McMaster framework would be augmented by the addition of a separate index of discrimination. The coefficient proposed by Ferguson (Delta) is easily adapted to HRQL instruments and is a direct, non-parametric index of the degree to which an instrument distinguishes between individuals. While Delta should prove useful in the development and evaluation of discriminative instruments, further research is required to elucidate the relationship between the measurement properties of discrimination, reliability and responsiveness.
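
    The coefficient at the heart of this record is Ferguson's Delta. As a hedged illustration (my formulation, not code from the paper), one common expression for a scale with integer total scores from 0 to n is delta = (n + 1)(N^2 - sum of squared score frequencies) / (n N^2), which equals 1 when every possible score occurs equally often and 0 when all individuals obtain the same score.

```python
# Minimal sketch of Ferguson's delta for a scale scored 0..n_items.
# This is an illustrative formulation, not Hankins' implementation.
import numpy as np
from collections import Counter

def fergusons_delta(scores, n_items):
    """scores: iterable of integer total scores in the range 0..n_items."""
    N = len(scores)
    sum_f2 = sum(f ** 2 for f in Counter(scores).values())
    return (n_items + 1) * (N ** 2 - sum_f2) / (n_items * N ** 2)

# Example: a hypothetical 10-item dichotomous scale, 200 respondents
rng = np.random.default_rng(0)
scores = rng.binomial(10, 0.5, size=200).tolist()
print(round(fergusons_delta(scores, n_items=10), 3))
```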

  6. How discriminating are discriminative instruments?

    Directory of Open Access Journals (Sweden)

    Hankins Matthew

    2008-05-01

    Full Text Available Abstract The McMaster framework introduced by Kirshner & Guyatt is the dominant paradigm for the development of measures of health status and health-related quality of life (HRQL). The framework defines the functions of such instruments as evaluative, predictive or discriminative. Evaluative instruments are required to be sensitive to change (responsiveness), but there is no corresponding index of the degree to which discriminative instruments are sensitive to cross-sectional differences. This paper argues that indices of validity and reliability are not sufficient to demonstrate that a discriminative instrument performs its function of discriminating between individuals, and that the McMaster framework would be augmented by the addition of a separate index of discrimination. The coefficient proposed by Ferguson (Delta) is easily adapted to HRQL instruments and is a direct, non-parametric index of the degree to which an instrument distinguishes between individuals. While Delta should prove useful in the development and evaluation of discriminative instruments, further research is required to elucidate the relationship between the measurement properties of discrimination, reliability and responsiveness.

  7. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the

  8. Development of Gender Discrimination: Effect of Sex-Typical and Sex-Atypical Toys.

    Science.gov (United States)

    Etaugh, Claire; Duits, Terri L.

    Toddlers (41 girls and 35 boys) between 18 and 37 months of age were given four gender discrimination tasks each consisting of 6 pairs of color drawings. Three of the tasks employed color drawings of preschool girls and boys holding either a sex-typical toy, a sex-atypical toy, or no toy. The fourth employed pictures of sex-typical masculine and…

  9. Tinnitus is associated with reduced sound level tolerance in adolescents with normal audiograms and otoacoustic emissions

    Science.gov (United States)

    Sanchez, Tanit Ganz; Moraes, Fernanda; Casseb, Juliana; Cota, Jaci; Freire, Katya; Roberts, Larry E.

    2016-01-01

    Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high level environmental sounds. PMID:27265722

  10. Food's visually perceived fat content affects discrimination speed in an orthogonal spatial task.

    Science.gov (United States)

    Harrar, Vanessa; Toepel, Ulrike; Murray, Micah M; Spence, Charles

    2011-10-01

    Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

  11. Cross-Cultural Influences on Rhythm Processing: Reproduction, Discrimination, and Beat Tapping

    Directory of Open Access Journals (Sweden)

    Daniel J Cameron

    2015-04-01

    Full Text Available The structures of musical rhythm differ between cultures, despite the fact that the ability to synchronize one’s movements to musical rhythms appears to be universal. To measure the influence of culture on rhythm processing, we tested East African and North American adults on the perception, production, and beat tapping of rhythms derived from East African and Western music. To assess rhythm perception, participants identified whether pairs of rhythms were same or different. To assess rhythm production, participants reproduced rhythms after hearing them. To assess beat tapping, participants tapped the beat along with repeated rhythms. We expected that performance in all three tasks would be influenced both by the culture of the participant and by the culture of the rhythm. Specifically, we predicted that a participant’s ability to discriminate, reproduce, and accurately tap the beat would be better for rhythms from their own culture than for rhythms from another culture. In the rhythm discrimination task, there were no differences in discriminating culturally familiar and unfamiliar rhythms. In the rhythm reproduction task, both groups reproduced East African rhythms more accurately than Western rhythms, but East African participants also showed an effect of cultural familiarity, leading to a significant interaction. In the beat tapping task, participants in both groups tapped the beat more accurately for culturally familiar than unfamiliar rhythms. The results demonstrate that culture does influence the processing of musical rhythm. In terms of the function of musical rhythm, our results are consistent with theories that musical rhythm enables synchronization. Musical rhythm may foster musical cultural identity by enabling within-group synchronization to music, perhaps supporting social cohesion.

  12. Pitch ranking, electrode discrimination, and physiological spread-of-excitation using Cochlear's dual-electrode mode.

    Science.gov (United States)

    Goehring, Jenny L; Neff, Donna L; Baudhuin, Jacquelyn L; Hughes, Michelle L

    2014-08-01

    This study compared pitch ranking, electrode discrimination, and electrically evoked compound action potential (ECAP) spatial excitation patterns for adjacent physical electrodes (PEs) and the corresponding dual electrodes (DEs) for newer-generation Cochlear devices (Cochlear Ltd., Macquarie, New South Wales, Australia). The first goal was to determine whether pitch ranking and electrode discrimination yield similar outcomes for PEs and DEs. The second goal was to determine if the amount of spatial separation among ECAP excitation patterns (separation index, Σ) between adjacent PEs and the PE-DE pairs can predict performance on the psychophysical tasks. Using non-adaptive procedures, 13 subjects completed pitch ranking and electrode discrimination for adjacent PEs and the corresponding PE-DE pairs (DE versus each flanking PE) from the basal, middle, and apical electrode regions. Analysis of d' scores indicated that pitch-ranking and electrode-discrimination scores were not significantly different, but rather produced similar levels of performance. As expected, accuracy was significantly better for the PE-PE comparison than either PE-DE comparison. Correlations of the psychophysical versus ECAP Σ measures were positive; however, not all test/region correlations were significant across the array. Thus, the ECAP separation index is not sensitive enough to predict performance on behavioral tasks of pitch ranking or electrode discrimination for adjacent PEs or corresponding DEs.
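
    Performance in this record is summarized with d' scores. As a generic illustration (one standard signal-detection computation, not necessarily the exact procedure used in the study), d' can be obtained from hit and false-alarm rates as z(hit rate) - z(false-alarm rate).

```python
# Minimal sketch of a standard d' computation with a log-linear correction
# to avoid infinite z-scores when a rate is exactly 0 or 1.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hr = (hits + 0.5) / (n_signal + 1)            # corrected hit rate
    far = (false_alarms + 0.5) / (n_noise + 1)    # corrected false-alarm rate
    return norm.ppf(hr) - norm.ppf(far)

# Example: hypothetical counts from a 100-trial pitch-ranking block
print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))
```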

  13. Multi-level discriminative dictionary learning with application to large scale image classification.

    Science.gov (United States)

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learned to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.
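
    To make the per-node structure concrete, the sketch below (my illustration, not the authors' method) learns a dictionary on the features reaching one node of a category hierarchy, sparse-codes those features, and fits a linear classifier on the codes; in the paper the dictionaries and classifiers are instead learned jointly under a tree loss.

```python
# Minimal per-node sketch: dictionary learning -> sparse codes -> classifier.
# Data, dictionary size, and regularization are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 64))      # stand-in image descriptors at this node
y = rng.integers(0, 3, size=300)        # child-category labels at this node

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit(X).transform(X)        # sparse codes w.r.t. the node dictionary
clf = LogisticRegression(max_iter=1000).fit(codes, y)
print("training accuracy at this node:", clf.score(codes, y))
```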

  14. Influence of Task Difficulty and Background Music on Working Memory Activity: Developmental Considerations.

    Science.gov (United States)

    Kaniel, Shlomo; Aram, Dorit

    1998-01-01

    A study of 300 children in kindergarten, grade 2, and grade 6 found that background music improved visual discrimination task performance at the youngest and middle ages and had no effect on the oldest participants. On a square identification task, background music had no influence on easy and difficult tasks but lowered performance on…

  15. Effects of zolpidem on sedation, anxiety, and memory in the plus-maze discriminative avoidance task.

    Science.gov (United States)

    Zanin, Karina A; Patti, Camilla L; Sanday, Leandro; Fernandes-Santos, Luciano; Oliveira, Larissa C; Poyares, Dalva; Tufik, Sergio; Frussa-Filho, Roberto

    2013-04-01

    Zolpidem (Zolp), a hypnotic drug prescribed to treat insomnia, may have negative effects on memory, but reports are inconsistent. We examined the effects of acute doses of Zolp (2, 5, or 10 mg/kg, i.p.) on memory formation (learning, consolidation, and retrieval) using the plus-maze discriminative avoidance task. Mice were acutely treated with Zolp 30 min before training or testing. In addition, the effects of Zolp and midazolam (Mid; a classic benzodiazepine) on consolidation at different time points were examined. The possible role of state dependency was investigated using combined pre-training and pre-test treatments. Zolp produced a dose-dependent sedative effect, without modifying anxiety-like behavior. The pre-training administration of 5 or 10 mg/kg resulted in retention deficits. When administered immediately after training or before testing, memory was preserved. Zolp post-training administration (2 or 3 h) impaired subsequent memory. There was no participation of state dependency phenomenon in the amnestic effects of Zolp. Similar to Zolp, Mid impaired memory consolidation when administered 1 h after training. Amnestic effects occurred when Zolp was administered either before or 2-3 h after training. These memory deficits are not related to state dependency. Moreover, Zolp did not impair memory retrieval. Notably, the memory-impairing effects of Zolp are similar to those of Mid, with the exception of the time point at which the drug can modify consolidation. Finally, the memory effects were unrelated to sedation or anxiolysis.

  16. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. Such effects have only been demonstrated with spatial learning and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  17. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    Science.gov (United States)

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.

  18. Differential effects of visual context on pattern discrimination by pigeons (Columba livia) and humans (Homo sapiens).

    Science.gov (United States)

    Kelly, Debbie M; Cook, Robert G

    2003-06-01

    Three experiments examined the role of contextual information during line orientation and line position discriminations by pigeons (Columba livia) and humans (Homo sapiens). Experiment 1 tested pigeons' performance with these stimuli in a target localization task using texture displays. Experiments 2 and 3 tested pigeons and humans, respectively, with small and large variations of these stimuli in a same-different task. Humans showed a configural superiority effect when tested with displays constructed from large elements but not when tested with the smaller, more densely packed texture displays. The pigeons, in contrast, exhibited a configural inferiority effect when required to discriminate line orientation, regardless of stimulus size. These contrasting results suggest a species difference in the perception and use of features and contextual information in the discrimination of line information.

  19. Astro 4 - a sounding rocket programme of the X-ray astronomy

    International Nuclear Information System (INIS)

    Henkel, R.; Pechstein, H.

    1980-01-01

    The 'Astro 4' program is part of the German Astronomy Sounding Rocket Program and was divided into two different tasks. The 'Astro 4/1' project had the task of performing X-ray spectroscopic investigations and imaging of solar coronal active regions by means of zone-plate cameras, a Rowland spectrograph and a KAP crystal spectrometer aboard a three-axis stabilized payload. The 'Astro 4/2' project had the task of investigating the X-ray sources Puppis A and the Crab Nebula by means of a Wolter telescope equipped with a position-sensitive proportional counter. Up to now, the three-axis stabilized payload 'Astro 4/2' has been the longest Skylark payload. (Auth.)

  20. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound...... that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology....

  1. Post-stimulus endogenous and exogenous oscillations are differentially modulated by task difficulty.

    Science.gov (United States)

    Li, Yun; Lou, Bin; Gao, Xiaorong; Sajda, Paul

    2013-01-01

    We investigate the modulation of post-stimulus endogenous and exogenous oscillations when a visual discrimination is made more difficult. We use exogenous frequency tagging to induce steady-state visually evoked potentials (SSVEP) while subjects perform a face-car discrimination task, the difficulty of which varies on a trial-to-trial basis by varying the noise (phase coherence) in the image. We simultaneously analyze amplitude modulations of the SSVEP and endogenous alpha activity as a function of task difficulty. SSVEP modulation can be viewed as a neural marker of attention toward/away from the primary task, while modulation of post-stimulus alpha is closely related to cortical information processing. We find that as the task becomes more difficult, the amplitude of SSVEP decreases significantly, approximately 250-450 ms post-stimulus. Significant changes in endogenous alpha amplitude follow SSVEP modulation, occurring at approximately 400-700 ms post-stimulus and, unlike the SSVEP, the alpha amplitude is increasingly suppressed as the task becomes less difficult. Our results demonstrate simultaneous measurement of endogenous and exogenous oscillations that are modulated by task difficulty, and that the specific timing of these modulations likely reflects underlying information processing flow during perceptual decision-making.
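
    For readers unfamiliar with frequency tagging, the sketch below (illustrative only, not the authors' pipeline) shows one common way to quantify SSVEP amplitude: take the FFT of an EEG epoch and read off the spectral amplitude at the tagging frequency. The sampling rate and tag frequency are assumptions.

```python
# Minimal sketch: single-trial SSVEP amplitude at the tagging frequency via the FFT.
import numpy as np

def ssvep_amplitude(epoch, fs, tag_hz):
    """Spectral amplitude of `epoch` at the frequency bin nearest tag_hz."""
    spectrum = np.fft.rfft(epoch * np.hanning(epoch.size))   # windowed FFT
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - tag_hz))]) / epoch.size

fs, tag = 500.0, 15.0                                        # assumed values
t = np.arange(0, 1.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * tag * t) + 0.5 * np.random.randn(t.size)
print(ssvep_amplitude(epoch, fs, tag))
```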

  2. Pharmacological inactivation does not support a unique causal role for intraparietal sulcus in the discrimination of visual number.

    Directory of Open Access Journals (Sweden)

    Nicholas K DeWind

    Full Text Available The "number sense" describes the intuitive ability to quantify without counting. Single neuron recordings in non-human primates and functional imaging in humans suggest the intraparietal sulcus is an important neuroanatomical locus of numerical estimation. Other lines of inquiry implicate the IPS in numerous other functions, including attention and decision making. Here we provide a direct test of whether IPS has functional specificity for numerosity judgments. We used muscimol to reversibly and independently inactivate the ventral and lateral intraparietal areas in two monkeys performing a numerical discrimination task and a color discrimination task, roughly equilibrated for difficulty. Inactivation of either area caused parallel impairments in both tasks and no evidence of a selective deficit in numerical processing. These findings do not support a causal role for the IPS in numerical discrimination, except insofar as it also has a role in the discrimination of color. We discuss our findings in light of several alternative hypotheses of IPS function, including a role in orienting responses, a general cognitive role in attention and decision making processes and a more specific role in ordinal comparison that encompasses both number and color judgments.

  3. Reduced Chromatic Discrimination in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Franklin, Anna; Sowden, Paul; Notman, Leslie; Gonzalez-Dixon, Melissa; West, Dorotea; Alexander, Iona; Loveday, Stephen; White, Alex

    2010-01-01

    Atypical perception in Autism Spectrum Disorders (ASD) is well documented (Dakin & Frith, 2005). However, relatively little is known about colour perception in ASD. Less accurate performance on certain colour tasks has led some to argue that chromatic discrimination is reduced in ASD relative to typical development (Franklin, Sowden, Burley,…

  4. Long term memory for noise: evidence of robust encoding of very short temporal acoustic patterns.

    Directory of Open Access Journals (Sweden)

    Jayalakshmi Viswanathan

    2016-11-01

    Full Text Available Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs; the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participants' discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities.
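
    The stimulus construction is simple enough to sketch. The code below is my reconstruction of the described logic (not the authors' stimulus code): a 1-s cyclic noise is a 0.5-s random segment played twice, looping shifts the origin of the cycle, and scrambling chops a sound into short bits and shuffles them. The bit durations follow the abstract; the sampling rate is an assumption.

```python
# Minimal sketch of cyclic-noise stimuli with looped and scrambled variants.
import numpy as np

rng = np.random.default_rng(0)
fs = 44100                                    # assumed sampling rate

half = rng.standard_normal(fs // 2)           # 0.5 s of white noise
cyclic_noise = np.concatenate([half, half])   # 1-s CN: two identical halves
plain_noise = rng.standard_normal(fs)         # 1-s plain random noise (N)

def scramble(sound, bit_ms, fs=fs, rng=rng):
    """Chop `sound` into bit_ms-long bits and shuffle their order."""
    bit_len = int(fs * bit_ms / 1000)
    n_bits = sound.size // bit_len
    bits = sound[: n_bits * bit_len].reshape(n_bits, bit_len)
    return bits[rng.permutation(n_bits)].ravel()

scrambled_cn = scramble(cyclic_noise, bit_ms=10)   # 10-ms bits, as in the study
looped_cn = np.roll(cyclic_noise, fs // 4)         # shift the origin of the cycle
```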

  5. Visible Contrast Energy Metrics for Detection and Discrimination

    Science.gov (United States)

    Ahumada, Albert; Watson, Andrew

    2013-01-01

    Contrast energy was proposed by Watson, Robson, & Barlow as a useful metric for representing luminance contrast target stimuli because it represents the detectability of the stimulus in photon noise for an ideal observer. Like the eye, the ear is a complex transducer system, but relatively simple sound level meters are used to characterize sounds. These meters provide a range of frequency sensitivity functions and integration times depending on the intended use. We propose here the use of a range of contrast energy measures with different spatial frequency contrast sensitivity weightings, eccentricity sensitivity weightings, and temporal integration times. When detection thresholds are plotted using such measures, the results show what the eye sees best when these variables are taken into account in a standard way. The suggested weighting functions revise the Standard Spatial Observer for luminance contrast detection and extend it into the near periphery. Under the assumption that detection is limited only by internal noise, discrimination performance can be predicted by metrics based on the visible energy of the difference images.
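
    As a hedged aid to the definition (an assumption about the basic, unweighted metric rather than the authors' full weighted measures), contrast energy can be computed for a discrete luminance movie as the sum of squared Weber contrast over space and time, scaled by pixel area and frame duration; the proposed variants would insert contrast-sensitivity and eccentricity weightings before squaring.

```python
# Minimal sketch of unweighted contrast energy for a luminance movie,
# in units of deg^2 * s. The weighting stages described in the record are omitted.
import numpy as np

def contrast_energy(luminance, deg_per_pixel, frame_s):
    """luminance: array of shape (frames, rows, cols)."""
    mean_lum = luminance.mean()
    contrast = (luminance - mean_lum) / mean_lum           # Weber contrast
    return float(np.sum(contrast ** 2) * deg_per_pixel ** 2 * frame_s)

movie = 50 + 5 * np.random.randn(30, 64, 64)               # stand-in target movie
print(contrast_energy(movie, deg_per_pixel=0.05, frame_s=1 / 60))
```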

  6. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory afforded by prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory in EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  7. Minivosc - a minimal virtual oscillator driver for ALSA (Advanced Linux Sound Architecture)

    DEFF Research Database (Denmark)

    Dimitrov, Smilen; Serafin, Stefania

    2012-01-01

    Understanding the construction and implementation of sound cards (as examples of digital audio hardware) can be a demanding task, requiring insight into both hardware and software issues. An important step towards this goal, is the understanding of audio drivers and how they fit in the flow of ex...

  8. Polysyllable Speech Accuracy and Predictors of Later Literacy Development in Preschool Children With Speech Sound Disorders.

    Science.gov (United States)

    Masso, Sarah; Baker, Elise; McLeod, Sharynne; Wang, Cen

    2017-07-12

    The aim of this study was to determine if polysyllable accuracy in preschoolers with speech sound disorders (SSD) was related to known predictors of later literacy development: phonological processing, receptive vocabulary, and print knowledge. Polysyllables-words of three or more syllables-are important to consider because unlike monosyllables, polysyllables have been associated with phonological processing and literacy difficulties in school-aged children. They therefore have the potential to help identify preschoolers most at risk of future literacy difficulties. Participants were 93 preschool children with SSD from the Sound Start Study. Participants completed the Polysyllable Preschool Test (Baker, 2013) as well as phonological processing, receptive vocabulary, and print knowledge tasks. Cluster analysis was completed, and 2 clusters were identified: low polysyllable accuracy and moderate polysyllable accuracy. The clusters were significantly different based on 2 measures of phonological awareness and measures of receptive vocabulary, rapid naming, and digit span. The clusters were not significantly different on sound matching accuracy or letter, sound, or print concept knowledge. The participants' poor performance on print knowledge tasks suggested that as a group, they were at risk of literacy difficulties but that there was a cluster of participants at greater risk-those with both low polysyllable accuracy and poor phonological processing.

  9. Using otolith shape for intraspecific discrimination: the case of gurnards (Scorpaeniformes, Triglidae)

    Directory of Open Access Journals (Sweden)

    Stefano Montanini

    2015-11-01

    Full Text Available The sagittal otoliths are sound transducers and play an important role in fish hearing. Triglidae (Teleostei, Scorpaeniformes) are known for their sound-producing ability in agonistic contexts related to territorial defence, reproduction and competitive feeding (Amorim et al., 2004). Chelidonichthys cuculus and C. lucerna show a significant body size-depth relationship and species-specific feeding strategies with growth. Both juveniles and adults of C. cuculus prey on necto-benthic invertebrates, while C. lucerna specimens change diet from crustaceans to teleosts during growth (Stagioni et al., 2012; Vallisneri et al., 2014; Montanini et al., 2015). The goal of this study was to analyze intraspecific shape variations in the sagitta of model species of gurnards. 217 specimens were collected during bottom trawl surveys in the Adriatic Sea (northeastern Mediterranean). Each left sagitta was removed, cleaned in an ultrasound bath and kept dry. The otolith digital images were processed to calculate five shape indices (aspect ratio, roundness, rectangularity, ellipticity and circularity). Indices were normalised to avoid allometric effects according to Lleonart et al. (2000), then processed by linear discriminant analysis (LDA). The SHAPE program was used to extract the outline and to assess the variability of shapes (EFA method), estimated through principal component analysis (PCA). Considering the first two discriminant functions, the LDA plot showed a clear separation between juveniles and adults for both species. Regarding the EFA, the first 4 principal components explained over 80% of the variance, and significant differences were found at the critical size between juveniles and adults for all the components analysed. The allometric trends corresponded to a relative elongation of the sulcus acusticus and an increase of the excisura ostii. The combined use of the two external outline methods should be highly informative for intraspecific discrimination and might be related to
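
    The five shape indices are standard geometric descriptors. The sketch below uses one common set of definitions based on outline area, perimeter, and bounding length/width; these formulas are my assumption and may differ in detail from those used in the study.

```python
# Minimal sketch of five otolith shape indices from basic outline measurements.
import math

def shape_indices(area, perimeter, length, width):
    return {
        "aspect_ratio": length / width,
        "roundness": 4 * area / (math.pi * length ** 2),
        "rectangularity": area / (length * width),
        "ellipticity": (length - width) / (length + width),
        "circularity": perimeter ** 2 / (4 * math.pi * area),  # 1 for a perfect circle
    }

# Example: a hypothetical sagitta 6.0 mm long and 3.2 mm wide
print(shape_indices(area=14.8, perimeter=16.5, length=6.0, width=3.2))
```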

  10. Long term effects of aversive reinforcement on colour discrimination learning in free-flying bumblebees.

    Directory of Open Access Journals (Sweden)

    Miguel A Rodríguez-Gironés

    Full Text Available The results of behavioural experiments provide important information about the structure and information-processing abilities of the visual system. Nevertheless, if we want to infer from behavioural data how the visual system operates, it is important to know how different learning protocols affect performance and to devise protocols that minimise noise in the response of experimental subjects. The purpose of this work was to investigate how reinforcement schedule and individual variability affect the learning process in a colour discrimination task. Free-flying bumblebees were trained to discriminate between two perceptually similar colours. The target colour was associated with sucrose solution, and the distractor could be associated with water or quinine solution throughout the experiment, or with one substance during the first half of the experiment and the other during the second half. Both acquisition and final performance of the discrimination task (measured as proportion of correct choices) were determined by the choice of reinforcer during the first half of the experiment: regardless of whether bees were trained with water or quinine during the second half of the experiment, bees trained with quinine during the first half learned the task faster and performed better during the whole experiment. Our results confirm that the choice of stimuli used during training affects the rate at which colour discrimination tasks are acquired and show that early contact with a strongly aversive stimulus can be sufficient to maintain high levels of attention during several hours. On the other hand, bees which took more time to decide on which flower to alight were more likely to make correct choices than bees which made fast decisions. This result supports the existence of a trade-off between foraging speed and accuracy, and highlights the importance of measuring choice latencies during behavioural experiments focusing on cognitive abilities.

  11. Sound radiation contrast in MR phase images. Method for the representation of elasticity, sound damping, and sound impedance changes

    International Nuclear Information System (INIS)

    Radicke, Marcus

    2009-01-01

    The method presented in this thesis combines ultrasound techniques with magnetic resonance tomography (MRT). In absorbing media, an ultrasonic wave generates a static force in the direction of sound propagation. At sound intensities of a few W/cm² and sound frequencies in the lower MHz range, this force produces a tissue displacement in the micrometer range. The displacement depends on the sound power, the sound frequency, the sound absorption, and the elastic properties of the tissue. An MRT sequence from Siemens Healthcare AG was modified so that it measures the tissue displacement (indirectly), encodes it as grey values, and presents it as a 2D image. By means of the grey values, the course of the sound beam in the tissue can be visualized, so that sound obstacles (changes of the sound impedance) can additionally be detected. From the acquired MRT images, spatial changes of the tissue parameters sound absorption and elasticity can be detected. This thesis presents measurements that demonstrate the feasibility and future potential of this method, especially for breast cancer diagnostics. [de]

  12. Initial uncertainty impacts statistical learning in sound sequence processing.

    Science.gov (United States)

    Todd, Juanita; Provost, Alexander; Whitson, Lisa; Mullens, Daniel

    2016-11-01

    This paper features two studies confirming a lasting impact of first learning on how subsequent experience is weighted in early relevance-filtering processes. In both studies participants were exposed to sequences of sound that contained a regular pattern on two different timescales. Regular patterning in sound is readily detected by the auditory system and used to form "prediction models" that define the most likely properties of sound to be encountered in a given context. The presence and strength of these prediction models is inferred from changes in automatically elicited components of auditory evoked potentials. Both studies employed sound sequences that contained both a local and longer-term pattern. The local pattern was defined by a regular repeating pure tone occasionally interrupted by a rare deviating tone (p=0.125) that was physically different (a 30 ms vs. 60 ms duration difference in one condition and a 1000 Hz vs. 1500 Hz frequency difference in the other). The longer-term pattern was defined by the rate at which the two tones alternated probabilities (i.e., the tone that was first rare became common and the tone that was first common became rare). There was no task related to the tones and participants were asked to ignore them while focussing attention on a movie with subtitles. Auditory-evoked potentials revealed long-lasting modulatory influences based on whether the tone was initially encountered as rare and unpredictable or common and predictable. The results are interpreted as evidence that probability (or indeed predictability) assigns a differential information-value to the two tones that in turn affects the extent to which prediction models are updated and imposed. These effects are exposed for both common and rare occurrences of the tones. The studies contribute to a body of work that reveals that probabilistic information is not faithfully represented in these early evoked potentials and instead exposes that predictability (or conversely
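
    To make the two timescales of patterning concrete, here is a small illustrative script (mine, not the authors') that generates a sequence with a rare deviant (p = 0.125) within each block and swaps the rare and common tones between blocks; block length and tone labels are arbitrary placeholders.

```python
# Illustrative sketch only: a tone sequence with a local pattern (rare deviant,
# p = 0.125) and a longer-term pattern in which the tones swap roles each block.
import random

def build_sequence(n_blocks=4, tones_per_block=480, p_rare=0.125,
                   tones=("30ms", "60ms")):
    common, rare = tones
    sequence = []
    for _ in range(n_blocks):
        sequence += [rare if random.random() < p_rare else common
                     for _ in range(tones_per_block)]
        common, rare = rare, common   # longer-term pattern: the two tones swap roles
    return sequence

seq = build_sequence()
print(seq[:20])
print(seq.count("30ms") / len(seq))   # ~0.5 overall, but ~0.125 or ~0.875 within a block
```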

  13. Adolescent fluoxetine exposure produces enduring, sex-specific alterations of visual discrimination and attention in rats.

    Science.gov (United States)

    LaRoche, Ronee B; Morgan, Russell E

    2007-01-01

    Over the past two decades the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs for use in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered (P.O.) fluoxetine (10 mg/kg) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) and then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally-relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.

  14. Localization of self-generated synthetic footstep sounds on different walked-upon materials through headphones

    DEFF Research Database (Denmark)

    Turchet, Luca; Spagnol, Simone; Geronazzo, Michele

    2016-01-01

    typologies of surface materials: solid (e.g., wood) and aggregate (e.g., gravel). Different sound delivery methods (mono, stereo, binaural) as well as several surface materials, in presence or absence of concurrent contextual auditory information provided as soundscapes, were evaluated in a vertical...... localization task. Results showed that solid surfaces were localized significantly farther from the walker's feet than the aggregate ones. This effect was independent of the used rendering technique, of the presence of soundscapes, and of merely temporal or spectral attributes of sound. The effect...

  15. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination.

    Science.gov (United States)

    Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K; Fröhlich, Flavio

    2016-03-30

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition.

  16. Relation between functional connectivity and rhythm discrimination in children who do and do not stutter

    Directory of Open Access Journals (Sweden)

    Soo-Eun Chang

    2016-01-01

    Full Text Available Our ability to perceive and produce rhythmic patterns in the environment supports fundamental human capacities ranging from music and language processing to the coordination of action. This article considers whether spontaneous correlated brain activity within a basal ganglia-thalamocortical (rhythm) network is associated with individual differences in auditory rhythm discrimination. Moreover, do children who stutter with demonstrated deficits in rhythm perception have weaker links between rhythm network functional connectivity and rhythm discrimination? All children in the study underwent a resting-state fMRI session, from which functional connectivity measures within the rhythm network were extracted from spontaneous brain activity. In a separate session, the same children completed an auditory rhythm-discrimination task, where behavioral performance was assessed using signal detection analysis. We hypothesized that in typically developing children, rhythm network functional connectivity would be associated with behavioral performance on the rhythm discrimination task, but that this relationship would be attenuated in children who stutter. Results supported our hypotheses, lending strong support for the view that (1) children who stutter have weaker rhythm network connectivity and (2) the lack of a relation between rhythm network connectivity and rhythm discrimination in children who stutter may be an important contributing factor to the etiology of stuttering.
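
    As a reminder of what the signal detection analysis computes (a generic d' calculation, not the study's actual code or data), a minimal sketch:

```python
# Minimal sketch of a d' sensitivity index for a same/different rhythm
# discrimination task. The trial counts below are made up.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # log-linear correction avoids infinite z-scores at rates of 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(d_prime(hits=34, misses=6, false_alarms=10, correct_rejections=30))
```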

  17. Cross-cultural influences on rhythm processing: reproduction, discrimination, and beat tapping.

    Science.gov (United States)

    Cameron, Daniel J; Bentley, Jocelyn; Grahn, Jessica A

    2015-01-01

    The structures of musical rhythm differ between cultures, despite the fact that the ability to entrain movement to musical rhythm occurs in virtually all individuals across cultures. To measure the influence of culture on rhythm processing, we tested East African and North American adults on perception, production, and beat tapping for rhythms derived from East African and Western music. To assess rhythm perception, participants identified whether pairs of rhythms were the same or different. To assess rhythm production, participants reproduced rhythms after hearing them. To assess beat tapping, participants tapped the beat along with repeated rhythms. We expected that performance in all three tasks would be influenced by the culture of the participant and the culture of the rhythm. Specifically, we predicted that a participant's ability to discriminate, reproduce, and accurately tap the beat would be better for rhythms from their own culture than for rhythms from another culture. In the rhythm discrimination task, there were no differences in discriminating culturally familiar and unfamiliar rhythms. In the rhythm reproduction task, both groups reproduced East African rhythms more accurately than Western rhythms, but East African participants also showed an effect of cultural familiarity, leading to a significant interaction. In the beat tapping task, participants in both groups tapped the beat more accurately for culturally familiar than for unfamiliar rhythms. Moreover, there were differences between the two participant groups, and between the two types of rhythms, in the metrical level selected for beat tapping. The results demonstrate that culture does influence the processing of musical rhythm. In terms of the function of musical rhythm, our results are consistent with theories that musical rhythm enables synchronization. Musical rhythm may foster musical cultural identity by enabling within-group synchronization to music, perhaps supporting social cohesion.

  18. Environmental Enrichment Expedites Acquisition and Improves Flexibility on a Temporal Sequencing Task in Mice

    Directory of Open Access Journals (Sweden)

    Darius Rountree-Harrison

    2018-03-01

    Full Text Available Environmental enrichment (EE), via increased opportunities for voluntary exercise, sensory stimulation and social interaction, can enhance the function of and behaviours regulated by cognitive circuits. Little is known, however, as to how this intervention affects performance on complex tasks that engage multiple, definable learning and memory systems. Accordingly, we utilised the Olfactory Temporal Order Discrimination (OTOD) task, which requires animals to recall and report sequence information about a series of recently encountered olfactory stimuli. This approach allowed us to compare animals raised in either enriched or standard laboratory housing conditions on a number of measures, including the acquisition of a complex discrimination task, temporal sequence recall accuracy (i.e., the ability to accurately recall a sequence of events) and acuity (i.e., the ability to resolve past events that occurred in close temporal proximity), as well as cognitive flexibility tested in the style of a rule reversal and an Intra-Dimensional Shift (IDS). We found that enrichment accelerated the acquisition of the temporal order discrimination task, although neither accuracy nor acuity was affected at asymptotic performance levels. Further, while a subtle enhancement of overall performance was detected for both rule reversal and IDS versions of the task, accelerated performance recovery could only be attributed to the shift-like contingency change. These findings suggest that EE can affect specific elements of complex, multi-faceted cognitive processes.

  19. A method for recognition of coexisting environmental sound sources based on the Fisher’s linear discriminant classifier

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Haddad, Karim; Song, Wookeun

    2015-01-01

    A method for sound recognition of coexisting environmental noise sources by applying pattern recognition techniques is developed. The investigated technique could benefit several areas of application, such as noise impact assessment, acoustic pollution mitigation and soundscape characterization...

  20. The influence of phonetic dimensions on aphasic speech perception

    NARCIS (Netherlands)

    de Kok, D.A.; Jonkers, R.; Bastiaanse, Y.R.M.

    2010-01-01

    Individuals with aphasia have more problems detecting small differences between speech sounds than larger ones. This paper reports how phonemic processing is impaired and how this is influenced by speechreading. A non-word discrimination task was carried out with 'audiovisual', 'auditory only' and

  1. Spatial discrimination and visual discrimination

    DEFF Research Database (Denmark)

    Haagensen, Annika M. J.; Grand, Nanna; Klastrup, Signe

    2013-01-01

    Two methods investigating learning and memory in juvenile Gottingen minipigs were evaluated for potential use in preclinical toxicity testing. Twelve minipigs were tested using a spatial hole-board discrimination test including a learning phase and two memory phases. Five minipigs were tested...... in a visual discrimination test. The juvenile minipigs were able to learn the spatial hole-board discrimination test and showed improved working and reference memory during the learning phase. Performance in the memory phases was affected by the retention intervals, but the minipigs were able to remember...... the concept of the test in both memory phases. Working memory and reference memory were significantly improved in the last trials of the memory phases. In the visual discrimination test, the minipigs learned to discriminate between the three figures presented to them within 9-14 sessions. For the memory test...

  2. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    Directory of Open Access Journals (Sweden)

    Ineke eFengler

    2015-04-01

    Full Text Available Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 hours and tested them before and after visual deprivation (i.e., after 8 h on average) and at 4-week follow-up on an audio-visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio-visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated (i.e., tactile) to the later tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, and that this reweighting may persist over longer durations.

  3. Tactile Acuity in the Blind: A Closer Look Reveals Superiority over the Sighted in Some but Not All Cutaneous Tasks

    Science.gov (United States)

    Alary, Flamine; Duquette, Marco; Goldstein, Rachel; Chapman, C. Elaine; Voss, Patrice; La Buissonniere-Ariza, Valerie; Lepore, Franco

    2009-01-01

    Previous studies have shown that blind subjects may outperform the sighted on certain tactile discrimination tasks. We recently showed that blind subjects outperformed the sighted in a haptic 2D-angle discrimination task. The purpose of this study was to compare the performance of the same blind (n = 16) and sighted (n = 17, G1) subjects in three…

  4. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High-frequency automotive interior noise above 500 Hz considerably affects passenger comfort. To reduce this noise, sound insulation material is often laminated onto body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly estimate the sound absorption and insulation properties of laminated structures and is handy ...

  5. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  6. Auditory Stimulus Processing and Task Learning Are Adequate in Dyslexia, but Benefits from Regularities Are Reduced

    Science.gov (United States)

    Daikhin, Luba; Raviv, Ofri; Ahissar, Merav

    2017-01-01

    Purpose: The reading deficit for people with dyslexia is typically associated with linguistic, memory, and perceptual-discrimination difficulties, whose relation to reading impairment is disputed. We proposed that automatic detection and usage of serial sound regularities for individuals with dyslexia is impaired (anchoring deficit hypothesis),…

  7. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

    associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  8. Emission of sound from the mammalian inner ear

    Science.gov (United States)

    Reichenbach, Tobias; Stefanovic, Aleksandra; Nin, Fumiaki; Hudspeth, A. J.

    2013-03-01

    The mammalian inner ear, or cochlea, not only acts as a detector of sound but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the active mechanical process that sensitizes the cochlea and sharpens its frequency discrimination. It remains uncertain how these signals propagate back to the middle ear, from which they are emitted as sound. Although reverse propagation might occur through waves on the cochlear basilar membrane, experiments suggest the existence of a second component in otoacoustic emissions. We have combined theoretical and experimental studies to show that mechanical signals can also be transmitted by waves on Reissner's membrane, a second elastic structure within the cochlea. We have developed a theoretical description of wave propagation on the parallel Reissner's and basilar membranes and its role in the emission of distortion products. By scanning laser interferometry we have measured traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results accord with the theory and thus support a role for Reissner's membrane in otoacoustic emission. T. R. holds a Career Award at the Scientific Interface from the Burroughs Wellcome Fund; A. J. H. is an Investigator of Howard Hughes Medical Institute.

  9. Efficiency of observer brightness discrimination in original and subtracted images

    International Nuclear Information System (INIS)

    Swensson, R.G.; Kazda, I.; Nawfel, R.; Judy, P.F.

    1990-01-01

    This paper reports that, for an optimal image calculation, discriminating pairs of objects that differ only in brightness is equivalent to discriminating polarity differences in their subtraction images. This experiment measured and compared how efficiently human observers could perform the two different discriminations posed by such original and subtracted images. Disks of equal size, separated by their diameter, were superimposed on uncorrelated, Gaussian noise backgrounds at different contrasts that made the two disks readily visible on the displayed radiographs. The digitally subtracted image-regions containing the two disks of each pair (shifted to registration) produced subtraction images with low-contrast disks that were either brighter or darker than the background. Observer performance in each task (measured by receiver operating characteristic [ROC] analysis) was compared with that of an optimal calculation (cross-correlator)
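
    A hedged sketch of what such an optimal calculation can look like (my own toy example, not the paper's implementation): a cross-correlator (matched filter) detects a known low-contrast disk in uncorrelated Gaussian noise, and observer efficiency is taken as the squared ratio of human to ideal d'; the human d' below is an invented placeholder.

```python
# Toy ideal-observer (cross-correlator) detection of a known disk in
# uncorrelated Gaussian noise, with a made-up "human" d' for comparison.
import numpy as np

rng = np.random.default_rng(0)

def disk(size=64, radius=8, contrast=0.1):
    y, x = np.mgrid[:size, :size]
    return contrast * ((x - size / 2) ** 2 + (y - size / 2) ** 2 <= radius ** 2)

signal = disk()
sigma = 1.0
n_trials = 2000

def decision(image):
    # matched-filter decision variable: cross-correlation with the known signal
    return np.sum(image * signal)

present = [decision(signal + rng.normal(0, sigma, signal.shape)) for _ in range(n_trials)]
absent = [decision(rng.normal(0, sigma, signal.shape)) for _ in range(n_trials)]

d_ideal = (np.mean(present) - np.mean(absent)) / np.sqrt(
    0.5 * (np.var(present) + np.var(absent)))
d_human = 0.7 * d_ideal                      # hypothetical human performance
print("observer efficiency:", (d_human / d_ideal) ** 2)
```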

  10. Mental workload while driving: effects on visual search, discrimination, and decision making.

    Science.gov (United States)

    Recarte, Miguel A; Nunes, Luis M

    2003-06-01

    The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.

  11. Challenges in discriminating profanity from hate speech

    Science.gov (United States)

    Malmasi, Shervin; Zampieri, Marcos

    2018-03-01

    In this study, we approach the problem of distinguishing general profanity from hate speech in social media, something which has not been widely considered. Using a new dataset annotated specifically for this task, we employ supervised classification along with a set of features that includes n-grams, skip-grams and clustering-based word representations. We apply approaches based on single classifiers as well as more advanced ensemble classifiers and stacked generalisation, achieving the best result for this 3-class classification task. Analysis of the results reveals that discriminating hate speech and profanity is not a simple task, which may require features that capture a deeper understanding of the text not always possible with surface n-grams. The variability of gold labels in the annotated data, due to differences in the subjective adjudications of the annotators, is also an issue. Other directions for future work are discussed.
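
    For orientation only (this is not the authors' system), a minimal character n-gram baseline for a 3-class profanity/hate/neither classifier might look like the following; the toy corpus and labels are invented.

```python
# Illustrative character n-gram baseline for 3-class text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["example post one", "another short post", "a third message", "more text here"]
labels = ["neither", "profanity", "hate", "neither"]        # toy labels

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["yet another message"]))
```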

  12. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    Science.gov (United States)

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the

  13. Pharmacological evidence that both cognitive memory and habit formation contribute to within-session learning of concurrent visual discriminations.

    Science.gov (United States)

    Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer

    2010-07-01

    The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-h ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-h ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from approximately 11 trials/pair on the 24-h ITI task to approximately 5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert.

  14. Making fictions sound real - On film sound, perceptual realism and genre

    Directory of Open Access Journals (Sweden)

    Birger Langkjær

    2010-05-01

    Full Text Available This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  16. Human listeners provide insights into echo features used by dolphins (Tursiops truncatus) to discriminate among objects.

    Science.gov (United States)

    Delong, Caroline M; Au, Whitlow W L; Harley, Heidi E; Roitblat, Herbert L; Pytka, Lisa

    2007-08-01

    Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In 2 experiments, human listeners performed as well or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can be tested using dolphins as subjects.

  17. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    Science.gov (United States)

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

    Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine if performance on the two contrasts is significantly different in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion, and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL. The second stage of the analysis was a
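
    As a simplified illustration of the second model (mine, not the study's analysis, and ignoring the repeated-measures structure), a plain logistic fit of criterion achievement against sensation level could look like this; the data points are invented.

```python
# Simplified logistic model: probability of reaching the 0.75 discrimination
# criterion as a function of sensation level. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

sensation_level = np.array([[10], [20], [30], [40], [50], [60], [20], [40]])
reached_criterion = np.array([0, 0, 1, 1, 1, 1, 0, 1])   # 1 = proportion correct > 0.75

model = LogisticRegression().fit(sensation_level, reached_criterion)
print(model.predict_proba([[35]])[0, 1])   # estimated probability at 35 dB SL
```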

  18. The anterior thalamus is critical for overcoming interference in a context-dependent odor discrimination task.

    Science.gov (United States)

    Law, L Matthew; Smith, David M

    2012-10-01

    The anterior thalamus (AT) is anatomically interconnected with the hippocampus and other structures known to be involved in memory, and the AT is involved in many of the same learning and memory functions as the hippocampus. For example, like the hippocampus, the AT is involved in spatial cognition and episodic memory. The hippocampus also has a well-documented role in contextual memory processes, but it is not known whether the AT is similarly involved in contextual memory. In the present study, we assessed the role of the AT in contextual memory processes by temporarily inactivating the AT and training rats on a recently developed context-based olfactory list learning task, which was designed to assess the use of contextual information to resolve interference. Rats were trained on one list of odor discrimination problems, followed by training on a second list in either the same context or a different context. In order to induce interference, some of the odors appeared on both lists with their predictive value reversed. Control rats that learned the two lists in different contexts performed significantly better than rats that learned the two lists in the same context. However, AT lesions completely abolished this contextual learning advantage, a result that is very similar to the effects of hippocampal inactivation. These findings demonstrate that the AT, like the hippocampus, is involved in contextual memory and suggest that the hippocampus and AT are part of a functional circuit involved in contextual memory. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  19. The use of cognitive cues for anticipatory strategies in a dynamic postural control task - validation of a novel approach to dual-task testing

    DEFF Research Database (Denmark)

    Læssøe, Uffe; Grarup, Bo; Bangshaab, Jette

    2016-01-01

    Introduction: Dual-task testing is relevant in the assessment of postural control. A combination of a primary (motor) and a secondary (distracting cognitive) task is most often used. It remains a challenge, however, to standardize and monitor the cognitive task. In this study a new dual......-task testing approach with a facilitating, rather than distracting, cognitive component was evaluated. Methods: Thirty-one community-dwelling elderly and fifteen young people were tested with respect to their ability to use anticipatory postural control strategies. The motor task consisted of twenty...... two sessions. Conclusion: The dual-task test was sensitive enough to discriminate between elderly and young people. It revealed that the elderly did not utilize cognitive cues for their anticipatory postural control strategies as well as the young were able to. The test procedure was feasible...
  20. Social Interaction and Conditional Self-Discrimination under a Paradigm of Avoidance and Positive Reinforcement in Wistar Rats

    Science.gov (United States)

    Penagos-Corzo, Julio C.; Pérez-Acosta, Andrés M.; Hernández, Ingrid

    2015-01-01

    The experiment reported here uses a conditional self-discrimination task to examine the influence of social interaction on the facilitation of self-discrimination in rats. The study is based on a previous report (Penagos-Corzo et al., 2011) showing positive evidence of such facilitation, but extending the exposition to social interaction…

  1. Discrimination and Anti-discrimination in Denmark

    DEFF Research Database (Denmark)

    Olsen, Tore Vincents

    The purpose of this report is to describe and analyse Danish anti-discrimination legislation and the debate about discrimination in Denmark in order to identify present and future legal challenges. The main focus is the implementation of the EU anti-discrimination directives in Danish law...

  2. The Attentional Dependence of Emotion Cognition is Variable with the Competing Task

    Directory of Open Access Journals (Sweden)

    Cheng Chen

    2016-11-01

    Full Text Available The relationship between emotion and attention has fascinated researchers for decades. Many previous studies have used eye-tracking, ERP, MEG and fMRI to explore this issue but have reached different conclusions: some researchers hold that emotion cognition is an automatic process and independent of attention, while others believe that emotion cognition is modulated by attentional resources and is a type of controlled processing. The present research aimed to investigate this controversy, and we hypothesized that the attentional dependence of emotion cognition is variable with the competing task. Eye-tracking technology and a dual-task paradigm were adopted, and subjects' attention was manipulated to fixate on the central task to investigate whether subjects could detect the emotional faces presented in the peripheral area with a decrease or near-absence of attention. The results revealed that when the peripheral task was emotional face discrimination but the central attention-demanding task was different, subjects performed well in the peripheral task, which means that emotional information can be processed in parallel with other stimuli, and there may be a specific channel in the human brain for processing emotional information. However, when the central and peripheral tasks were both emotional face discrimination, subjects could not perform well in the peripheral task, indicating that the processing of emotional information required attentional resources and that it is a type of controlled processing. Therefore, we concluded that the attentional dependence of emotion cognition varies with the competing task.

  3. The meaning of city noises: Investigating sound quality in Paris (France)

    Science.gov (United States)

    Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie; Guastavino, Catherine; Maffiolo, Valerie

    2004-05-01

    The sound quality of Paris (France) was investigated by using field inquiries in actual environments (open questionnaires) and using recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and of mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces has been demonstrated, in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This calls for considering both quantitative and qualitative descriptions to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.

  4. Adversarial Advantage Actor-Critic Model for Task-Completion Dialogue Policy Learning

    OpenAIRE

    Peng, Baolin; Li, Xiujun; Gao, Jianfeng; Liu, Jingjing; Chen, Yun-Nung; Wong, Kam-Fai

    2017-01-01

    This paper presents a new method --- adversarial advantage actor-critic (Adversarial A2C), which significantly improves the efficiency of dialogue policy learning in task-completion dialogue systems. Inspired by generative adversarial networks (GAN), we train a discriminator to differentiate responses/actions generated by dialogue agents from responses/actions by experts. Then, we incorporate the discriminator as another critic into the advantage actor-critic (A2C) framework, to encourage the...
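
    A rough sketch of the idea (my paraphrase, not the paper's code): a discriminator scores state-action pairs as expert-like, and its log-output is folded into the advantage used by the actor-critic update. Network sizes and the toy batch below are placeholders.

```python
# Sketch: adversarial bonus from a discriminator added to an A2C-style advantage.
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4

policy = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))
value = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(), nn.Linear(32, 1))
disc = nn.Sequential(nn.Linear(state_dim + n_actions, 32), nn.Tanh(),
                     nn.Linear(32, 1), nn.Sigmoid())

def policy_loss(states, actions, returns):
    """Actor loss with an adversarial term: the advantage is augmented by the
    log of the discriminator's 'expert-likeness' score for the taken action."""
    log_prob = torch.log_softmax(policy(states), dim=-1)
    log_prob = log_prob.gather(1, actions.unsqueeze(1)).squeeze(1)
    one_hot = torch.nn.functional.one_hot(actions, n_actions).float()
    expert_likeness = disc(torch.cat([states, one_hot], dim=-1)).squeeze(1)
    advantage = returns - value(states).squeeze(1) + torch.log(expert_likeness + 1e-8)
    return -(log_prob * advantage.detach()).mean()

# toy batch
states = torch.randn(16, state_dim)
actions = torch.randint(0, n_actions, (16,))
returns = torch.randn(16)
print(policy_loss(states, actions, returns))
```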

  5. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption indicates the transformation of sound energy into heat. It is, for instance, employed to design the acoustics in rooms. The noise emitted by machinery and plants shall be reduced before arriving at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components at the walls with well-defined absorption characteristics, which are adjusted for corresponding demands. Sound absorbers also play an important role in acoustic capsules, ducts and screens to avoid sound immission from noise intensive environments into the neighbourhood.

  6. Pitch ranking, electrode discrimination, and physiological spread of excitation using current steering in cochlear implants

    Science.gov (United States)

    Goehring, Jenny L.; Neff, Donna L.; Baudhuin, Jacquelyn L.; Hughes, Michelle L.

    2014-01-01

    The first objective of this study was to determine whether adaptive pitch-ranking and electrode-discrimination tasks with cochlear-implant (CI) recipients produce similar results for perceiving intermediate “virtual-channel” pitch percepts using current steering. Previous studies have not examined both behavioral tasks in the same subjects with current steering. A second objective was to determine whether a physiological metric of spatial separation using the electrically evoked compound action potential spread-of-excitation (ECAP SOE) function could predict performance in the behavioral tasks. The metric was the separation index (Σ), defined as the difference in normalized amplitudes between two adjacent ECAP SOE functions, summed across all masker electrodes. Eleven CII or 90 K Advanced Bionics (Valencia, CA) recipients were tested using pairs of electrodes from the basal, middle, and apical portions of the electrode array. The behavioral results, expressed as d′, showed no significant differences across tasks. There was also no significant effect of electrode region for either task. ECAP Σ was not significantly correlated with pitch ranking or electrode discrimination for any of the electrode regions. Therefore, the ECAP separation index is not sensitive enough to predict perceptual resolution of virtual channels. PMID:25480063
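
    Written out explicitly (my notation, assuming absolute differences), the separation index for two adjacent stimulation sites i and j sums the normalized ECAP SOE amplitude differences over the masker electrodes m:

$$\Sigma_{ij} = \sum_{m} \left| \hat{A}_i(m) - \hat{A}_j(m) \right|$$

    where $\hat{A}_i(m)$ denotes the normalized ECAP amplitude for probe site i with masker electrode m.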

  7. Practice makes perfect: the neural substrates of tactile discrimination by Mah-Jong experts include the primary visual cortex

    Directory of Open Access Journals (Sweden)

    Honda Manabu

    2006-12-01

    Full Text Available Abstract Background It has yet to be determined whether visual-tactile cross-modal plasticity due to visual deprivation, particularly in the primary visual cortex (V1), is solely due to visual deprivation or if it is a result of long-term tactile training. Here we conducted an fMRI study with normally-sighted participants who had undergone long-term training on the tactile shape discrimination of the two-dimensional (2D) shapes on Mah-Jong tiles (Mah-Jong experts). Eight Mah-Jong experts and twelve healthy volunteers who were naïve to Mah-Jong performed a tactile shape matching task using Mah-Jong tiles with no visual input. Furthermore, seven out of eight experts performed a tactile shape matching task with unfamiliar 2D Braille characters. Results When participants performed tactile discrimination of Mah-Jong tiles, the left lateral occipital cortex (LO) and V1 were activated in the well-trained subjects. In the naïve subjects, the LO was activated but V1 was not activated. Both the LO and V1 of the well-trained subjects were activated during Braille tactile discrimination tasks. Conclusion The activation of V1 in subjects trained in tactile discrimination may represent altered cross-modal responses as a result of long-term training.

  8. Discriminability of Single and Multichannel Intracortical Microstimulation within Somatosensory Cortex

    Directory of Open Access Journals (Sweden)

    Cynthia Kay Overstreet

    2016-12-01

    Full Text Available The addition of tactile and proprioceptive feedback to neuroprosthetic limbs is expected to significantly improve the control of these devices. Intracortical microstimulation (ICMS) of somatosensory cortex is a promising method of delivering this sensory feedback. To date, the main focus of somatosensory ICMS studies has been to deliver discriminable signals, corresponding to varying intensity, to a single location in cortex. However, multiple independent and simultaneous streams of sensory information will need to be encoded by ICMS to provide functionally relevant feedback for a neuroprosthetic limb (e.g., encoding contact events and pressure on multiple digits). In this study, we evaluated the ability of an awake, behaving non-human primate (Macaca mulatta) to discriminate ICMS stimuli delivered on multiple electrodes spaced within somatosensory cortex. We delivered serial stimulation on single electrodes to evaluate the discriminability of sensations corresponding to ICMS of distinct cortical locations. Additionally, we delivered trains of multichannel stimulation, derived from a tactile sensor, synchronously across multiple electrodes. Our results indicate that discrimination of multiple ICMS stimuli is a challenging task, but that discriminable sensory percepts can be elicited by both single and multichannel ICMS on electrodes spaced within somatosensory cortex.

  9. Assessment of brain damage in a geriatric population through use of a visual-searching task.

    Science.gov (United States)

    Turbiner, M; Derman, R M

    1980-04-01

    This study was designed to assess the discriminative capacity of a visual-searching task for brain damage, as described by Goldstein and Kyc (1978), for 10 hospitalized male, brain-damaged patients, 10 hospitalized male schizophrenic patients, and 10 normal subjects in a control group, all of whom were approximately 65 yr. old. The derived data indicated, at a statistically significant level, that the visual-searching task was effective in successfully classifying 80% of the brain-damaged sample when compared to the schizophrenic patients and discriminating 90% of the brain-damaged patients from normal subjects.

  10. Lexical exposure to native language dialects can improve non-native phonetic discrimination.

    Science.gov (United States)

    Olmstead, Annie J; Viswanathan, Navin

    2018-04-01

    Nonnative phonetic learning is an area of great interest for language researchers, learners, and educators alike. In two studies, we examined whether nonnative phonetic discrimination of Hindi dental and retroflex stops can be improved by exposure to lexical items bearing the critical nonnative stops. We extend the lexical retuning paradigm of Norris, McQueen, and Cutler (Cognitive Psychology, 47, 204-238, 2003) by having naive American English (AE)-speaking participants perform a pretest-training-posttest procedure. They performed an AXB discrimination task with the Hindi retroflex and dental stops before and after transcribing naturally produced words from an Indian English speaker that either contained these tokens or not. Only those participants who heard words with the critical nonnative phones improved in their posttest discrimination. This finding suggests that exposure to nonnative phones in native lexical contexts supports learning of difficult nonnative phonetic discrimination.

  11. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound provides and offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  12. Pigeons can discriminate "good" and "bad" paintings by children.

    Science.gov (United States)

    Watanabe, Shigeru

    2010-01-01

    Humans have the unique ability to create art, but non-human animals may be able to discriminate "good" art from "bad" art. In this study, I investigated whether pigeons could be trained to discriminate between paintings that had been judged by humans as either "bad" or "good". To do this, adult human observers first classified several children's paintings as either "good" (beautiful) or "bad" (ugly). Using operant conditioning procedures, pigeons were then reinforced for pecking at "good" paintings. After the pigeons learned the discrimination task, they were presented with novel pictures of both "good" and "bad" children's paintings to test whether they had successfully learned to discriminate between these two stimulus categories. The results showed that pigeons could discriminate novel "good" and "bad" paintings. Then, to determine which cues the subjects used for the discrimination, I conducted tests of the stimuli when the paintings were of reduced size or grayscale. In addition, I tested their ability to discriminate when the painting stimuli were mosaic-processed or partially occluded. The pigeons maintained discrimination performance when the paintings were reduced in size. However, discrimination performance decreased when stimuli were presented as grayscale images or when a mosaic effect was applied to the original stimuli in order to disrupt spatial frequency. Thus, the pigeons used both color and pattern cues for their discrimination. The partial occlusion did not disrupt the discriminative behavior, suggesting that the pigeons did not attend to particular parts, namely the upper, lower, left or right half, of the paintings. These results suggest that the pigeons are capable of learning the concept of a stimulus class that humans name "good" pictures. The second experiment showed that pigeons learned to discriminate watercolor paintings from pastel paintings. The subjects showed generalization to novel paintings. Then, as in the first experiment, a size reduction test

  13. Age- and sex-related disturbance in a battery of sensorimotor and cognitive tasks in Kunming mice.

    Science.gov (United States)

    Chen, Gui-Hai; Wang, Yue-Ju; Zhang, Li-Qun; Zhou, Jiang-Ning

    2004-12-15

    A battery of tasks, i.e. beam walking, open field, tightrope, radial six-arm water maze (RAWM), novel-object recognition and olfactory discrimination, was used to determine whether there was age- and sex-related memory deterioration in Kunming (KM) mice, and whether these tasks are independent or correlated with each other. Two age groups of KM mice were used: a younger group (7-8 months old, 12 males and 11 females) and an older group (17-18 months old, 12 males and 12 females). The results showed that the spatial learning ability and memory in the RAWM were lower in older female KM mice relative to younger female mice and older male mice. Consistent with this, in the novel-object recognition task, a non-spatial cognitive task, older female mice but not older male mice had impairment of short-term memory. In olfactory discrimination, another non-spatial task, the older mice retained this ability. Interestingly, female mice performed better than males, especially in the younger group. The older females exhibited sensorimotor impairment in the tightrope task and low locomotor activity in the open-field task. Moreover, older mice spent a longer time in the peripheral squares of the open-field than younger ones. The non-spatial cognitive performance in the novel-object recognition and olfactory discrimination tasks was related to performance in the open-field, whereas the spatial cognitive performance in the RAWM was not related to performance in any of the three sensorimotor tasks. These results suggest that disturbance of spatial learning and memory, as well as selective impairment of non-spatial learning and memory, existed in older female KM mice.

  14. The third-stimulus temporal discrimination threshold: focusing on the temporal processing of sensory input within primary somatosensory cortex.

    Science.gov (United States)

    Leodori, Giorgio; Formica, Alessandra; Zhu, Xiaoying; Conte, Antonella; Belvisi, Daniele; Cruccu, Giorgio; Hallett, Mark; Berardelli, Alfredo

    2017-10-01

    The somatosensory temporal discrimination threshold (STDT) has been used in recent years to investigate time processing of sensory information, but little is known about the physiological correlates of somatosensory temporal discrimination. The objective of this study was to investigate whether the time interval required to discriminate between two stimuli varies according to the number of stimuli in the task. We used the third-stimulus temporal discrimination threshold (ThirdDT), defined as the shortest time interval at which an individual distinguishes a third stimulus following a pair of stimuli delivered at the STDT. The STDT and ThirdDT were assessed in 31 healthy subjects. In a subgroup of 10 subjects, we evaluated the effects of stimulus intensity on the ThirdDT. In a subgroup of 16 subjects, we evaluated the effects of S1 continuous theta-burst stimulation (S1-cTBS) on the STDT and ThirdDT. Results show that the ThirdDT is shorter than the STDT. We found a positive correlation between STDT and ThirdDT values. As long as the stimulus intensity was within the perceivable and painless range, it did not affect ThirdDT values. S1-cTBS significantly affected both STDT and ThirdDT, although the latter was affected to a greater extent and for a longer period of time. We conclude that the interval needed to discriminate between time-separated tactile stimuli is related to the number of stimuli used in the task. STDT and ThirdDT are encoded in S1, probably by a shared tactile temporal encoding mechanism whose performance rapidly changes during the perception process. ThirdDT is a new method to measure somatosensory temporal discrimination. NEW & NOTEWORTHY To investigate whether the time interval required to discriminate between stimuli varies according to changes in the stimulation pattern, we used the third-stimulus temporal discrimination threshold (ThirdDT). We found that somatosensory temporal discrimination acuity varies according to the number of stimuli in the

  15. Memory-Based Quantity Discrimination in Coyotes (Canis latrans)

    Directory of Open Access Journals (Sweden)

    Salif Mahamane

    2014-08-01

    Full Text Available Previous research has shown that the ratio between competing quantities of food significantly mediates coyotes' (Canis latrans) ability to choose the larger of two food options. These previous findings are consistent with predictions made by Weber's Law and indicate that coyotes possess quantity discrimination abilities that are similar to those of other species. Importantly, coyotes' discrimination abilities are similar to those of domestic dogs (Canis lupus familiaris), indicating that quantitative discrimination may remain stable throughout certain species' evolution. However, while previously shown in two domestic dogs, it is unknown whether coyotes possess the ability to discriminate visual quantities from memory. Here, we address this question by displaying different ratios of food quantities to 14 coyotes before placing the choices out of sight. The coyotes were then allowed to select one of the two non-visible food quantities. Coyotes' discrimination of quantity from memory does not follow Weber's Law in this particular task. These results suggest that working memory in coyotes may not be adapted to maintain information regarding quantity as well as it is in domestic dogs. The likelihood of a coyote's choosing the large option increased when it was presented with difficult ratios of food options first, before it was later presented with trials using more easily discriminable ratios, and when the large option was placed on one particular side. This suggests that learning or motivation increased across trials when coyotes experienced difficult ratios first, and that the location of food may have been more salient in working memory than the quantity of food.
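
    The ratio dependence discussed above is usually formalized with a Weber-fraction model. The sketch below is a generic log-Gaussian Weber model of choosing the larger of two quantities, shown only to make the ratio effect concrete; the Weber fraction value is arbitrary and is not an estimate from this study.

      # Generic ratio-dependent (Weber-like) choice model; w = 0.25 is an assumed value.
      from math import erf, log, sqrt

      def p_choose_larger(n_small, n_large, w=0.25):
          """Probability of picking the larger quantity under a log-Gaussian Weber model."""
          d = abs(log(n_large / n_small)) / (w * sqrt(2.0))
          return 0.5 * (1.0 + erf(d / sqrt(2.0)))          # standard normal CDF of d

      for small, large in [(1, 4), (2, 4), (3, 4)]:          # easy to hard ratios
          print(f"{small} vs {large}: P(correct) = {p_choose_larger(small, large):.2f}")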

  16. Learning-induced uncertainty reduction in perceptual decisions is task-dependent

    Directory of Open Access Journals (Sweden)

    Feitong eYang

    2014-05-01

    Full Text Available Perceptual decision making in which decisions are reached primarily from extracting and evaluating sensory information requires close interactions between the sensory system and decision-related networks in the brain. Uncertainty pervades every aspect of this process and can be considered related to either the stimulus signal or decision criterion. Here, we investigated the learning-induced reduction of both the signal and criterion uncertainty in two perceptual decision tasks based on two Glass pattern stimulus sets. This was achieved by manipulating spiral angle and signal level of radial and concentric Glass patterns. The behavioral results showed that the participants trained with a task based on criterion comparison improved their categorization accuracy for both tasks, whereas the participants who were trained on a task based on signal detection improved their categorization accuracy only on their trained task. We fitted the behavioral data with a computational model that can dissociate the contribution of the signal and criterion uncertainties. The modeling results indicated that the participants trained on the criterion comparison task reduced both the criterion and signal uncertainty. By contrast, the participants who were trained on the signal detection task only reduced their signal uncertainty after training. Our results suggest that the signal uncertainty can be resolved by training participants to extract signals from noisy environments and to discriminate between clear signals, which are evidenced by reduced perception variance after both training procedures. Conversely, the criterion uncertainty can only be resolved by the training of fine discrimination. These findings demonstrate that uncertainty in perceptual decision-making can be reduced with training but that the reduction of different types of uncertainty is task-dependent.
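
    To make the signal/criterion distinction concrete, the sketch below models categorization accuracy as a cumulative Gaussian of the distance between the stimulus and the category boundary, with independent signal (perceptual) and criterion (decision) noise. It is a generic illustration in the spirit of the abstract, not the specific model fitted in the study; all parameter values are invented.

      # Generic decomposition of categorization accuracy into signal and criterion noise.
      from math import erf, sqrt

      def p_correct(stim_distance, sigma_signal, sigma_criterion):
          """P(correct category) for a stimulus stim_distance away from the boundary."""
          sigma_total = sqrt(sigma_signal ** 2 + sigma_criterion ** 2)
          z = stim_distance / sigma_total
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))

      # Training that reduces only signal uncertainty vs. training that reduces both:
      print(p_correct(5.0, sigma_signal=8.0, sigma_criterion=6.0))   # before training
      print(p_correct(5.0, sigma_signal=4.0, sigma_criterion=6.0))   # signal noise reduced
      print(p_correct(5.0, sigma_signal=4.0, sigma_criterion=2.0))   # both reduced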

  17. Analysis of speech sounds is left-hemisphere predominant at 100-150ms after sound onset.

    Science.gov (United States)

    Rinne, T; Alho, K; Alku, P; Holi, M; Sinkkonen, J; Virtanen, J; Bertrand, O; Näätänen, R

    1999-04-06

    Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.

  18. Influence of Computerized Sounding Out on Spelling Performance for Children who do and do not rely on AAC

    Science.gov (United States)

    McCarthy, Jillian H.; Hogan, Tiffany P.; Beukelman, David R.; Schwarz, Ilsa E.

    2015-01-01

    Purpose Spelling is an important skill for individuals who rely on augmentative and alternative communication (AAC). The purpose of this study was to investigate how computerized sounding out influenced spelling accuracy of pseudo-words. Computerized sounding out was defined as an elongated presentation of a word, providing an opportunity for a child to hear all the sounds in the word at a slower rate. Methods Seven children with cerebral palsy, four who use AAC and three who do not, participated in a single-subject AB design. Results The results of the study indicated that the use of computerized sounding out increased the phonologic accuracy of the pseudo-words produced by participants. Conclusion The study provides preliminary evidence for the use of computerized sounding out during spelling tasks for children with cerebral palsy who do and do not use AAC. Future directions and clinical implications are discussed. PMID:24512195
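
    The "computerized sounding out" manipulation is described as elongating a word so the child hears its sounds at a slower rate. One generic way to slow a recording without changing its pitch is time stretching; the sketch below shows this with librosa. It is not the software used in the study, and the file name and stretch rate are assumptions.

      # Slow a recorded word without changing its pitch (generic time stretching).
      import librosa
      import soundfile as sf

      y, sr = librosa.load("word.wav", sr=None)              # hypothetical recording
      slowed = librosa.effects.time_stretch(y, rate=0.5)     # half speed, same pitch
      sf.write("word_slow.wav", slowed, sr)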

  19. Perirhinal Cortex Resolves Feature Ambiguity in Configural Object Recognition and Perceptual Oddity Tasks

    Science.gov (United States)

    Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.

    2007-01-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…

  20. Short-term and long-term effects of diazepam on the memory for discrimination and generalization of scopolamine.

    Science.gov (United States)

    Casasola-Castro, C; Weissmann-Sánchez, L; Calixto-González, E; Aguayo-Del Castillo, A; Velázquez-Martínez, D N

    2017-10-01

    Benzodiazepines are among the most widely prescribed and misused psychopharmaceutical drugs. Although they are well tolerated, they are also capable of producing amnestic effects similar to those observed after pharmacological or organic cholinergic dysfunction. To date, the effect of the benzodiazepine diazepam on the memory for discrimination of anticholinergic drugs has not been reported. The aim of the present study was to analyze the immediate and long-term effects of diazepam on a drug discrimination task with scopolamine. Male Wistar rats were trained to discriminate between scopolamine and saline administration using a two-lever discrimination task. Once discrimination was acquired, the subjects were divided into three independent groups: (1) control, (2) diazepam, and (3) chronic diazepam administration (10 days). Subsequently, generalization curves for scopolamine were obtained. Additionally, the diazepam and control groups were re-evaluated after 90 days without having been given any other treatment. The results showed that diazepam produced a significant reduction in the generalization gradient for scopolamine, indicating an impairment of discrimination. The negative effect of diazepam persisted even 90 days after the drug had been administered. Meanwhile, prior administration of diazepam for 10 days completely suppressed the generalization curve and the general performance of the subjects. The results suggest that diazepam affects memory for the stimulus discrimination of anticholinergic drugs and does so persistently, which could be an important consideration during the treatment of amnesic patients with benzodiazepines.

  1. GRace: a MATLAB-based application for fitting the discrimination-association model.

    Science.gov (United States)

    Stefanutti, Luca; Vianello, Michelangelo; Anselmi, Pasquale; Robusto, Egidio

    2014-10-28

    The Implicit Association Test (IAT) is a computerized two-choice discrimination task in which stimuli have to be categorized as belonging to target categories or attribute categories by pressing, as quickly and accurately as possible, one of two response keys. The discrimination association model has been recently proposed for the analysis of reaction time and accuracy of an individual respondent to the IAT. The model disentangles the influences of three qualitatively different components on the responses to the IAT: stimuli discrimination, automatic association, and termination criterion. The article presents General Race (GRace), a MATLAB-based application for fitting the discrimination association model to IAT data. GRace has been developed for Windows as a standalone application. It is user-friendly and does not require any programming experience. The use of GRace is illustrated on the data of a Coca Cola-Pepsi Cola IAT, and the results of the analysis are interpreted and discussed.
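
    The sketch below is a toy race simulation in the spirit of the model described above: a stimulus-discrimination process and an automatic-association process race toward a termination criterion, jointly shaping reaction time and accuracy. It is not the GRace implementation (which is a MATLAB application); the distributions, parameters, and "override" rule are arbitrary choices made only for illustration.

      # Toy race between discrimination and automatic association on an IAT trial.
      import random

      def simulate_trial(compatible, discrim_rate=6.0, assoc_rate=3.0,
                         t0=0.35, recovery=0.15, p_slip=0.2):
          """Return (reaction time in s, correct?) for one simulated trial."""
          t_discrim = random.expovariate(discrim_rate)   # finishing time of discrimination
          t_assoc = random.expovariate(assoc_rate)       # finishing time of the association
          if t_assoc < t_discrim:
              if compatible:
                  return t0 + t_assoc, True              # association points at the correct key
              if random.random() < p_slip:
                  return t0 + t_assoc, False             # association is executed anyway: error
              return t0 + t_discrim + recovery, True     # association overridden at a time cost
          return t0 + t_discrim, True                    # discrimination terminates the trial

      def summarize(compatible, n=5000):
          trials = [simulate_trial(compatible) for _ in range(n)]
          return sum(rt for rt, _ in trials) / n, sum(ok for _, ok in trials) / n

      print("compatible block   (mean RT, accuracy):", summarize(True))
      print("incompatible block (mean RT, accuracy):", summarize(False))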

  2. Office noise: Can headphones and masking sound attenuate distraction by background speech?

    Science.gov (United States)

    Jahncke, Helena; Björkeholm, Patrik; Marsh, John E; Odelius, Johan; Sörqvist, Patrik

    2016-11-22

    Background speech is one of the most disturbing noise sources at shared workplaces in terms of both annoyance and performance-related disruption. Therefore, it is important to identify techniques that can efficiently protect performance against distraction. It is also important that the techniques are perceived as satisfactory and are subjectively evaluated as effective in their capacity to reduce distraction. The aim of the current study was to compare three methods of attenuating distraction from background speech: masking a background voice with nature sound through headphones, masking a background voice with other voices through headphones, and merely wearing headphones (without masking) as a way to attenuate the background sound. Quiet was deployed as a baseline condition. Thirty students participated in an experiment employing a repeated measures design. Performance (serial short-term memory) was impaired by background speech (1 voice), but this impairment was attenuated when the speech was masked - and in particular when it was masked by nature sound. Furthermore, perceived workload was lowest in the quiet condition and significantly higher in all other sound conditions. Notably, the headphones tested as a sound-attenuating device (i.e., without masking) did not protect against the effects of background speech on performance and subjective workload. Nature sound was the only masking condition that worked as a protector of performance, at least in the context of the serial recall task. However, despite the attenuation of distraction by nature sound, perceived workload was still high - suggesting that it is difficult to find a masker that is both effective and perceived as satisfactory.

  3. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    Science.gov (United States)

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the
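
    The sketch below conveys only the general idea of regional linear discriminative mapping: fit a linear discriminant in a small neighbourhood around every voxel and record how well it separates the groups. It is a rough searchlight-style illustration on synthetic data, not the MIDAS algorithm, which instead derives an analytic voxel-wise statistic from the regional discriminants.

      # Searchlight-style local linear discriminant mapping on synthetic 1-D "images".
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_subjects, n_voxels, radius = 80, 60, 3
      X = rng.normal(size=(n_subjects, n_voxels))
      y = np.repeat([0, 1], n_subjects // 2)
      X[y == 1, 25:30] += 0.8                      # inject a group difference in voxels 25-29

      scores = np.zeros(n_voxels)
      for v in range(n_voxels):
          lo, hi = max(0, v - radius), min(n_voxels, v + radius + 1)
          patch = X[:, lo:hi]                      # neighbourhood centred on voxel v
          # cross-validated accuracy of the local discriminant as a crude map value
          scores[v] = cross_val_score(LinearDiscriminantAnalysis(), patch, y, cv=5).mean()

      print("most discriminative voxels:", np.argsort(scores)[-5:])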

  4. Superior analgesic effect of an active distraction versus pleasant unfamiliar sounds and music

    DEFF Research Database (Denmark)

    Garza Villarreal, Eduardo A.; Brattico, Elvira; Vase, Lene

    2012-01-01

    Previous studies have shown a superior analgesic effect of favorite music over other passive or active distractive tasks. However, it is unclear what mediates this effect. In this study, we investigated to what extent distraction, emotional valence and cognitive styles may explain part...... questionnaires concerning cognitive styles (Baron-Cohen and self-report). Active distraction with PASAT led to significantly less pain intensity and unpleasantness as compared to music and sound. In turn, both music and sound relieved pain significantly more than noise. When music and sound had the same level...... of valence they relieved pain to a similar degree. The emotional ratings of the conditions were correlated with the amount of pain relief, and cognitive styles seemed to influence the analgesia effect. These findings suggest that the pain-relieving effect previously seen in relation to music may be at least...

  5. Gap junctions and memory: an investigation using a single trial discrimination avoidance task for the neonate chick.

    Science.gov (United States)

    Verwey, L J; Edwards, T M

    2010-02-01

    Gap junctions are important to how the brain functions but are relatively under-investigated with respect to their contribution towards behaviour. In the present study, a single-trial discrimination avoidance task was used to investigate the effect of the gap junction inhibitor 18-alpha-glycyrrhetinic acid (alphaGA) on retention. Past studies within our research group have implied a potential role for gap junctions during the short-term memory (STM) stage, which decays by 15 min post-training. A retention function study comparing 10 microM alphaGA and vehicle given immediately post-training demonstrated a significant main effect of drug, with retention loss at all times of test (10-180 min post-training). Given that the most common gap junction in the brain is that forming the astrocytic network, it is reasonable to conclude that alphaGA was acting upon these junctions. To confirm this finding and interpretation, two additional investigations were undertaken using endothelin-1 (ET-1) and ET-1+tolbutamide. Importantly, a retention function study using 10 nM ET-1 replicated the retention loss observed for alphaGA. In order to confirm that ET-1 was acting on astrocytic gap junctions, the amnestic action of ET-1 was effectively challenged with increasing concentrations of tolbutamide. The present findings suggest that astrocytic gap junctions are important for memory processing. Copyright 2009 Elsevier Inc. All rights reserved.

  6. Endogenous visuospatial attention increases visual awareness independent of visual discrimination sensitivity.

    Science.gov (United States)

    Vernet, Marine; Japee, Shruti; Lokey, Savannah; Ahmed, Sara; Zachariou, Valentinos; Ungerleider, Leslie G

    2017-08-12

    Visuospatial attention often improves task performance by increasing signal gain at attended locations and decreasing noise at unattended locations. Attention is also believed to be the mechanism that allows information to enter awareness. In this experiment, we assessed whether orienting endogenous visuospatial attention with cues differentially affects visual discrimination sensitivity (an objective task performance) and visual awareness (the subjective feeling of perceiving) during the same discrimination task. Gabor patch targets were presented laterally, either at low contrast (contrast stimuli) or at high contrast embedded in noise (noise stimuli). Participants reported their orientation either in a 3-alternative choice task (clockwise, counterclockwise, unknown) that allowed for both objective and subjective reports, or in a 2-alternative choice task (clockwise, counterclockwise) that provided a control for objective reports. Signal detection theory models were fit to the experimental data: estimated perceptual sensitivity reflected objective performance; decision criteria, or subjective biases, were a proxy for visual awareness. Attention increased sensitivity (i.e., improved objective performance) for the contrast, but not for the noise stimuli. Indeed, with the latter, attention did not further enhance the already high target signal or reduce the already low uncertainty on its position. Interestingly, for both contrast and noise stimuli, attention resulted in more liberal criteria, i.e., awareness increased. The noise condition is thus an experimental configuration where people think they see the targets they attend to better, even if they do not. This could be explained by an internal representation of their attentional state, which influences awareness independent of objective visual signals. Copyright © 2017. Published by Elsevier Ltd.
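
    For readers unfamiliar with the two quantities contrasted above, the sketch below gives the textbook equal-variance Gaussian signal detection estimates of sensitivity (d') and criterion (c) from hit and false-alarm counts. It is only the standard computation, shown to make the sensitivity/criterion distinction concrete; it is not the specific models fitted in the study.

      # Textbook equal-variance SDT estimates of d' and criterion c.
      from statistics import NormalDist

      def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
          z = NormalDist().inv_cdf
          # a common correction to keep rates away from exactly 0 or 1
          hr = (hits + 0.5) / (hits + misses + 1)
          far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
          d_prime = z(hr) - z(far)               # higher = better discrimination
          criterion = -0.5 * (z(hr) + z(far))    # more negative = more liberal reporting
          return d_prime, criterion

      print(dprime_and_criterion(hits=70, misses=30, false_alarms=20, correct_rejections=80))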

  7. Lexical processing and distributional knowledge in sound-spelling mapping in a consistent orthography: A longitudinal study of reading and spelling in dyslexic and typically developing children.

    Science.gov (United States)

    Marinelli, Chiara Valeria; Cellini, Pamela; Zoccolotti, Pierluigi; Angelelli, Paola

    This study examined the ability to master lexical processing and use knowledge of the relative frequency of sound-spelling mappings in both reading and spelling. Twenty-four dyslexic and dysgraphic children and 86 typically developing readers were followed longitudinally in 3rd and 5th grades. Effects of word regularity, word frequency, and probability of sound-spelling mappings were examined in two experimental tasks: (a) spelling to dictation; and (b) orthographic judgment. Dyslexic children showed larger regularity and frequency effects than controls in both tasks. Sensitivity to distributional information of sound-spelling mappings was already detected by third grade, indicating early acquisition even in children with dyslexia. Although with notable differences, knowledge of the relative frequencies of sound-spelling mapping influenced both reading and spelling. Results are discussed in terms of their theoretical and empirical implications.
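
    The "probability of sound-spelling mappings" manipulated above can be estimated from corpus counts of phoneme-grapheme correspondences. The sketch below shows the computation on invented counts; the phonemes, graphemes, and frequencies are placeholders, not values from the study.

      # Toy estimate of P(spelling | sound) from phoneme-grapheme counts (invented data).
      from collections import defaultdict

      counts = {                      # (phoneme, grapheme) -> hypothetical corpus frequency
          ("/k/", "c"): 700,
          ("/k/", "ch"): 150,
          ("/k/", "q"): 50,
          ("/ts/", "z"): 400,
          ("/ts/", "zz"): 100,
      }

      totals = defaultdict(int)
      for (phoneme, _), n in counts.items():
          totals[phoneme] += n

      p_spelling_given_sound = {
          pair: n / totals[pair[0]] for pair, n in counts.items()
      }
      for pair, p in sorted(p_spelling_given_sound.items()):
          print(pair, f"{p:.2f}")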

  8. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text-based, not sound-based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...
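
    The record does not specify how the feature indexing will work; the sketch below shows one generic approach, representing each clip by its mean MFCC vector and retrieving the nearest indexed clip for a query. File names and the choice of features are assumptions, not the project's design.

      # Generic sound-feature indexing: mean-MFCC fingerprints and nearest-neighbour lookup.
      import numpy as np
      import librosa

      def mfcc_fingerprint(path, n_mfcc=13):
          y, sr = librosa.load(path, sr=None)
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

      index_files = ["dog_bark.wav", "door_slam.wav", "violin.wav"]   # hypothetical corpus
      index = np.stack([mfcc_fingerprint(f) for f in index_files])

      query = mfcc_fingerprint("query_clip.wav")
      best = int(np.argmin(np.linalg.norm(index - query, axis=1)))
      print("closest indexed sound:", index_files[best])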

  9. Early Stages of Melody Processing: Stimulus-Sequence and Task-Dependent Neuronal Activity in Monkey Auditory Cortical Fields A1 and R

    Science.gov (United States)

    Yin, Pingbo; Mishkin, Mortimer; Sutter, Mitchell; Fritz, Jonathan B.

    2008-01-01

    To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S−) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence–sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task. PMID:18842950

  10. Intrinsic motivation and attentional capture from gamelike features in a visual search task.

    Science.gov (United States)

    Miranda, Andrew T; Palmer, Evan M

    2014-03-01

    In psychology research studies, the goals of the experimenter and the goals of the participants often do not align. Researchers are interested in having participants who take the experimental task seriously, whereas participants are interested in earning their incentive (e.g., money or course credit) as quickly as possible. Creating experimental methods that are pleasant for participants and that reward them for effortful and accurate data generation, while not compromising the scientific integrity of the experiment, would benefit both experimenters and participants alike. Here, we explored a gamelike system of points and sound effects that rewarded participants for fast and accurate responses. We measured participant engagement at both cognitive and perceptual levels and found that the point system (which invoked subtle, anonymous social competition between participants) led to positive intrinsic motivation, while the sound effects (which were pleasant and arousing) led to attentional capture for rewarded colors. In a visual search task, points were awarded after each trial for fast and accurate responses, accompanied by short, pleasant sound effects. We adapted a paradigm from Anderson, Laurent, and Yantis (Proceedings of the National Academy of Sciences 108(25):10367-10371, 2011b), in which participants completed a training phase during which red and green targets were probabilistically associated with reward (a point bonus multiplier). During a test phase, no points or sounds were delivered, color was irrelevant to the task, and previously rewarded targets were sometimes presented as distractors. Significantly longer response times on trials in which previously rewarded colors were present demonstrated attentional capture, and positive responses to a five-question intrinsic-motivation scale demonstrated participant engagement.

  11. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  12. The time course of shape discrimination in the human brain.

    Science.gov (United States)

    Ales, Justin M; Appelbaum, L Gregory; Cottereau, Benoit R; Norcia, Anthony M

    2013-02-15

    The lateral occipital cortex (LOC) activates selectively to images of intact objects versus scrambled controls, is selective for the figure-ground relationship of a scene, and exhibits at least some degree of invariance for size and position. Because of these attributes, it is considered to be a crucial part of the object recognition pathway. Here we show that human LOC is critically involved in perceptual decisions about object shape. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on texture-defined figures segmented by either phase or orientation cues. The appearance or disappearance of a figure region from a uniform background generated robust visual evoked potentials throughout retinotopic cortex as determined by inverse modeling of the scalp voltage distribution. Contrasting responses from trials containing shape changes that were correctly detected (hits) with trials in which no change occurred (correct rejects) revealed stimulus-locked, target-selective activity in the occipital visual areas LOC and V4 preceding the subject's response. Activity that was locked to the subjects' reaction time was present in the LOC. Response-locked activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape-selectivity was present across four different stimulus configurations used to define the figure; LOC responses correlated with participants' reaction times. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged...... by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down....

  14. Gait characteristics and their discriminative power in geriatric patients with and without cognitive impairment.

    Science.gov (United States)

    Kikkert, Lisette H J; Vuillerme, Nicolas; van Campen, Jos P; Appels, Bregje A; Hortobágyi, Tibor; Lamoth, Claudine J C

    2017-08-15

    A detailed gait analysis (e.g., measures related to speed, self-affinity, stability, and variability) can help to unravel the underlying causes of gait dysfunction and identify cognitive impairment. However, because geriatric patients present with multiple conditions that also affect gait, results from healthy old adults cannot easily be extrapolated to geriatric patients. Hence, we (1) quantified gait outcomes based on dynamical systems theory, and (2) determined their discriminative power in three groups: healthy old adults, geriatric patients with cognitive impairment, and geriatric patients without cognitive impairment. For the present cross-sectional study, 25 healthy old adults recruited from the community (65 ± 5.5 years) and 70 geriatric patients with (n = 39) and without (n = 31) cognitive impairment from the geriatric day clinic of the MC Slotervaart hospital in Amsterdam (80 ± 6.6 years) were included. Participants walked for 3 min during single- and dual-tasking at self-selected speed while 3D trunk accelerations were registered with an iPod Touch G4. We quantified 23 gait outcomes that reflect multiple gait aspects. A multivariate model was built using Partial Least Squares Discriminant Analysis (PLS-DA) that best modelled participant group from the gait outcomes. For single-task walking, the PLS-DA model consisted of 4 latent variables that explained 63 and 41% of the variance in gait outcomes and group, respectively. Outcomes related to speed, regularity, predictability, and stability of trunk accelerations showed the highest discriminative power (VIP > 1). A high proportion of healthy old adults (96 and 93% for single- and dual-task, respectively) was correctly classified based on the gait outcomes. The discrimination of geriatric patients with and without cognitive impairment was poor, with 57% (single-task) and 64% (dual-task) of the patients misclassified. While geriatric patients vs. healthy old adults walked slower, and less regular, predictable, and
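
    The sketch below illustrates the PLS-DA step named above in its usual scikit-learn form: a PLS regression onto one-hot group labels, with classification by the largest predicted score. The data are synthetic stand-ins for the 23 gait outcomes, and the sketch omits the VIP (variable importance in projection) computation, so it is not the study's pipeline.

      # PLS-DA sketch: PLS regression onto dummy-coded groups, classify by argmax.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      n_per_group, n_outcomes = 30, 23                 # e.g., 23 gait outcomes per participant
      groups = np.repeat([0, 1, 2], n_per_group)       # healthy / patients without CI / with CI
      X = rng.normal(size=(groups.size, n_outcomes)) + groups[:, None] * 0.4
      Y = np.eye(3)[groups]                            # one-hot coding of group

      pls = PLSRegression(n_components=4)              # 4 latent variables, as reported above
      pls.fit(X, Y)
      predicted_group = pls.predict(X).argmax(axis=1)
      print("in-sample classification accuracy:", (predicted_group == groups).mean())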

  15. Cortical representations of communication sounds.

    Science.gov (United States)

    Heiser, Marc A; Cheung, Steven W

    2008-10-01

    This review summarizes recent research into cortical processing of vocalizations in animals and humans. There has been a resurgent interest in this topic accompanied by an increased number of studies using animal models with complex vocalizations and new methods in human brain imaging. Recent results from such studies are discussed. Experiments have begun to reveal the bilateral cortical fields involved in communication sound processing and the transformations of neural representations that occur among those fields. Advances have also been made in understanding the neuronal basis of interaction between developmental exposures and behavioral experiences with vocalization perception. Exposure to sounds during the developmental period produces large effects on brain responses, as do a variety of specific trained tasks in adults. Studies have also uncovered a neural link between the motor production of vocalizations and the representation of vocalizations in cortex. Parallel experiments in humans and animals are answering important questions about vocalization processing in the central nervous system. This dual approach promises to reveal microscopic, mesoscopic, and macroscopic principles of large-scale dynamic interactions between brain regions that underlie the complex phenomenon of vocalization perception. Such advances will yield a greater understanding of the causes, consequences, and treatment of disorders related to speech processing.

  16. Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…
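
    The sketch below is a bare-bones version of the discriminant function analysis (DFA) step mentioned above: a linear discriminant classifying integrated vs. independent responses from a handful of lexical indices. The features and data are synthetic placeholders; the study used a large battery of NLP indices rather than these three.

      # Bare-bones DFA: classify task type from a few (synthetic) lexical indices.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 200                                          # placeholder number of responses
      labels = np.repeat([0, 1], n // 2)               # 0 = independent, 1 = integrated
      # columns: word frequency, lexical diversity, academic-word proportion (all synthetic)
      features = rng.normal(size=(n, 3)) + labels[:, None] * np.array([0.6, -0.3, 0.8])

      accuracy = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=10).mean()
      print(f"cross-validated classification accuracy: {accuracy:.2f}")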

  17. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Full Text Available Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  18. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    Science.gov (United States)

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have avoided colour and, when they do encode colour, have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users who had their device either coded in line with or opposite to sound–colour correspondences. Users who had the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the highest improvements during the associative memory task also saw the greatest gains for recognising realistic objects that also featured these colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance for both colour and correspondences for sensory substitution use.
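
    The paper's exact colour-to-sound algorithm is not reproduced in this record; the sketch below is only a toy correspondence-style mapping (lightness to pitch, saturation to loudness, hue to a nominal timbre label) to show what a principled, correspondence-based coding might look like. All specific assignments and ranges are assumptions.

      # Toy correspondence-style colour-to-sound mapping (not the Creole's algorithm).
      import colorsys

      TIMBRES = ["strings", "brass", "woodwind", "bells", "voice", "organ"]   # arbitrary labels

      def colour_to_sound(r, g, b):
          """Map an RGB colour (0-255 per channel) to (frequency in Hz, amplitude, timbre)."""
          h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
          frequency = 220.0 * 2 ** (l * 3)      # lighter colours -> higher pitch (220-1760 Hz)
          amplitude = 0.2 + 0.8 * s             # more saturated -> louder
          timbre = TIMBRES[int(h * len(TIMBRES)) % len(TIMBRES)]
          return frequency, amplitude, timbre

      print(colour_to_sound(255, 40, 40))       # a saturated red
      print(colour_to_sound(200, 200, 210))     # a light, desaturated grey-blue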

  19. Effects of X-ray radiation on complex visual discrimination learning and social recognition memory in rats.

    Directory of Open Access Journals (Sweden)

    Catherine M Davis

    Full Text Available The present report describes an animal model for examining the effects of radiation on a range of neurocognitive functions in rodents that are similar to a number of basic human cognitive functions. Fourteen male Long-Evans rats were trained to perform an automated intra-dimensional set shifting task that consisted of their learning a basic discrimination between two stimulus shapes followed by more complex discrimination stages (e.g., a discrimination reversal, a compound discrimination, a compound reversal, a new shape discrimination, and an intra-dimensional stimulus discrimination reversal). One group of rats was exposed to head-only X-ray radiation (2.3 Gy at a dose rate of 1.9 Gy/min), while a second group received a sham-radiation exposure using the same anesthesia protocol. The irradiated group responded less, had elevated numbers of omitted trials, increased errors, and greater response latencies compared to the sham-irradiated control group. Additionally, social odor recognition memory was tested after radiation exposure by assessing the degree to which rats explored wooden beads impregnated with either their own odors or with the odors of novel, unfamiliar rats; however, no significant effects of radiation on social odor recognition memory were observed. These data suggest that rodent tasks assessing higher-level human cognitive domains are useful in examining the effects of radiation on the CNS, and may be applicable in approximating CNS risks from radiation exposure in clinical populations receiving whole brain irradiation.

  20. Effects of X-ray radiation on complex visual discrimination learning and social recognition memory in rats.

    Science.gov (United States)

    Davis, Catherine M; Roma, Peter G; Armour, Elwood; Gooden, Virginia L; Brady, Joseph V; Weed, Michael R; Hienz, Robert D

    2014-01-01

    The present report describes an animal model for examining the effects of radiation on a range of neurocognitive functions in rodents that are similar to a number of basic human cognitive functions. Fourteen male Long-Evans rats were trained to perform an automated intra-dimensional set shifting task that consisted of their learning a basic discrimination between two stimulus shapes followed by more complex discrimination stages (e.g., a discrimination reversal, a compound discrimination, a compound reversal, a new shape discrimination, and an intra-dimensional stimulus discrimination reversal). One group of rats was exposed to head-only X-ray radiation (2.3 Gy at a dose rate of 1.9 Gy/min), while a second group received a sham-radiation exposure using the same anesthesia protocol. The irradiated group responded less, had elevated numbers of omitted trials, increased errors, and greater response latencies compared to the sham-irradiated control group. Additionally, social odor recognition memory was tested after radiation exposure by assessing the degree to which rats explored wooden beads impregnated with either their own odors or with the odors of novel, unfamiliar rats; however, no significant effects of radiation on social odor recognition memory were observed. These data suggest that rodent tasks assessing higher-level human cognitive domains are useful in examining the effects of radiation on the CNS, and may be applicable in approximating CNS risks from radiation exposure in clinical populations receiving whole brain irradiation.