Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioral thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers, upon which computational modeling of phase-locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
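The signal-detection account sketched above (discrimination thresholds limited by internal plus encoding noise adding in quadrature) can be illustrated numerically. The function names and noise values below are illustrative assumptions, not the authors' computational model:

```python
import math

def dprime(delta_f_hz, sigma_internal_hz, sigma_encoding_hz):
    """Discriminability of a frequency difference when independent,
    Gaussian internal and encoding noise add in quadrature."""
    return delta_f_hz / math.hypot(sigma_internal_hz, sigma_encoding_hz)

def fd_threshold(sigma_internal_hz, sigma_encoding_hz, criterion=1.0):
    """Smallest detectable frequency difference (Hz) at a criterion d';
    shrinking the internal-noise term mimics learning-induced
    threshold improvement."""
    return criterion * math.hypot(sigma_internal_hz, sigma_encoding_hz)

# Halving internal noise (a stand-in for training) lowers the threshold.
before = fd_threshold(sigma_internal_hz=4.0, sigma_encoding_hz=3.0)  # 5.0 Hz
after = fd_threshold(sigma_internal_hz=2.0, sigma_encoding_hz=3.0)
assert after < before
```

On this toy account, transfer falls out for free: any probe stimulus whose threshold is limited by the same internal-noise term benefits from the reduction, which is why an additional mechanism is needed to explain asymmetric transfer.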
Shiller, Douglas M; Rochon, Marie-Lyne
Auditory feedback plays an important role in children's speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback; however, it is not known whether their capacity for motor learning is limited by immature auditory-perceptual abilities. Here, the link between speech perceptual ability and the capacity for motor learning was explored in two groups of 5- to 7-year-old children who underwent a period of auditory perceptual training followed by tests of speech motor adaptation to altered auditory feedback. One group received perceptual training on a speech acoustic property relevant to the motor task while a control group received perceptual training on an irrelevant speech contrast. Learned perceptual improvements led to an enhancement in speech motor adaptation (proportional to the perceptual change) only for the experimental group. The results indicate that children's ability to perceive relevant speech acoustic properties has a direct influence on their capacity for sensory-based speech motor adaptation.
Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R
Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated how varying amounts of positive feedback affected learning of frequency discrimination while listeners attempted to discriminate between three identical tones. Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while other groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.
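The key manipulation here is feedback that is independent of the listener's response. A schedule of that kind is simple to generate; the sketch below is a hypothetical illustration (function name and seed are assumptions), not the study's software:

```python
import random

def feedback_schedule(n_trials, p_feedback, seed=0):
    """Response-independent positive feedback: each trial is flagged for
    a 'correct' message with probability p_feedback, regardless of the
    listener's response (which cannot truly be correct or incorrect,
    since the three tones are identical)."""
    rng = random.Random(seed)
    return [rng.random() < p_feedback for _ in range(n_trials)]

sparse = feedback_schedule(1000, 0.10)  # the 10% group, which learned
dense = feedback_schedule(1000, 0.90)   # the excess-feedback group
assert sum(sparse) < sum(dense)
```

Because the flags are drawn independently of behavior, any group difference in learning must reflect how the amount of feedback is interpreted, not what it signals about accuracy.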
Weaver, Phyllis A.; Rosner, Jerome
Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…
Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
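Vocoding of the kind used to degrade the speech stimuli can be illustrated with a minimal noise vocoder: split the signal into bands, extract each band's amplitude envelope, and reimpose it on band-limited noise. The channel count, band edges, and filter choices below are assumptions for illustration, not the stimuli actually used:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=4000.0):
    """Minimal noise-vocoder sketch: log-spaced analysis bands,
    Hilbert envelopes, envelope-modulated band-limited noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)          # analysis band
        env = np.abs(hilbert(band))              # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                     # envelope on noise
    return out
```

The output preserves the slow amplitude structure of speech per band while discarding fine spectral detail, which is what makes the auditory phonetic features hard to learn and the visual signal potentially useful.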
Amitay, Sygal; Moore, David R.; Molloy, Katharine; Halliday, Lorna F.
Previous studies have suggested that negative feedback is more effective in driving learning than positive feedback. We investigated the effect on learning of providing varying amounts of negative and positive feedback while listeners attempted to discriminate between three identical tones; an impossible task that nevertheless produces robust learning. Four feedback conditions were compared during training: 90% positive feedback or 10% negative feedback informed the participants that they were doing equally well, while 10% positive or 90% negative feedback informed them they were doing equally badly. In all conditions the feedback was random in relation to the listeners’ responses (because the task was to discriminate three identical tones), yet both the valence (negative vs. positive) and the probability of feedback (10% vs. 90%) affected learning. Feedback that informed listeners they were doing badly resulted in better post-training performance than feedback that informed them they were doing well, independent of valence. In addition, positive feedback during training resulted in better post-training performance than negative feedback, but only positive feedback indicating listeners were doing badly on the task resulted in learning. As we have previously speculated, feedback that better reflected the difficulty of the task was more effective in driving learning than feedback that suggested performance was better than it should have been given perceived task difficulty. But contrary to expectations, positive feedback was more effective than negative feedback in driving learning. Feedback thus had two separable effects on learning: feedback valence affected motivation on a subjectively difficult task, and learning occurred only when feedback probability reflected the subjective difficulty. To optimize learning, training programs need to take into consideration both feedback valence and probability. PMID:25946173
Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R.; Amitay, Sygal
Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training. PMID:28701989
The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks on which the musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination.
Lametti, Daniel R; Krol, Sonia A; Shiller, Douglas M; Ostry, David J
The perception of speech is notably malleable in adults, yet alterations in perception seem to have little impact on speech production. However, we hypothesized that speech perceptual training might immediately influence speech motor learning. To test this, we paired a speech perceptual-training task with a speech motor-learning task. Subjects performed a series of perceptual tests designed to measure and then manipulate the perceptual distinction between the words head and had. Subjects then produced head with the sound of the vowel altered in real time so that they heard themselves through headphones producing a word that sounded more like had. In support of our hypothesis, the amount of motor learning in response to the voice alterations depended on the perceptual boundary acquired through perceptual training. The studies show that plasticity in adults' speech perception can have immediate consequences for speech production in the context of speech learning.
Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: 56 listeners (60-72 y/o; 35 with ARHL and 21 with normal hearing) participated in the study. The study used a crossover design with three groups (immediate-training, delayed-training, and no-training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training. Conclusions: ARHL did not preclude auditory perceptual learning but there was little generalization to
Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).
de Souza, Ana Cláudia Silva
Background: There is an accumulating body of evidence indicating that neuronal functional specificity to basic sensory stimulation is mutable and subject to experience. Although fMRI experiments have investigated changes in brain activity after relative to before perceptual learning, brain activity during perceptual learning has not been explored. This work investigated brain activity related to auditory frequency discrimination learning using a variational Bayesian approach for source localization, during simultaneous EEG and fMRI recording. We investigated whether the practice effects are determined solely by activity in stimulus-driven mechanisms or whether high-level attentional mechanisms, which are linked to the perceptual task, control the learning process. Results: The fMRI analyses revealed significant attention- and learning-related activity in the left and right superior temporal gyrus (STG) as well as the left inferior frontal gyrus (IFG). Current source localization of simultaneously recorded EEG data was estimated using a variational Bayesian method. Analysis of current localized to the left IFG and the right STG revealed gamma-band activity correlated with behavioral performance. Conclusions: Rapid improvement in task performance is accompanied by plastic changes in the sensory cortex as well as superior areas gated by selective attention. Together the fMRI and EEG results suggest that gamma-band activity in the right STG and left IFG plays an important role during perceptual learning.
Daikhin, Luba; Ahissar, Merav
Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.
Bernstein, Lynne E.; Eberhardt, Silvio P.; Auer, Edward T.
Training with audiovisual (AV) speech has been shown to promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. In Experiment 1, we investigated whether AV speech promotes auditory-only (AO) perceptual learning in prelingually deafened adults with late-acquired cochlear implants. Participants were assigned to learn associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then AO (AV-AO; or counter-balanced AO then AV, AO-AV, during Periods 1 then 2) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across all training, AO PA test scores improved (7.2 percentage points) as did identification of consonants in new untrained CVCVC stimuli (3.5 percentage points). However, there was evidence that AV training impeded immediate AO perceptual learning: During Period-1, training scores across AV and AO conditions were not different, but AO test scores were dramatically lower in the AV-trained participants. During Period-2 AO training, the AV-AO participants obtained significantly higher AO test scores, demonstrating their ability to learn the auditory speech. Across both orders of training, whenever training was AV, AO test scores were significantly lower than training scores. Experiment 2 repeated the procedures with vocoded speech and 43 normal-hearing adults. Following AV training, their AO test scores were as high as or higher than following AO training. Also, their CVCVC identification scores patterned differently than those of the cochlear implant users. In Experiment 1, initial consonants were most accurate, and in Experiment 2, medial consonants were most accurate. We suggest that our results are consistent with a multisensory reverse hierarchy theory, which predicts that, whenever possible, perceivers carry out perceptual tasks immediately based on the experience and biases they bring to the task. We
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples
Dosher, Barbara; Lu, Zhong-Lin
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
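The reweighting-of-readout account described above can be illustrated with a toy simulation: noisy sensory channels feed a decision unit whose readout weights are adjusted by a simple delta rule, so accuracy improves without any change in the sensory representation itself. All parameters here (channel count, noise level, learning rate) are hypothetical choices for illustration, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 20    # hypothetical bank of tuned sensory channels
n_trials = 2000
lr = 0.01          # delta-rule learning rate

# Two stimulus classes (+1/-1) drive the two halves of the channel
# bank in opposite directions; internal noise corrupts every response.
signal = np.concatenate([np.full(n_channels // 2, 0.2),
                         np.full(n_channels // 2, -0.2)])

w = rng.normal(0, 0.1, n_channels)   # initial (unlearned) readout weights
correct = []

for _ in range(n_trials):
    label = rng.choice([1.0, -1.0])
    r = label * signal + rng.normal(0, 0.5, n_channels)  # noisy responses
    decision = 1.0 if w @ r >= 0 else -1.0
    correct.append(decision == label)
    # Reweighting: move the readout toward the correct decision variable
    w += lr * (label - np.tanh(w @ r)) * r

early = np.mean(correct[:200])   # accuracy before much reweighting
late = np.mean(correct[-200:])   # accuracy after practice
```

With fixed sensory noise, the accuracy gain comes entirely from the decision stage, which is the sense in which reweighting models separate learning from early sensory plasticity.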
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This begs the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localisation, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualised spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Kellman, Philip J.; Garrigan, Patrick
We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual
Lynne E Bernstein
Training with audiovisual (AV) speech can promote auditory perceptual learning of vocoded acoustic speech by adults with normal hearing. Pre-/perilingually deafened adults rely on visual speech even when they also use a cochlear implant. This study investigated whether visual speech promotes auditory perceptual learning in these cochlear implant users. In Experiment 1, 28 prelingually deafened adults with late-acquired cochlear implants were assigned to learn paired associations between spoken disyllabic C(=consonant)V(=vowel)CVC nonsense words and nonsense pictures (fribbles), under AV and then under auditory-only (AO) (or counter-balanced AO then AV) training conditions. After training on each list of paired-associates (PA), testing was carried out AO. Across AV and AO training, AO PA test scores improved, as did identification of consonants in untrained CVCVC stimuli. However, whenever PA training was carried out with AV stimuli, AO test scores were steeply reduced. Experiment 2 repeated the experiment with 43 normal-hearing adults. Their AO test scores did not drop following AV PA training and even increased relative to scores following AO training. Normal-hearing participants' consonant identification scores also improved, but with a pattern that contrasted with that of the cochlear implant users: normal-hearing adults were most accurate for medial consonants, whereas cochlear implant users were most accurate for initial consonants. The results are interpreted within a multisensory reverse hierarchy theory, which predicts that perceptual tasks are carried out whenever possible based on immediate high-level perception without scrutiny of lower-level features. The theory implies that, based on their bias towards visual speech, cochlear implant participants learned the PAs with greater reliance on vision to the detriment of auditory perceptual learning. Normal-hearing participants' learning took advantage of the concurrence between auditory and visual
Besken, Miri; Mulligan, Neil W.
Judgments of learning (JOLs) are sometimes influenced by factors that do not impact actual memory performance. One recent proposal is that perceptual fluency during encoding affects metamemory and is a basis of metacognitive illusions. In the present experiments, participants identified aurally presented words that contained inter-spliced silences…
Weaver, Phyllis A.; Rosner, Jerome
This paper reports the outcomes of a correlational study that examined the relationships between visual and auditory perceptual skills, on the one hand, and comprehension that is independent of decoding, on the other. Five sets of test scores--a visual perceptual test (Coloured Progressive Matrices), an auditory perceptual test (Auditory Motor…
Bunton, Kate; Kent, Raymond D.; Duffy, Joseph R.; Rosenbek, John C.; Kent, Jane F.
Purpose: Darley, Aronson, and Brown (1969a, 1969b) detailed methods and results of auditory-perceptual assessment for speakers with dysarthrias of varying etiology. They reported adequate listener reliability for use of the rating system as a tool for differential diagnosis, but several more recent studies have raised concerns about listener…
Agus, Trevor R.; Carrión-Castillo, Amaia; Pressnitzer, Daniel; Ramus, Franck
Purpose: A phonological deficit is thought to affect most individuals with developmental dyslexia. The present study addresses whether the phonological deficit is caused by difficulties with perceptual learning of fine acoustic details. Method: A demanding test of nonverbal auditory memory, "noise learning," was administered to both…
Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.
Tsodyks, Misha; Gilbert, Charles
Sensory perception is a learned trait. The brain strategies we use to perceive the world are constantly modified by experience. With practice, we subconsciously become better at identifying familiar objects or distinguishing fine details in our environment. Current theoretical models simulate some properties of perceptual learning, but neglect the underlying cortical circuits. Future neural network models must incorporate the top-down alteration of cortical function by expectation or perceptual tasks. These newly found dynamic processes are challenging earlier views of static and feedforward processing of sensory information. PMID:15483598
de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.
We introduce Iterative Perceptual Learning (IPL), a novel approach for learning computational models for social behavior synthesis from corpora of human-human interactions. The IPL approach combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of
Kellman, Philip J.; Massey, Christine M.
Recent research indicates that perceptual learning (PL)--experience-induced changes in the way perceivers extract information--plays a larger role in complex cognitive tasks, including abstract and symbolic domains, than has been understood in theory or implemented in instruction. Here, we describe the involvement of PL in complex cognitive tasks…
Andrillon, Thomas; Kouider, Sid; Agus, Trevor; Pressnitzer, Daniel
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning, which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2), triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices, suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics. Copyright © 2015 Elsevier Ltd. All rights reserved.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Gottselig, J M; Hofer-Tinguely, G; Borbély, A A; Regel, S J; Landolt, H-P; Rétey, J V; Achermann, P
Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the "interference hypothesis," sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.
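Discrimination thresholds like those discussed above are conventionally estimated with an adaptive staircase. Below is a minimal 2-down/1-up sketch run against a simulated listener; the listener model and every number in it are hypothetical choices for illustration, not values from the study.

```python
import random

random.seed(1)

def simulated_listener(delta_hz, true_threshold_hz=5.0):
    """Hypothetical listener: probability of a correct response grows
    with the size of the frequency difference being discriminated."""
    p = 0.5 + 0.5 * min(delta_hz / (2 * true_threshold_hz), 1.0)
    return random.random() < p

delta = 40.0        # starting frequency difference (Hz)
step = 1.2          # multiplicative step size
direction = 0       # +1 = last move made the task easier, -1 = harder
streak = 0
reversals = []

while len(reversals) < 12:
    if simulated_listener(delta):
        streak += 1
        if streak == 2:            # 2-down: harder after two correct
            streak = 0
            if direction == +1:
                reversals.append(delta)
            direction = -1
            delta /= step
    else:
        streak = 0                 # 1-up: easier after any error
        if direction == -1:
            reversals.append(delta)
        direction = +1
        delta *= step

# The 2-down/1-up rule converges near 70.7% correct; averaging the
# final reversal points gives the threshold estimate.
threshold = sum(reversals[-8:]) / 8
```

The staircase homes in on the difference a listener gets right about 70.7% of the time, which is why training-induced threshold drops can be read directly off such tracks.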
Eisner, Frank; Melinger, Alissa; Weber, Andrea
The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598
Rydell, Robert J; Shiffrin, Richard M; Boucher, Kathryn L; Van Loo, Katie; Rydell, Michael T
Stereotype threat (ST) refers to a situation in which a member of a group fears that her or his performance will validate an existing negative performance stereotype, causing a decrease in performance. For example, reminding women of the stereotype "women are bad at math" causes them to perform more poorly on math questions from the SAT and GRE. Performance deficits can be of several types and be produced by several mechanisms. We show that ST prevents perceptual learning, defined in our task as an increasing rate of search for a target Chinese character in a display of such characters. Displays contained two or four characters and half of these contained a target. Search rate increased across a session of training for a control group of women, but not women under ST. Speeding of search is typically explained in terms of learned "popout" (automatic attraction of attention to a target). Did women under ST learn popout but fail to express it? Following training, the women were shown two colored squares and asked to choose the one with the greater color saturation. Superimposed on the squares were task-irrelevant Chinese characters. For women not trained under ST, the presence of a trained target on one square slowed responding, indicating that training had caused the learning of an attention response to targets. Women trained under ST showed no slowing, indicating that they had not learned such an attention response.
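The "rate of search" in the study above is conventionally the slope of response time against display size, and learned pop-out shows up as a flattening of that slope. A minimal sketch with made-up RT values:

```python
def search_slope(rt_by_setsize):
    """ms-per-item search rate from mean RTs at two display sizes."""
    (n1, rt1), (n2, rt2) = sorted(rt_by_setsize.items())
    return (rt2 - rt1) / (n2 - n1)

# Hypothetical mean RTs (ms) for the 2- and 4-item displays of the task
before_training = {2: 900.0, 4: 1100.0}
after_training = {2: 820.0, 4: 880.0}

# Training shrinks the per-item cost; fully learned pop-out would
# flatten the slope toward 0 ms/item.
assert search_slope(before_training) == 100.0
assert search_slope(after_training) == 30.0
```

On this measure, the control group's slope flattens across a session while the stereotype-threat group's does not, which is what the study means by ST preventing perceptual learning.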
Sohoglu, Ediz; Davis, Matthew H
Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
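The single-mechanism account above, minimization of prediction error, can be caricatured in a few lines: a prediction settles toward the input, and the error accumulated during settling stands in for evoked activity. Everything here (the feature vectors, step size, and the error-as-activity analogy) is an illustrative assumption, not the authors' computational model.

```python
import numpy as np

def settle(signal, prior, n_steps=50, lr=0.2):
    """Minimal predictive-coding sketch: a prediction is iteratively
    updated to reduce the error between it and the input. Returns the
    summed absolute prediction error over settling, a crude stand-in
    for evoked neural activity."""
    prediction = prior.copy()
    total_error = 0.0
    for _ in range(n_steps):
        error = signal - prediction
        total_error += np.abs(error).sum()
        prediction += lr * error        # gradient step on squared error
    return total_error

rng = np.random.default_rng(0)
speech = rng.normal(size=32)                      # degraded-speech features
flat_prior = np.zeros(32)                         # no prior knowledge
matched_prior = speech + rng.normal(0, 0.1, 32)   # prior from matching text

# An accurate prior leaves less prediction error to explain away,
# mirroring the reduced STG responses reported for matched text.
assert settle(speech, matched_prior) < settle(speech, flat_prior)
```

The same comparison covers learning: as the prior sharpens with experience, residual error (and hence simulated activity) falls, whereas physically clearer input changes the signal rather than the prior.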
Nazila Salary Majd
Background and Aim: Auditory-perceptual assessment of voice is a main approach in the diagnosis and therapy of voice disorders. Despite this, there are few Iranian studies about auditory-perceptual assessment of voice. The aim of the present study was the development and determination of validity and rater reliability of a Persian version of the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V). Methods: The qualitative content validity was determined by collecting 10 questionnaires from 9 experienced speech and language pathologists and a linguist. For reliability purposes, the voice samples of 40 dysphonic adults (neurogenic, and functional with and without laryngeal lesions; 20-45 years of age) and 10 normal healthy speakers were recorded. The samples included sustained vowels and reading of the 6 sentences of the Persian version of the Consensus Auditory-Perceptual Evaluation of Voice, called the ATSHA. Results: The qualitative content validity was confirmed for the developed Persian version. Cronbach's alpha was high (0.95). Intra-rater reliability coefficients ranged from 0.86 for overall severity to 0.42 for pitch; inter-rater reliability ranged from 0.85 for overall severity to 0.32 for pitch (p<0.05). Conclusion: The ATSHA can be used as a valid and reliable Persian scale for auditory-perceptual assessment of voice in adults.
... This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...
Schier, Ana Cândida; Berti, Larissa Cristina; Chacon, Lourenço
To investigate the perceptual-auditory and orthographic performances of students regarding identification of contrasts among the fricatives of Brazilian Portuguese, and to investigate the extent to which these two types of performances are related. Data from perceptual-auditory and orthographic performances of 20 children attending the two first grades of elementary education at a public school in Mallet (PR), Brazil, were analyzed. Data collection regarding auditory perception was based on the Assessment Tool in Speech Perception (PERCEFAL), using the software Perceval. Data collection regarding orthography was carried out through dictation of the same words used in the assessment tool PERCEFAL. We observed: more accuracy in perceptual-auditory than in orthographic skills; tendency of shorter response time and lesser variability in the perceptual-auditory hits than in the errors; mismatch of errors in orthographic and auditory perception, since, in perception, the highest percentage of errors involved the point of articulation of fricatives, while in orthography the highest percentage involved voicing. Although related to each other, perceptual-auditory and orthographic performances do not match term by term. Therefore, in clinical practice, attention should focus not only on the aspects that bring these two performances together, but also on the aspects that differentiate them.
BACKGROUND: An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. METHODOLOGY/PRINCIPAL FINDINGS: Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. CONCLUSIONS/SIGNIFICANCE: The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns
Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the
Neger, Thordis M; Rietveld, Toni; Janse, Esther
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
Waldron, K A; Saphire, D G
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts the auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Neither available before learning starts are the actions that the robot is expected to perform. SAIL learns the auditory commands and the desired actions from physical contacts with the environment including the trainers.
Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C
Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization - to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition. Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the
Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.
This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
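The Multidimensional Scaling mentioned above embeds stimuli in a low-dimensional space so that inter-point distances approximate rated dissimilarities. A minimal classical (Torgerson) MDS sketch follows; the dissimilarity matrix is invented for illustration and is not data from the paper:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions so that
    Euclidean distances approximate the dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]         # indices of the k largest
    l = np.clip(vals[idx], 0, None)          # guard against tiny negatives
    return vecs[:, idx] * np.sqrt(l)

# Hypothetical averaged dissimilarity ratings for four auditory stimuli:
# stimuli 0 and 1 are judged similar, as are 2 and 3.
d = np.array([[0.0, 0.2, 0.8, 0.9],
              [0.2, 0.0, 0.7, 0.8],
              [0.8, 0.7, 0.0, 0.3],
              [0.9, 0.8, 0.3, 0.0]])
coords = classical_mds(d, k=2)
print(coords.shape)  # (4, 2)
```

In the resulting configuration, stimuli rated as similar land close together, which is how MDS solutions are typically read off in perceptual studies.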
Friar, John T.
Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractibility, auditory discrimination, and visual discrimination.…
Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel
Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy for stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis considered the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed to acoustic stimulation of the impaired auditory abilities, in acco...
The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired.
Michael J Proulx
A sensory substitution device for blind persons aims to provide the missing visual input by converting images into a form that another modality can perceive, such as sound. Here I will discuss the perceptual learning and attentional mechanisms necessary for interpreting sounds produced by a device (The vOICe) in a visuospatial manner. Although some aspects of the conversion, such as relating vertical location to pitch, rely on natural crossmodal mappings, the extensive training required suggests that synthetic mappings are required to generalize perceptual learning to new objects and environments, and ultimately to experience visual qualia. Here I will discuss the effects of the conversion and training on perception and attention that demonstrate the synthetic nature of learning the crossmodal mapping. Sensorimotor experience may be required to facilitate learning, develop expertise, and to develop a form of synthetic synaesthesia.
Mishra, Srikanta K; Panda, Manasa R
Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.
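Gap detection thresholds of the kind assayed above are commonly estimated with an adaptive staircase. The sketch below is a generic 2-down/1-up procedure run against a simulated listener; the listener model and all parameter values are hypothetical, not taken from this study:

```python
def simulated_listener(gap_ms, true_threshold=3.0):
    """Stand-in for a real listener: detects the silent gap whenever it
    exceeds a fixed internal threshold (in milliseconds)."""
    return gap_ms >= true_threshold

def two_down_one_up(start=20.0, step=2.0, n_reversals=8):
    """2-down/1-up staircase: the gap shrinks after two consecutive correct
    responses and grows after one error; it converges near the 70.7%-correct
    point on the psychometric function."""
    gap, correct_in_row, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(gap):
            correct_in_row += 1
            if correct_in_row == 2:          # two correct -> make it harder
                correct_in_row = 0
                if direction == +1:          # track reversed: record it
                    reversals.append(gap)
                direction = -1
                gap = max(gap - step, 0.5)
        else:                                # one error -> make it easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(gap)
            direction = +1
            gap += step
    return sum(reversals[-6:]) / 6           # threshold = mean of last reversals

threshold = two_down_one_up()
print(round(threshold, 1))  # 3.0 for this deterministic simulated listener
```

With a real listener the responses are probabilistic, so the staircase oscillates around the threshold rather than locking onto it exactly; averaging the final reversals smooths out that variability.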
Zraick, Richard I.; Kempster, Gail B.; Connor, Nadine P.; Thibeault, Susan; Klaben, Bernice K.; Bursac, Zoran; Thrush, Carol R.; Glaze, Leslie E.
Purpose: The Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) was developed to provide a protocol and form for clinicians to use when assessing the voice quality of adults with voice disorders (Kempster, Gerratt, Verdolini Abbott, Barkmeier-Kramer, & Hillman, 2009). This study examined the reliability and the empirical validity of the…
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory-perceptual evaluation is the most commonly used clinical voice assessment method, and is often considered a gold standard for documentation of voice disorders. This view has arisen for many reasons, including the fact that voice quality is perceptual in nature and that the perceptual characteristics of voice have greater intuitive meaning and shared reality among listeners than do many instrumental measures. Other factors include limitations in the validity and reliability of instrumental methods and lack of agreement as to the most sensitive and specific instrumental measures of voice quality. Perceptual evaluation has, however, been heavily criticised because it is subjective. As a result, listener reliability is not always adequate and auditory-perceptual ratings can be confounded by factors such as the listener's shifting internal standards, listener experience, type of rating scale used and the voice sample being evaluated. This paper discusses these pros and cons of perceptual evaluation, and outlines clinical strategies and research approaches that may lead to improvements in the assessment of voice quality. In particular, clinicians are advised to use multiple methods of voice quality evaluation, and to include both subjective and objective evaluation tools. Copyright 2009 S. Karger AG, Basel.
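Listener reliability of the kind discussed above is often quantified with chance-corrected agreement statistics. A minimal Cohen's kappa sketch for two raters follows; the severity ratings are invented for illustration:

```python
import numpy as np

def cohens_kappa(r1, r2, n_categories):
    """Cohen's kappa: chance-corrected agreement between two raters who
    assign each sample to one of n_categories integer labels."""
    conf = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        conf[a, b] += 1                           # build confusion matrix
    conf /= conf.sum()                            # convert to proportions
    p_o = np.trace(conf)                          # observed agreement
    p_e = conf.sum(axis=1) @ conf.sum(axis=0)     # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical severity ratings (0-3) of ten voice samples by two listeners
rater1 = [0, 1, 1, 2, 3, 3, 2, 1, 0, 2]
rater2 = [0, 1, 2, 2, 3, 3, 2, 1, 0, 1]
print(round(cohens_kappa(rater1, rater2, 4), 2))  # 0.73
```

For ordinal scales such as severity ratings, a weighted kappa or an intraclass correlation coefficient is often preferred, since those statistics credit near-misses rather than treating all disagreements as equal.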
Sarro, Emma C; Sanes, Dan H
In humans, auditory perception reaches maturity over a broad age range, extending through adolescence. Despite this slow maturation, children are considered to be outstanding learners, suggesting that immature perceptual skills might actually be advantageous to improvement on an acoustic task as a result of training (perceptual learning). Previous non-human studies have not employed an identical task when comparing perceptual performance of young and mature subjects, making it difficult to assess learning. Here, we used an identical procedure on juvenile and adult gerbils to examine the perception of amplitude modulation (AM), a stimulus feature that is an important component of most natural sounds. On average, Adult animals could detect smaller fluctuations in amplitude (i.e., smaller modulation depths) than Juveniles, indicating immature perceptual skills in Juveniles. However, the population variance was much greater for Juveniles, a few animals displaying adult-like AM detection. To determine whether immature perceptual skills facilitated learning, we compared naïve performance on the AM detection task with the amount of improvement following additional training. The amount of improvement in Adults correlated with naïve performance: those with the poorest naïve performance improved the most. In contrast, the naïve performance of Juveniles did not predict the amount of learning. Those Juveniles with immature AM detection thresholds did not display greater learning than Adults. Furthermore, for several of the Juveniles with adult-like thresholds, AM detection deteriorated with repeated testing. Thus, immature perceptual skills in young animals were not associated with greater learning. (c) 2010 Wiley Periodicals, Inc.
Seitz, Aaron R; Yamagishi, Noriko; Werner, Birgit; Goda, Naokazu; Kawato, Mitsuo; Watanabe, Takeo
For more than a century, the process of stabilization has been a central issue in the research of learning and memory. Namely, after a skill or memory is acquired, it must be consolidated before it becomes resistant to disruption by subsequent learning. Although it is clear that there are many cases in which learning can be disrupted, it is unclear when learning something new disrupts what has already been learned. Herein, we provide two answers to this question with the demonstration that perceptual learning of a visual stimulus disrupts or interferes with the consolidation of a previously learned visual stimulus. In this study, we trained subjects on two different hyperacuity tasks and determined whether learning of the second task disrupted that of the first. We first show that disruption of learning occurs between visual stimuli presented at the same orientation in the same retinotopic location but not for the same stimuli presented at retinotopically disparate locations or different orientations at the same location. Second, we show that disruption from stimuli in the same retinotopic location is ameliorated if the subjects wait for 1 h before training on the second task. These results indicate that disruption, at least in visual learning, is specific to features of the tasks and that a temporal delay of 1 h can stabilize visual learning. This research shows that visual learning is susceptible to disruption and elucidates the processes by which the brain can consolidate learning and thus protect what is learned from being overwritten.
Albert, Guillaume; Renaud, Patrice; Chartier, Sylvain; Renaud, Lise; Sauvé, Louise; Bouchard, Stéphane
More and more immersive environments are developed to provide support for learning or training purposes. Ecological validity of such environments is usually based on learning performance comparisons between virtual environments and their genuine counterparts. Little is known about learning processes occurring in immersive environments. A new technique is proposed for testing perceptual learning during virtual immersion. This methodology relies upon eye-tracking technologies to analyze gaze behavior recorded in relation to virtual objects' features and tasks' requirements. It is proposed that perceptual learning mechanisms engaged could be detected through eye movements. In this study, nine subjects performed perceptual learning tasks in virtual immersion. Results obtained indicated that perceptual learning influences gaze behavior dynamics. More precisely, analysis revealed that fixation number and variability in fixation duration varied with perceptual learning level. Such findings could contribute in shedding light on learning mechanisms as well as providing additional support for validating virtual learning environments.
Seeba, Folkert; Schwartz, Joshua J.; Bee, Mark A.
The human auditory system perceptually restores short deleted segments of speech and other sounds (e.g. tones) when the resulting silent gaps are filled by a potential masking noise. When this phenomenon, known as ‘auditory induction’, occurs, listeners experience the illusion of hearing an ongoing sound continuing through the interrupting noise even though the perceived sound is not physically present. Such illusions suggest that a key function of the auditory system is to allow listeners to perceive complete auditory objects with incomplete acoustic information, as may often be the case in multisource acoustic environments. At present, however, we know little about the possible functions of auditory induction in the sound-mediated behaviours of animals. The present study used two-choice phonotaxis experiments to test the hypothesis that female grey treefrogs, Hyla chrysoscelis, experience the illusory perceptual restoration of discrete pulses in the male advertisement call when pulses are deleted and replaced by a potential masking noise. While added noise restored some attractiveness to calls with missing pulses, there was little evidence to suggest that the frogs actually experienced the illusion of perceiving the missing pulses. Instead, the added noise appeared to function as an acoustic appendage that made some calls more attractive than others as a result of sensory biases, the expression of which depended on the temporal order and acoustic structure of the added appendages. PMID:20514342
Li, Haishan; He, Qingshun
Ambiguity tolerance and perceptual learning styles are two influential factors underlying individual differences in EFL learning. This research explores the relationship between Chinese EFL learners' ambiguity tolerance and their preferred perceptual learning styles. The findings include (1) the learners are sensitive to English…
Iwarsson, Jenny; Petersen, Niels Reinholt
Objectives/Hypothesis: This study investigates the effect of consensus training of listeners on intrarater and interrater reliability and agreement of perceptual voice analysis. The use of such training, including a reference voice sample, could be assumed to make the internal standards held in memory common and more robust, which is of great importance to reduce the variability of auditory perceptual ratings. Study Design: A prospective design with testing before and after training. Methods: Thirteen students of audiologopedics served as listening subjects. The ratings were made using a multidimensional protocol with four-point equal-appearing interval scales. The stimuli consisted of text reading by authentic dysphonic patients. The consensus training for each perceptual voice parameter included (1) definition, (2) underlying physiology, (3) presentation of carefully selected sound examples…
In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.
Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio
The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔFTONE, TONE condition) but also in the amplitude modulation rate ("AM cue": ΔFAM, AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔFAM and ΔFTONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between, or sensitive to, the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
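The ABA_ stimulus described above can be synthesized in a few lines: two tones and a silent gap concatenated into a triplet, repeated many times. The frequencies, tone duration, sampling rate, and AM depth below are illustrative choices, not the study's parameters.

```python
import numpy as np

def tone(freq_hz, dur_s, sr=16000, am_rate_hz=None):
    """Pure tone; optionally sinusoidally amplitude-modulated at 100% depth."""
    t = np.arange(int(sr * dur_s)) / sr
    carrier = np.sin(2 * np.pi * freq_hz * t)
    if am_rate_hz is not None:
        carrier *= 0.5 * (1 - np.cos(2 * np.pi * am_rate_hz * t))
    return carrier

def aba_sequence(freq_a, freq_b, n_triplets, tone_dur=0.05, sr=16000,
                 am_a=None, am_b=None):
    """Concatenate ABA_ triplets; '_' is a silent gap of one tone duration."""
    silence = np.zeros(int(sr * tone_dur))
    a = tone(freq_a, tone_dur, sr, am_a)
    b = tone(freq_b, tone_dur, sr, am_b)
    triplet = np.concatenate([a, b, a, silence])
    return np.tile(triplet, n_triplets)

# 400 repetitions, as in the long-duration sequence described above;
# a spectral cue (TONE condition) is simulated with two carrier frequencies.
seq = aba_sequence(500.0, 600.0, n_triplets=400)
```

An AM-cue (AM condition) variant would instead use a common carrier with different `am_a`/`am_b` modulation rates.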
Green, C Shawn; Li, Renjie; Bavelier, Daphne
Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.
Skoe, E; Krizman, J; Spitzer, E; Kraus, N
To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
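The sound-to-sound probabilities that distinguish the patterned from the pseudo-randomized sequence are transitional probabilities, which can be estimated from bigram counts. The sound labels and triplet structure below are illustrative assumptions in the style of classic statistical-learning streams, not the study's actual stimuli.

```python
import random
from collections import Counter

def transitional_probabilities(seq):
    """Estimate P(next sound | current sound) from bigram counts."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# A 'patterned' stream built from two hypothetical triplets: transitions
# inside a triplet are certain, transitions across triplet boundaries are not.
random.seed(0)
triplets = [["s1", "s2", "s3"], ["s4", "s5", "s6"]]
stream = [s for _ in range(200) for s in random.choice(triplets)]
tp = transitional_probabilities(stream)
```

A pseudo-randomized control sequence, by contrast, would flatten these probabilities so that no transition is more predictable than another.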
Action video game playing substantially improves visual performance; however, the source of this improvement remains unclear. Here we use the equivalent external noise technique (Lu & Dosher, 1998) to characterize the mechanism by which action video games may facilitate performance. In the first study, action video game players (VGPs) and non-action video game players (NVGPs) performed a foveal orientation identification task at different external noise levels. VGPs showed lower thresholds than NVGPs, with a marked difference across noise levels. Perceptual Template Model fitting indicated an 11% additive noise reduction and a 25% external noise exclusion. The causal effect of action video game playing was confirmed in a subsequent 50-hour training study. This work establishes that playing action video games leads to robust internal additive noise reduction and external noise exclusion, consistent with the use of better-matched perceptual templates. To investigate the discrepancy between our results and previous foveal perceptual learning research (Lu et al., 2004), the same stimuli from that experiment were used in a perceptual learning experiment, and we found the same pattern of perceptual template improvement. This suggests that both action video game playing and perceptual learning can lead to better perceptual templates.
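The logic of the equivalent external noise technique can be illustrated with a simplified linear-amplifier threshold model: threshold grows with the quadrature sum of external and additive internal noise, so an additive-noise reduction lowers thresholds mainly where external noise is low. The noise levels and d'/gain parameters below are illustrative, not fitted values from the study; the full Perceptual Template Model additionally includes multiplicative noise and a nonlinearity.

```python
import numpy as np

def threshold(sigma_ext, sigma_add, d_prime=1.5, beta=1.0):
    """Linear-amplifier simplification of the equivalent-input-noise model:
    signal threshold needed for sensitivity d' given template gain beta."""
    return (d_prime / beta) * np.sqrt(sigma_ext**2 + sigma_add**2)

# Threshold-versus-external-noise (TvC) curves for hypothetical observers:
ext_levels = np.array([0.0, 0.02, 0.04, 0.08, 0.16, 0.33])
t_nvgp = threshold(ext_levels, sigma_add=0.05)
t_vgp = threshold(ext_levels, sigma_add=0.05 * (1 - 0.11))  # 11% less additive noise
```

In this sketch the VGP advantage shrinks as external noise dominates; modeling external noise exclusion would additionally scale `sigma_ext` down for the VGP curve.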
Lortie, Catherine L; Deschamps, Isabelle; Guitton, Matthieu J; Tremblay, Pascale
The factors that influence the evaluation of voice in adulthood, as well as the consequences of such evaluation on social interactions, are not well understood. Here, we examined the effect of listeners' age and the effect of talker age, sex, and smoking status on the auditory-perceptual evaluation of voice, voice-related psychosocial attributions, and perceived speech tempo. We also examined the voice dimensions affecting the propensity to engage in social interactions. Twenty-five younger (age 19-37 years) and 25 older (age 51-74 years) healthy adults participated in this cross-sectional study. Their task was to evaluate the voice of 80 talkers. Statistical analyses revealed limited effects of the age of the listener on voice evaluation. Specifically, older listeners provided relatively more favorable voice ratings than younger listeners, mainly in terms of roughness. In contrast, the age of the talker had a broader impact on voice evaluation, affecting auditory-perceptual evaluations, psychosocial attributions, and perceived speech tempo. Some of these talker differences were dependent upon the sex of the talker and his or her smoking status. Finally, the results also show that voice-related psychosocial attribution was more strongly associated with the propensity of the listener to engage in social interactions with a person than auditory-perceptual dimensions and perceived speech tempo, especially for the younger adults. These results suggest that age has a broad influence on voice evaluation, with a stronger impact for talker age compared with listener age. While voice-related psychosocial attributions may be an important determinant of social interactions, perceived voice quality and speech tempo appear to be less influential. https://doi.org/10.23641/asha.5844102.
Szychowska, Malina; Eklund, Rasmus; Nilsson, Mats E; Wiens, Stefan
Auditory change detection has been studied extensively with mismatch negativity (MMN), an event-related potential. Because it is unresolved if the duration MMN depends on sound pressure level (SPL), we studied effects of different SPLs (56, 66, and 76 dB) on the duration MMN. Further, previous research suggests that the MMN is reduced by a concurrent visual task. Because a recent behavioral study found that high visual perceptual load strongly reduced detection sensitivity to irrelevant sounds, we studied if the duration MMN is reduced by load, and if this reduction is stronger at low SPLs. Although a duration MMN was observed for all SPLs, the MMN was apparently not moderated strongly by SPL, perceptual load, or their interaction, because all 95% CIs overlapped zero. In a contrast analysis of the MMN (across loads) between the 56-dB and 76-dB groups, evidence (BF = 0.31) favored the null hypothesis that duration MMN is unaffected by a 20-dB increase in SPL. Similarly, evidence (BF = 0.19) favored the null hypothesis that effects of perceptual load on the duration MMN do not change with a 20-dB increase in SPL. However, evidence (BF = 3.12) favored the alternative hypothesis that the effect of perceptual load in the present study resembled the overall effect in a recent meta-analysis. When the present findings were combined with the meta-analysis, the effect of load (low minus high) was -0.43 μV, 95% CI [-0.64, -0.22], suggesting that the duration MMN decreases with load. These findings provide support for a sensitive monitoring system of the auditory environment. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Douglas M Shiller
BACKGROUND: Hearing ability is essential for normal speech development, however the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children. METHODOLOGY/PRINCIPAL FINDINGS: In the present study, we manipulated auditory feedback during speech production in a group of 9-11-year old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults, however the children showed no reliable compensatory effect on their perceptual representations. CONCLUSIONS: The results indicate that 9-11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.
Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz
A vast amount of previous work has consistently revealed that professional music training is associated with functional and structural alterations of auditory-related brain regions. Meanwhile, there is also an increasing body of evidence showing that musicianship facilitates segmental as well as supra-segmental aspects of speech processing. Based on this evidence, we addressed a novel research question, namely whether professional music training has an influence on the perceptual learning of speech sounds. In the context of an EEG experiment, we presented auditory pseudoword-chimeras, manipulated in terms of spectral- or envelope-related acoustic information, to a group of professional musicians and non-musicians. During EEG measurements, participants were requested to assign the auditorily presented pseudoword-chimeras to one of four visually presented templates. As expected, both groups showed behavioural learning effects during the time course of the experiment. These learning effects were associated with an increase in accuracy, a decrease in reaction time, as well as a decrease in the P2-like microstate duration in both groups. Notably, the musicians showed increased learning performance compared to the controls during the first two runs of the spectral condition. This perceptual learning effect, which varies as a function of musical expertise, was reflected by a reduction of the P2-like microstate duration. Results may mirror transfer effects from musical training to the processing of spectral information in speech sounds. Hence, this study provides the first evidence for a relationship between changes in microstates, musical expertise, and perceptual verbal learning mechanisms.
Maryn, Youri; Roy, Nelson
Auditory-perceptual evaluation of dysphonia may be influenced by the type of speech/voice task used to render judgements during the clinical evaluation, i.e., sustained vowels versus continuous speech. This study explored (a) differences in listener dysphonia severity ratings on the basis of speech/voice tasks, (b) the influence of speech/voice task on dysphonia severity ratings of stimuli that combined sustained vowels and continuous speech, and (c) the differences in inter-rater reliability of dysphonia severity ratings between both speech tasks. Five experienced listeners rated overall dysphonia severity in sustained vowel, continuous speech and concatenated speech samples elicited from 39 subjects with various voice disorders and degrees of hoarseness. Data confirmed that sustained vowels are rated significantly more dysphonic than continuous speech. Furthermore, dysphonia severity in concatenated speech samples is least determined by the sustained vowel. Finally, no significant difference was found in inter-rater reliability between dysphonia severity ratings of sustained vowels versus continuous speech. Based upon the results, both types of speech/voice tasks (i.e., sustained vowel and continuous speech) should be elicited and judged by clinicians in the auditory-perceptual rating of dysphonia severity.
Li, Jin-rang; Sun, Yan-yan; Xu, Wen
To design a speech voice sample text containing all the phonemes in Mandarin for subjective auditory perceptual evaluation of voice disorders. The design principles were that the short text should include the 21 initials and 39 finals, so as to cover all the phonemes in Mandarin, and that the text should be meaningful. A short text of 155 Chinese words was composed; it included 21 initials and 38 finals (the final ê was not included because it is rarely used in Mandarin). The text also covered 17 light tones and one "Erhua". The constituent ratios of the initials and finals presented in this short text were statistically similar to those in Mandarin according to the method of similarity of the sample and population (r = 0.742, P … text were statistically not similar to those in Mandarin (r = 0.731, P > 0.05). A speech voice sample text with all the phonemes in Mandarin was composed. The constituent ratios of the initials and finals presented in this short text are similar to those in Mandarin. Its value for subjective auditory perceptual evaluation of voice disorders needs further study.
Moore, David R.; Halliday, Lorna F.; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have…
Previous studies have shown that sodium salicylate (SS) activates not only central auditory structures, but also nonauditory regions associated with emotion and memory. To identify electrophysiological changes in the nonauditory regions, we recorded sound-evoked local field potentials and multiunit discharges from the striatum, amygdala, hippocampus, and cingulate cortex after SS treatment. The SS treatment produced behavioral evidence of tinnitus and hyperacusis. Physiologically, the treatment significantly enhanced sound-evoked neural activity in the striatum, amygdala, and hippocampus, but not in the cingulate. The enhanced sound-evoked response could be linked to the hyperacusis-like behavior. Further analysis showed that the enhancement of sound-evoked activity occurred predominantly at the midfrequencies, likely reflecting shifts of neurons towards the midfrequency range after SS treatment, as observed in our previous studies of the auditory cortex and amygdala. The increased number of midfrequency neurons would lead to a relatively higher number of total spontaneous discharges in the midfrequency region, even though the mean discharge rate of each neuron may not increase. This tonotopic overactivity in the midfrequency region in quiet may potentially lead to a tonal sensation at midfrequencies (the tinnitus). The neural changes in the amygdala and hippocampus may also contribute to the negative affect that patients associate with their tinnitus.
Cai, Shang; Xiao, Yeming; Pan, Jielin; Zhao, Qingwei; Yan, Yonghong
Mel Frequency Cepstral Coefficients (MFCC) are the most popular acoustic features used in automatic speech recognition (ASR), mainly because the coefficients capture the most useful information of the speech and fit well with the assumptions used in hidden Markov models. As is well known, MFCCs already employ several principles which have known counterparts in the peripheral properties of human hearing: decoupling across frequency, mel-warping of the frequency axis, log-compression of energy, etc. It is natural to introduce more mechanisms in the auditory periphery to improve the noise robustness of MFCC. In this paper, a k-nearest neighbors based frequency masking filter is proposed to reduce the audibility of spectra valleys which are sensitive to noise. Besides, Moore and Glasberg's critical band equivalent rectangular bandwidth (ERB) expression is utilized to determine the filter bandwidth. Furthermore, a new bandpass infinite impulse response (IIR) filter is proposed to imitate the temporal masking phenomenon of the human auditory system. These three auditory perceptual mechanisms are combined with the standard MFCC algorithm in order to investigate their effects on ASR performance, and a revised MFCC extraction scheme is presented. Recognition performances with the standard MFCC, RASTA perceptual linear prediction (RASTA-PLP) and the proposed feature extraction scheme are evaluated on a medium-vocabulary isolated-word recognition task and a more complex large vocabulary continuous speech recognition (LVCSR) task. Experimental results show that consistent robustness against background noise is achieved on these two tasks, and the proposed method outperforms both the standard MFCC and RASTA-PLP.
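As a sketch of the baseline pipeline the paper builds on, standard MFCC extraction (framing, Hamming window, power spectrum, mel filterbank, log compression, DCT-II) can be written as follows. The paper's k-nearest-neighbors masking filter, ERB bandwidths, and temporal-masking IIR filter are not implemented here, and all parameter values are conventional defaults rather than the authors' settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    """Triangular filters spaced linearly on the mel-warped frequency axis."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filters=26, n_ceps=13):
    signal = np.asarray(signal, dtype=float)
    # framing (25 ms frames, 10 ms hop at 16 kHz) + Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    # power spectrum -> mel filterbank energies -> log compression
    power = np.abs(np.fft.rfft(frames, n=512)) ** 2 / 512
    energies = np.log(power @ mel_filterbank(n_filters, 512, sr).T + 1e-10)
    # DCT-II decorrelates the log energies (unnormalized, fine for a sketch)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filters)))
    return energies @ dct.T
```

The paper's modifications would slot in between the power-spectrum and filterbank stages (frequency masking) and along the frame axis (temporal masking).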
Xi, Jie; Jia, Wu-Li; Feng, Li-Xia; Lu, Zhong-Lin; Huang, Chang-Bing
Amblyopia is a developmental disorder that results in both monocular and binocular deficits. Although traditional treatment in clinical practice (i.e., refractive correction, or occlusion by patching and penalization of the fellow eye) is effective in restoring monocular visual acuity, there is little information on how binocular function, especially stereopsis, responds to traditional amblyopia treatment. We aimed to evaluate the effects of perceptual learning on stereopsis in observers with amblyopia in the current study. Eleven observers (21.1 ± 5.1 years, six females) with anisometropic or ametropic amblyopia were trained to judge depth in 10 to 13 sessions. Red-green glasses were used to present three different texture anaglyphs with different disparities but a fixed exposure duration. Stereoacuity was assessed with the Fly Stereo Acuity Test and visual acuity with the Chinese Tumbling E Chart before and after training. Averaged across observers, training significantly reduced the disparity threshold from 776.7″ to 490.4″ (P … amblyopia. These results, together with previous evidence, suggest that structured monocular and binocular training might be necessary to fully recover degraded visual functions in amblyopia.
Chen, Yi-Chuan; Yeh, Su-Ling; Spence, Charles
We report a series of experiments utilizing the binocular rivalry paradigm designed to investigate whether auditory semantic context modulates visual awareness. Binocular rivalry refers to the phenomenon whereby when two different figures are presented to each eye, observers perceive each figure as being dominant in alternation over time. The results demonstrate that participants report a particular percept as being dominant for less of the time when listening to an auditory soundtrack that happens to be semantically congruent with the other alternative (i.e., the competing) percept, as compared to when listening to an auditory soundtrack that was irrelevant to both visual figures (Experiment 1A). When a visually presented word was provided as a semantic cue, no such semantic modulatory effect was observed (Experiment 1B). We also demonstrate that the crossmodal semantic modulation of binocular rivalry was robustly observed irrespective of participants' attentional control over the dichoptic figures and the relative luminance contrast between the figures (Experiments 2A and 2B). The pattern of crossmodal semantic effects reported here cannot simply be attributed to the meaning of the soundtrack guiding participants' attention or biasing their behavioral responses. Hence, these results support the claim that crossmodal perceptual information can serve as a constraint on human visual awareness in terms of their semantic congruency.
Rance, G; Corben, L A; Du Bourg, E; King, A; Delatycki, M B
Friedreich ataxia (FRDA) is a neurodegenerative disease affecting motor and sensory systems. This study aimed to investigate the presence and perceptual consequences of auditory neuropathy (AN) in affected individuals and to examine the use of personal FM systems to ameliorate the resulting communication difficulties. Ten individuals with FRDA underwent a battery of auditory function tests and their results were compared with those of a cohort of matched controls. Friedreich ataxia subjects were then fitted with personal FM listening devices and evaluated over a 6-week period. Basic auditory processing was affected, with each FRDA individual showing poorer temporal processing and figure/ground discrimination than their matched control. Speech perception in the presence of background noise was also impaired, with FRDA listeners typically able to access only around 50% of the information available to their normal peers. The use of personal FM listening devices did, however, dramatically improve their ability to hear and communicate in everyday listening situations. Copyright © 2010 IBRO. Published by Elsevier Ltd. All rights reserved.
Tremblay, Kelly L.; Ross, Bernhard; Inoue, Kayo; McClannahan, Katrina; Collet, Gregory
Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences, including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What's more, these effects were retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600–900 ms post-stimulus onset, post-training, exclusively for the group that learned to identify the pre-voicing contrast.
Lohmander, Anette; Hagberg, Emilie; Persson, Christina
-Sum) and of auditory perceptual ratings of velopharyngeal competence (VPC-Rate). Available VPC-Sum scores and judgments of associated variables (hypernasality, audible nasal air leakage, weak pressure consonants, and non-oral articulation) from 391 5-year olds with repaired cleft palate (the Scandcleft project) were...
Aidan Peter Murphy
The visual system exploits past experience at multiple timescales to resolve perceptual ambiguity in the retinal image. For example, perception of a bistable stimulus can be biased towards one interpretation over another when preceded by a brief presentation of a disambiguated version of the stimulus (positive priming) or through intermittent presentations of the ambiguous stimulus (stabilization). Similarly, prior presentations of unambiguous stimuli can be used to explicitly train a long-lasting association between a percept and a retinal location (perceptual association). These phenomena have typically been regarded as independent processes, with short-term biases attributed to perceptual memory and longer-term biases described as associative learning. Here we tested for interactions between these two forms of experience-dependent perceptual bias and demonstrate that short-term processes strongly influence long-term outcomes. We first demonstrate that the establishment of long-term perceptual contingencies does not require explicit training by unambiguous stimuli, but can arise spontaneously during the periodic presentation of brief, ambiguous stimuli. Using rotating Necker cube stimuli, we observed enduring, retinotopically specific perceptual biases that were expressed from the outset and remained stable for up to forty minutes, consistent with the known phenomenon of perceptual stabilization. Further, bias was undiminished after a break period of five minutes, but was readily reset by interposed periods of continuous, as opposed to periodic, ambiguous presentation. Taken together, the results demonstrate that perceptual biases can arise naturally and may principally reflect the brain's tendency to favor recent perceptual interpretation at a given retinal location. Further, they suggest that an association between retinal location and perceptual state, rather than a physical stimulus, is sufficient to generate long-term biases in perceptual…
Davies-Thompson, Jodie; Fletcher, Kimberley; Hills, Charlotte; Pancaroglu, Raika; Corrow, Sherryse L; Barton, Jason J S
Despite many studies of acquired prosopagnosia, there have been only a few attempts at its rehabilitation, all in single cases, with a variety of mnemonic or perceptual approaches, and of variable efficacy. In a cohort with acquired prosopagnosia, we evaluated a perceptual learning program that incorporated variations in view and expression, which was aimed at training perceptual stages of face processing with an emphasis on ecological validity. Ten patients undertook an 11-week face training program and an 11-week control task. Training required shape discrimination between morphed facial images, whose similarity was manipulated by a staircase procedure to keep training near a perceptual threshold. Training progressed from blocks of neutral faces in frontal view through increasing variations in view and expression. Whereas the control task did not change perception, training improved perceptual sensitivity for the trained faces and generalized to new untrained expressions and views of those faces. There was also a significant transfer to new faces. Benefits were maintained over a 3-month period. Training efficacy was greater for those with more perceptual deficits at baseline. We conclude that perceptual learning can lead to persistent improvements in face discrimination in acquired prosopagnosia. This reflects both acquisition of new skills that can be applied to new faces as well as a degree of overlearning of the stimulus set at the level of 3-D expression-invariant representations.
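The adaptive staircase used in this training program can be illustrated with a minimal sketch (the rule, step size, and parameter values here are hypothetical; the abstract does not specify them). A 2-down/1-up staircase makes the morph discrimination harder after two consecutive correct responses and easier after each error, so difficulty hovers near the observer's perceptual threshold:

```python
def staircase(respond, start=1.0, step=0.1, floor=0.0, trials=60):
    """2-down/1-up adaptive staircase (illustrative parameters).

    respond(level) -> True if the observer discriminates the morphed
    faces correctly at this difficulty level.  Two consecutive correct
    responses make the task harder; a single error makes it easier.
    """
    level, streak, history = start, 0, []
    for _ in range(trials):
        history.append(level)
        if respond(level):
            streak += 1
            if streak == 2:                 # 2 correct -> step down (harder)
                level = max(floor, level - step)
                streak = 0
        else:                               # 1 error -> step up (easier)
            level += step
            streak = 0
    return history

# Toy observer whose threshold sits near 0.55: the track converges there.
track = staircase(lambda lv: lv >= 0.55)
```

A 2-down/1-up rule converges on roughly the 70.7%-correct point of the psychometric function, which is one standard way of keeping training "near a perceptual threshold" as described.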
Auditory perceptual and visual-spatial characteristics of subjective tinnitus evoked by eye gaze were studied in two adult human subjects. This uncommon form of tinnitus occurred approximately 4-6 weeks following neurosurgery for gross total excision of space-occupying lesions of the cerebellopontine angle, and hearing was lost in the operated ear. In both cases, the gaze-evoked tinnitus was characterized as being tonal in nature, with pitch and loudness percepts remaining constant as long as the same horizontal or vertical eye directions were maintained. Tinnitus was absent when the eyes were in a neutral, head-referenced position with subjects looking straight ahead. The results and implications of ophthalmological, standard and modified visual field assessment, pure-tone audiometric assessment, spontaneous otoacoustic emission testing and detailed psychophysical assessment of pitch and loudness are discussed.
Zhou, Peiyun; Christianson, Kiel
Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers' attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading.
Brown, Rachel M.; Palmer, Caroline
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of…
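The two dependent measures described above lend themselves to a compact sketch (function names and the toy data are illustrative, not the authors'): pitch accuracy as the percentage of correct pitches produced, and temporal regularity as the variability of quarter-note interonset intervals:

```python
def pitch_accuracy(produced, target):
    """Percentage of produced pitches that match the target melody."""
    hits = sum(p == t for p, t in zip(produced, target))
    return 100.0 * hits / len(target)

def ioi_variability(onsets_ms):
    """Coefficient of variation of interonset intervals; lower values
    mean more regular (temporally stable) performance."""
    iois = [b - a for a, b in zip(onsets_ms, onsets_ms[1:])]
    mean = sum(iois) / len(iois)
    sd = (sum((x - mean) ** 2 for x in iois) / len(iois)) ** 0.5
    return sd / mean

pitch_accuracy(list("CDEFG"), list("CDEFC"))   # 80.0 -- 4 of 5 pitches correct
ioi_variability([0, 500, 1010, 1490, 2000])    # small value: near-isochronous
```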
Bejjanki, Vikranth R.; Beck, Jeffrey M.; Lu, Zhong-Lin; Pouget, Alexandre
Extensive training on simple tasks like fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous neural models have argued that perceptual learning is the result of sharpening and amplification of tuning curves in early visual areas. However, these models are at odds with the conclusions of psychophysical experiments manipulating external noise, which argue for improved decision making, presumably in later visual areas. Here, we explore the possibility that perceptual learning for fine orientation discrimination is due to improved probabilistic inference in early visual areas. We show that this mechanism captures both the changes in response properties observed in early visual areas and the changes in performance observed in psychophysical experiments. We also suggest that sharpening and amplification of tuning curves may play only a minor role in improving performance, in comparison to the role played by the reshaping of inter-neuronal correlations.
Kellman, Philip J
Recent advances in the learning sciences offer remarkable potential to improve medical education and maximize the benefits of emerging medical technologies. This article describes 2 major innovation areas in the learning sciences that apply to simulation and other aspects of medical learning: Perceptual learning (PL) and adaptive learning technologies. PL technology offers, for the first time, systematic, computer-based methods for teaching pattern recognition, structural intuition, transfer, and fluency. Synergistic with PL are new adaptive learning technologies that optimize learning for each individual, embed objective assessment, and implement mastery criteria. The author describes the Adaptive Response-Time-based Sequencing (ARTS) system, which uses each learner's accuracy and speed in interactive learning to guide spacing, sequencing, and mastery. In recent efforts, these new technologies have been applied in medical learning contexts, including adaptive learning modules for initial medical diagnosis and perceptual/adaptive learning modules (PALMs) in dermatology, histology, and radiology. Results of all these efforts indicate the remarkable potential of perceptual and adaptive learning technologies, individually and in combination, to improve learning in a variety of medical domains.
Neger, T.M.; Rietveld, A.C.M.; Janse, E.
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech…
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in the NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.
Reetzke, Rachel; Maddox, W. Todd; Chandrasekaran, Bharath
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study is two-fold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence, into early adulthood; and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. Sixty participants with normal hearing, 20 children (age range, 7–12), 21 adolescents (age range, 13–19), and 19 young adults (age range, 20–23), learned to categorize novel dynamic ripple sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied Gabor patches in the visual domain. Results revealed that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e. a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning.
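The conjunctive rule-based strategy identified by the modeling can be sketched abstractly (the dimension names and criterion values below are illustrative assumptions, not taken from the study): a stimulus is assigned to the target category only if it exceeds a criterion on both stimulus dimensions, in contrast to a simpler rule that consults one dimension alone.

```python
def conjunctive_rule(spectral, temporal, c_spec=0.5, c_temp=0.5):
    """Conjunctive rule: respond 'A' only if the sound exceeds the
    criterion on BOTH dimensions; otherwise 'B'.  Criteria illustrative."""
    return "A" if spectral > c_spec and temporal > c_temp else "B"

def unidimensional_rule(spectral, c_spec=0.5):
    """Simpler rule that consults a single dimension (suboptimal when
    the category structure is conjunctive)."""
    return "A" if spectral > c_spec else "B"

# A stimulus high on the spectral dimension but low on the temporal one:
stim = (0.8, 0.3)
conjunctive_rule(*stim)       # 'B' -- both criteria must be met
unidimensional_rule(stim[0])  # 'A' -- the one-dimension rule errs here
```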
Mitchell, Chris; Hall, Geoffrey
We present a review of recent studies of perceptual learning conducted with nonhuman animals. The focus of this research has been to elucidate the mechanisms by which mere exposure to a pair of similar stimuli can increase the ease with which those stimuli are discriminated. These studies establish an important role for 2 mechanisms, one involving inhibitory associations between the unique features of the stimuli, the other involving a long-term habituation process that enhances the relative salience of these features. We then examine recent work investigating equivalent perceptual learning procedures with human participants. Our aim is to determine the extent to which the phenomena exhibited by people are susceptible to explanation in terms of the mechanisms revealed by the animal studies. Although we find no evidence that associative inhibition contributes to the perceptual learning effect in humans, initial detection of unique features (those that allow discrimination between 2 similar stimuli) appears to depend on a habituation process. Once the unique features have been detected, a tendency to attend to those features and to learn about their properties enhances subsequent discrimination. We conclude that the effects obtained with humans engage mechanisms additional to those seen in animals but argue that, for the most part, these have their basis in learning processes that are common to animals and people. In a final section, we discuss some implications of this analysis of perceptual learning for other aspects of experimental psychology and consider some potential applications.
Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R
... impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals, however, a limitation of most perceptual learning...
Research of visual perceptual learning has illuminated the flexibility of processing in the visual system and provides insights into therapeutic approaches to remediating some components of low vision. A key observation from research of perceptual learning is that effects of training are often highly specific to the attributes of the trained stimuli. This observation has been a blessing to basic research, providing important constraints to models of learning, but is a curse to translational research, which has the goal of creating therapies that generalize widely across visual tasks and stimuli. Here we suggest that the curse of specificity can be overcome by adopting a different experimental framework than is standard in the field. Namely, translational studies should integrate many approaches together and sacrifice mechanistic understanding to gain clinical relevance. To validate this argument, we review research from our lab and others and present new data that together show how perceptual learning on basic stimuli can lead to improvements on standard vision tests as well as real-world vision use, such as improved reading and even improved sports performance. Furthermore, we show evidence that this integrative approach to perceptual learning can ameliorate the effects of presbyopia and provides promise to improve visual function for individuals suffering from low vision.
Dinse, Hubert R; Kattenstroth, J C; Lenz, M; Tegenthoff, M; Wolf, O T
Cortisol, the primary glucocorticoid (GC) in humans, influences neuronal excitability and plasticity by acting on mineralocorticoid and glucocorticoid receptors. Cellular studies demonstrated that elevated GC levels affect neuronal plasticity, for example through a reduction of hippocampal long-term potentiation (LTP). At the behavioural level, after treatment with GCs, numerous studies have reported impaired hippocampal function, such as impaired memory retrieval. In contrast, relatively little is known about the impact of GCs on cortical plasticity and perceptual learning in adult humans. Therefore, in this study, we explored the impact of elevated GC levels on human perceptual learning. To this aim, we used a training-independent learning approach, where lasting changes in human perception can be induced by applying passive repetitive sensory stimulation (rss), the timing of which was determined from cellular LTP studies. In our placebo-controlled double-blind study, we used tactile LTP-like stimulation to induce improvements in tactile acuity (spatial two-point discrimination). Our results show that a single administration of hydrocortisone (30 mg) completely blocked rss-induced changes in two-point discrimination. In contrast, the placebo group showed the expected rss-induced increase in two-point discrimination of over 14%. Our data demonstrate that high GC levels inhibit rss-induced perceptual learning. We suggest that the suppression of LTP, as previously reported in cellular studies, may explain the perceptual learning impairments observed here.
Is stimulus-specific perceptual learning the result of extended practice or does it emerge early in the time course of learning? We examined this issue by manipulating the amount of practice given on a face identification task on Day 1, and altering the familiarity of stimuli on Day 2. We found that a small number of trials was sufficient to produce stimulus-specific perceptual learning of faces: on Day 2, response accuracy decreased by the same amount for novel stimuli regardless of whether observers practiced 105 or 840 trials on Day 1. Current models of learning assume early procedural improvements followed by late stimulus-specific gains. Our results show that stimulus-specific and procedural improvements are distributed throughout the time course of learning.
Özcebe, Esra; Aydinli, Fatma Esen; Tiğrak, Tuğçe Karahan; İncebay, Önal; Yilmaz, Taner
The main purpose of this study was to culturally adapt the Consensus Auditory-Perceptual Evaluation of Voice (CAPE-V) to Turkish and to evaluate its internal consistency, validity, and reliability. The Turkish version of CAPE-V was developed, and with the use of a prospective case-control design, the voice recordings of 130 participants were collected according to the CAPE-V protocol. Auditory-perceptual evaluation was conducted according to CAPE-V and the Grade, Roughness, Breathiness, Asthenia, and Strain (GRBAS) scale by two ear, nose, and throat specialists and two speech and language therapists. The different types of voice disorders, classified as organic and functional disorders, were compared in terms of their CAPE-V scores. The overall severity parameter had the highest intrarater and interrater reliability values for all the participants. For all four raters, the differences in the six CAPE-V parameters between the study and the control groups were found to be statistically significant. Among the correlations for the comparable parameters of the CAPE-V and the GRBAS scales, the highest correlation was found between the overall severity-grade parameters. There was no difference found between the organic and functional voice disorders in terms of the CAPE-V scores. The Turkish version of CAPE-V has been proven to be a reliable and valid instrument to use in the auditory-perceptual evaluation of voice. For the future application of this study, it would be important to investigate whether cepstral measures correlate with the auditory-perceptual judgments of dysphonia severity collected by a Turkish version of the CAPE-V.
El-Kaim, A; Aramaki, M; Ystad, S; Kronland-Martinet, R; Cermolacce, M; Naudin, J; Vion-Dury, J; Micoulaud-Franchi, J-A
In schizophrenia, perceptual inundation related to sensory gating deficit can be evaluated "off-line" with the sensory gating inventory (SGI) and "on-line" during listening tests. However, no study has investigated the relation between "off-line evaluation" and "on-line evaluation". The present study investigates this relationship. A sound corpus of 36 realistic environmental auditory scenes was obtained from a 3D immersive synthesizer. Twenty schizophrenic patients and twenty healthy subjects completed the SGI and evaluated the feeling of "inundation" from 1 ("null") to 5 ("maximum") for each auditory scene. Sensory gating deficit was evaluated in half of each population group with the P50 suppression electrophysiological measure. Evaluation of inundation during sound listening was significantly higher in schizophrenia (3.25) compared to the control group (2.40, P<.001). The evaluation of inundation during the listening test correlated significantly with the perceptual modulation (n=20, rho=.52, P=.029) and the over-inclusion dimensions (n=20, rho=.59, P=.01) of the SGI in schizophrenic patients and with the P50 suppression for the entire group of controls and patients who performed ERP recordings (n=20, rho=-.49, P=.027). An evaluation of the external validity of the SGI was obtained through listening tests. The ability to control acoustic parameters of each of the realistic immersive environmental auditory scenes might in future research make it possible to identify acoustic triggers related to perceptual inundation in schizophrenia.
Bartolucci, Marco; Smith, Andrew T.
Practicing a visual task commonly results in improved performance. Often the improvement does not transfer well to a new retinal location, suggesting that it is mediated by changes occurring in early visual cortex, and indeed neuroimaging and neurophysiological studies both demonstrate that perceptual learning is associated with altered activity…
Henk, William A.
Behaviorism cannot adequately explain language processing. A synthesis of the psycholinguistic and information processing approaches of cognitive psychology, however, can provide the basis for a speculative analysis of reading, if this synthesis is tempered by a perceptual learning theory of uncertainty reduction. Theorists of information…
Rodríguez, Gabriel; Angulo, Rocío
An experiment with human participants established a novel procedure to assess perceptual learning with tactile stimuli. Participants received unsupervised exposure to two sandpaper surfaces differing in roughness (A and B). The ability of the participants to discriminate between the stimuli was subsequently assessed on a same/different test. It…
Eisner, F.; Melinger, A.; Weber, A.C.
The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated…
During listening to spoken language, the perceptual system needs to adapt frequently to changes in talkers, and thus to considerable interindividual variability in the articulation of a given speech sound. This thesis investigated a learning process which allows listeners to use stored lexical…
Bolhuis, J. J.; van Kampen, H. S.
The characteristics of auditory learning in filial imprinting in precocial birds are reviewed. Numerous studies have demonstrated that the addition of an auditory stimulus improves following of a visual stimulus. This paper evaluates whether there is genuine auditory imprinting, i.e. the formation…
Dunn, Rita; Dunn, Kenneth
This article discusses the evolution of teaching approaches in concert with the findings of over three decades of research on student perceptual strengths. Confusing reports of successes and only limited successes for students with varied perceptual strengths suggest that combined auditory, visual, tactual, and/or kinesthetic instructional…
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task through unsupervised practice without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning.
Can, Handan; Doğutepe, Elvin; Torun Yazıhan, Nakşidil; Korkman, Hamdi; Erdoğan Bakar, Emel
The Auditory Verbal Learning Test (AVLT) is frequently used in the neuropsychology literature to comprehensively assess memory. The test measures verbal learning as immediate and delayed free recall, recognition, and retroactive and proactive interference. Adaptation of the AVLT for Turkish society has been completed, whereas research and development studies are still underway. The purpose of the present study was to investigate the construct validity of the test in order to contribute to that research and development process. The research data were obtained from 78 healthy participants aged between 20 and 69. The exclusion criteria included neurological and/or psychiatric disorders as well as untreated auditory/visual disorders. The AVLT was administered to participants individually by two trained psychologists. Principal component analysis, used to investigate the components represented by the AVLT scores, yielded components of learning/free recall and recognition, in line with the construct of the test. Distractors were also added to these two components in a structural equation model. Analyses were carried out at a descriptive level to establish the relationships between age, education, gender, and AVLT scores. These findings, which are consistent with the literature indicating that memory is affected by the developmental process, suggest that the learning/free recall, recognition, and distractor scores of the AVLT demonstrate a component pattern consistent with theoretical knowledge. This conclusion suggests that the AVLT is a valid measurement instrument for Turkish society.
Lu, Zhong-Lin; Chu, Wilson; Dosher, Barbara Anne; Lee, Sophia
Eye-transfer tests, external noise manipulations, and observer models were used to systematically characterize learning mechanisms in judging motion direction of moving objects in visual periphery (Experiment 1) and fovea (Experiment 2) and to investigate the degree of transfer of the learning mechanisms from trained to untrained eyes. Perceptual learning in one eye was measured over 10 practice sessions. Subsequent learning in the untrained eye was assessed in five transfer sessions. We characterized the magnitude of transfer of each learning mechanism to the untrained eye by separately analyzing the magnitude of subsequent learning in low and high external noise conditions. In both experiments, we found that learning in the trained eye reduced contrast thresholds uniformly across all of the external noise levels: 47 +/- 10% and 62 +/- 8% in experiments 1 and 2, respectively. Two mechanisms, stimulus enhancement and template retuning, accounted for the observed performance improvements. The degree of transfer to the untrained eye depended on the amount of external noise added to the signal stimuli: In high external noise conditions, learning transferred completely to the untrained eye in both experiments. In low external noise conditions, there was only partial transfer of learning: 63% in experiment 1 and 54% in experiment 2. The results suggest that template retuning, which is effective in high external noise conditions, is mostly binocular, whereas stimulus enhancement, which is effective in low external noise displays, is largely monocular. The two independent mechanisms underlie perceptual learning of motion direction identification in monocular and binocular motion systems.
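The logic of separating learning mechanisms by external noise, as used in the study above, can be sketched with a toy noisy-observer model in the spirit of the perceptual template model (all parameter values below are hypothetical, not fitted to the study's data):

```python
import math

def threshold(n_ext, n_add=1.0, a_ext=1.0, dprime=1.5):
    """Contrast threshold of a simplified noisy observer:
    c = d' * sqrt((a_ext * n_ext)**2 + n_add**2)

    n_add -- additive internal noise (reduced by stimulus enhancement)
    a_ext -- susceptibility to external noise (reduced by template retuning)
    All parameter values are illustrative.
    """
    return dprime * math.sqrt((a_ext * n_ext) ** 2 + n_add ** 2)

low, high = 0.1, 8.0                          # external noise levels
pre      = (threshold(low), threshold(high))
enhanced = (threshold(low, n_add=0.5), threshold(high, n_add=0.5))
retuned  = (threshold(low, a_ext=0.5), threshold(high, a_ext=0.5))
# Enhancement lowers the low-noise threshold but barely moves the
# high-noise one; retuning does the opposite.
```

In this toy model, lowering additive internal noise (stimulus enhancement) shifts thresholds mainly in the low-noise limb, while lowering susceptibility to external noise (template retuning) shifts the high-noise limb; a uniform threshold reduction across all noise levels, as reported, therefore implicates both mechanisms.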
Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan
Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84 ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning, regardless of whether the behavioral improvement was location-specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change in the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
Grzeczkowski, Lukasz; Cretenoud, Aline; Herzog, Michael H; Mast, Fred W
Perceptual learning is usually assumed to occur within sensory areas or when sensory evidence is mapped onto decisions. Subsequent procedural and motor processes, involved in most perceptual learning experiments, are thought to play no role in the learning process. Here, we show that this is not the case. Observers trained on a standard three-line bisection task and indicated the offset direction of the central line by pressing either a left or a right push button. Before and after training, observers adjusted the central line of the same bisection stimulus using a computer mouse. As expected, performance improved through training. Surprisingly, learning did not transfer to the untrained mouse-adjustment condition. The same was true in the opposite direction, i.e., training with mouse adjustments did not transfer to the push-button condition. We found partial transfer when observers adjusted the central line with two different adjustment procedures. We suggest that perceptual learning is specific to procedural motor aspects beyond visual processing. Our results support theories where visual stimuli are coded together with their corresponding actions.
Mettler, Everett; Kellman, Philip J
Although much recent work in perceptual learning (PL) has focused on basic sensory discriminations, recent analyses suggest that PL in a variety of tasks depends on processes that discover and select information relevant to classifications being learned (Kellman & Garrigan, 2009; Petrov, Dosher, & Lu, 2005). In complex, real-world tasks, discovery involves finding structural invariants amidst task-irrelevant variation (Gibson, 1969), allowing learners to correctly classify new stimuli. The applicability of PL methods to such tasks offers important opportunities to improve learning. It also raises questions about how learning might be optimized in complex tasks and whether variables that influence other forms of learning also apply to PL. We investigated whether an adaptive, response-time-based, category sequencing algorithm implementing laws of spacing derived from memory research would also enhance perceptual category learning and transfer to novel cases. Participants learned to classify images of 12 different butterfly genera under conditions of: (1) random presentation, (2) adaptive category sequencing, and (3) adaptive category sequencing with 'mini-blocks' (grouping 3 successive category exemplars). We found significant effects on efficiency of learning for adaptive category sequencing, reliably better than for random presentation and mini-blocking (Experiment 1). Effects persisted across a 1-week delay and were enhanced for novel items. Experiment 2 showed even greater effects of adaptive learning for perceptual categories containing lower variability. These results suggest that adaptive category sequencing increases the efficiency of PL and enhances generalization of PL to novel stimuli, key components of high-level PL and fundamental requirements of learning in many domains. Copyright © 2014 Elsevier B.V. All rights reserved.
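The adaptive, response-time-based category sequencing described in this abstract can be sketched with a simple scheduling rule. The rule below (errors recur almost immediately; fast correct responses earn long delays before a category reappears) is a simplified illustration of such spacing principles, not the authors' algorithm, and all parameter values are invented:

```python
def next_delay(correct, rt, base=2, fast_rt=1.5, slow_rt=6.0, max_delay=12):
    """Trials to wait before re-presenting a category.

    correct : whether the last classification of this category was right
    rt      : response time in seconds
    Errors trigger near-immediate re-presentation; among correct responses,
    faster (more fluent) answers earn longer spacing.
    """
    if not correct:
        return 1                                   # re-present soon after an error
    # Map rt in [fast_rt, slow_rt] onto delays in [max_delay, base].
    frac = min(max((slow_rt - rt) / (slow_rt - fast_rt), 0.0), 1.0)
    return round(base + frac * (max_delay - base))
```

On each trial, a scheduler of this kind would pick the due category with the highest priority, so struggling categories are revisited often while fluent ones are spaced out.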
Moore, David R; Halliday, Lorna F; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have debated what aspect of training contributed to the improvement and even whether the claimed improvements reflect primarily a retest effect on the skill measures. Key to understanding this research have been more circumscribed studies of the transfer of learning and the use of multiple control groups to examine auditory and non-auditory contributions to the learning. Significant auditory learning can occur during relatively brief periods of training. As children mature, their ability to train improves, but the relation between the duration of training, amount of learning and benefit remains unclear. Individual differences in initial performance and amount of subsequent learning advocate tailoring training to individual learners. The mechanisms of learning remain obscure, especially in children, but it appears that the development of cognitive skills is of at least equal importance to the refinement of sensory processing. Promotion of retention and transfer of learning are major goals for further research.
Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara
Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect. © The Author 2014. Published by Oxford University Press. All rights reserved.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of sine curves at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
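The cross-modal equalization mentioned in this abstract rests on Stevens' power law, ψ = k·S^a, where perceived magnitude ψ grows as a power of physical intensity S with a modality-specific exponent a. A minimal sketch of how two feedback signals could be matched in perceived magnitude; the exponent values are illustrative textbook-style figures, not those used in the study:

```python
def perceived_magnitude(S, k, a):
    """Stevens' power law: psi = k * S**a."""
    return k * S ** a

def match_stimulus(target_psi, k, a):
    """Physical intensity S that yields a desired perceived magnitude psi."""
    return (target_psi / k) ** (1.0 / a)

# Hypothetical exponents: a_visual for perceived size, a_auditory for loudness.
a_visual, a_auditory = 0.7, 0.67
psi = perceived_magnitude(4.0, k=1.0, a=a_visual)    # perceived size of the circle
S_sound = match_stimulus(psi, k=1.0, a=a_auditory)   # sound intensity with equal psi
```

Inverting the law this way lets the auditory feedback be scaled so that a given COP-to-target distance feels equally intense in both modalities.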
Cohen, Yamit; Daikhin, Luba; Ahissar, Merav
What do we learn when we practice a simple perceptual task? Many studies have suggested that we learn to refine or better select the sensory representations of the task-relevant dimension. Here we show that learning is specific to the trained structural regularities. Specifically, when this structure is modified after training with a fixed temporal structure, performance regresses to pretraining levels, even when the trained stimuli and task are retained. This specificity raises key questions as to the importance of low-level sensory modifications in the learning process. We trained two groups of participants on a two-tone frequency discrimination task for several days. In one group, a fixed reference tone was consistently presented in the first interval (the second tone was higher or lower), and in the other group the same reference tone was consistently presented in the second interval. When, following training, these temporal protocols were switched between groups, the performance of both groups regressed to pretraining levels, and further training was needed to attain postlearning performance. ERP measures, taken before and after training, indicated that participants implicitly learned the temporal regularity of the protocol and formed an attentional template that matched the trained structure of information. These results are consistent with Reverse Hierarchy Theory, which posits that even the learning of simple perceptual tasks progresses in a top-down manner, hence can benefit from temporal regularities at the trial level, albeit at the potential cost that learning may be specific to these regularities.
Memmert, D.; Hagemann, N.; Althoetmar, R.; Geppert, S.; Seiler, D.
This study uses three experiments with different kinds of training conditions to investigate the "easy-to-hard" principle, context interference conditions, and feedback effects for learning anticipatory skills in badminton. Experiment 1 (N = 60) showed that a training program that gradually increases the difficulty level has no advantage over the…
Molly J Henry
A number of accounts of human auditory perception assume that listeners use prior stimulus context to generate predictions about future stimulation. Here, we tested an auditory pitch-motion hypothesis that was developed from this perspective. Listeners judged either the time change (i.e., duration) or the pitch change of a comparison frequency glide relative to a standard (referent) glide. Under a constant-velocity assumption, listeners were hypothesized to use the pitch velocity (Δf/Δt) of the standard glide to generate predictions about the pitch velocity of the comparison glide, leading to perceptual distortions along the to-be-judged dimension when the velocities of the two glides differed. These predictions were borne out in the pattern of relative points of subjective equality by a significant three-way interaction between the velocities of the two glides and task. In general, listeners' judgments along the task-relevant dimension (pitch or time) were affected by expectations generated by the constant-velocity standard, but in an opposite manner for the two stimulus dimensions. When the comparison glide velocity was faster than the standard, listeners overestimated time change but underestimated pitch change, whereas when the comparison glide velocity was slower than the standard, listeners underestimated time change but overestimated pitch change. Perceptual distortions were least evident when the velocities of the standard and comparison glides were matched. Fits of an imputed velocity model further revealed increasingly larger distortions at faster velocities. The present findings provide support for the auditory pitch-motion hypothesis and add to a larger body of work revealing a role for active prediction in human auditory perception.
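The constant-velocity prediction in this abstract can be sketched numerically: the listener expects the comparison's frequency excursion to unfold at the standard's velocity, and the perceived duration is pulled toward that expectation. This is a simplified illustration, not the authors' fitted imputed-velocity model, and the weighting parameter w is hypothetical:

```python
def pitch_velocity(delta_f, delta_t):
    """Glide velocity in Hz per second."""
    return delta_f / delta_t

def imputed_duration(delta_f_comp, v_standard, actual_t, w=0.3):
    """Perceived duration as a weighted mix of the actual duration and the
    time the comparison's excursion would take at the standard's velocity."""
    expected_t = delta_f_comp / v_standard
    return (1 - w) * actual_t + w * expected_t

v_std = pitch_velocity(200.0, 0.5)   # standard glide: 200 Hz over 0.5 s
# A faster comparison (200 Hz in only 0.25 s) "should" take 0.5 s at v_std,
# so the expectation inflates its perceived duration, matching the reported
# overestimation of time change for faster-than-standard glides.
t_perceived = imputed_duration(200.0, v_std, actual_t=0.25)
```

The symmetric case (a slower comparison) yields a perceived duration shorter than the actual one, matching the reported underestimation.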
Tsushima, Yoshiaki; Watanabe, Takeo
The role of attention in perceptual learning has been a topic of controversy. Sensory psychophysicists/physiologists and animal learning psychologists have conducted numerous studies to examine this role; but because these two types of researchers use two very different lines of approach, their findings have never been effectively integrated. In the present article, we review studies from both lines and use exposure-based learning experiments to discuss the role of attention in perceptual learning. In addition, we propose a model in which exposure-based learning occurs only when a task-irrelevant feature is weak. We hope that this article will provide new insight into the role of attention in perceptual learning to the benefit of both sensory psychophysicists/physiologists and animal learning psychologists.
volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.
Karlimah; Risfiani, F.
This paper presents the results of research on the relation of mathematical concepts with mathematics, other subjects, and everyday life. The research reveals the study results of students with an auditory learning style and correlates them with their mathematical connection ability. The researchers used a combination model, or sequential exploratory design method, in which qualitative and quantitative research methods are applied in sequence. The results show that providing learning facilities unsuited to a class whose students have an auditory learning style yields barely sufficient mathematical connection ability. The average mathematical connection ability of the auditory students was initially at the medium level of qualification. After the intervention of varied learning suited to the auditory learning style, the average mathematical connection ability remained at the medium level of qualification. Nevertheless, the number of students at the medium level increased, while the number at the very low and low levels decreased. This suggests that learning facilities appropriate to students' auditory learning style contribute well enough to their mathematical connection ability. Therefore, mathematics learning for students with an auditory learning style should include activities aimed at understanding mathematical concepts and their relations.
Chaves, Cristiane Ribeiro; Campbell, Melanie; Côrtes Gama, Ana Cristina
This study aimed to determine the influence of native language on the auditory-perceptual assessment of voice, as completed by Brazilian and Anglo-Canadian listeners using Brazilian vocal samples and the grade, roughness, breathiness, asthenia, strain (GRBAS) scale. This is an analytical, observational, comparative, and transversal study conducted at the Speech Language Pathology Department of the Federal University of Minas Gerais in Brazil, and at the Communication Sciences and Disorders Department of the University of Alberta in Canada. The GRBAS scale, connected speech, and a sustained vowel were used in this study. The vocal samples were drawn randomly from a database of recorded speech of Brazilian adults, some with healthy voices and some with voice disorders. The database is housed at the Federal University of Minas Gerais. Forty-six samples of connected speech (recitation of days of the week), produced by 35 women and 11 men, and 46 samples of the sustained vowel /a/, produced by 37 women and 9 men, were used in this study. The listeners were divided into two groups of three speech therapists, according to nationality: Brazilian or Anglo-Canadian. The groups were matched according to the years of professional experience of participants. The weighted kappa was used to calculate the intra- and inter-rater agreements, with 95% confidence intervals, respectively. An analysis of the intra-rater agreement showed that Brazilians and Canadians had similar results in auditory-perceptual evaluation of sustained vowel and connected speech. The results of the inter-rater agreement of connected speech and sustained vowel indicated that Brazilians and Canadians had, respectively, moderate agreement on the overall severity (0.57 and 0.50), breathiness (0.45 and 0.45), and asthenia (0.50 and 0.46); poor correlation on roughness (0.19 and 0.007); and weak correlation on strain to connected speech (0.22), and moderate correlation to sustained vowel (0.50). In general
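The agreement statistic used in this abstract, the weighted kappa, penalizes rater disagreements by their distance on the ordinal GRBAS scale (0-3). A self-contained sketch with quadratic weights; the ratings shown are invented for illustration, not the study's data:

```python
def weighted_kappa(r1, r2, n_cats):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal ratings."""
    n = len(r1)
    obs = [[0.0] * n_cats for _ in range(n_cats)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n                        # observed proportions
    row = [sum(obs[i]) for i in range(n_cats)]      # rater-1 marginals
    col = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]

    def w(i, j):                                    # quadratic disagreement weight
        return ((i - j) ** 2) / ((n_cats - 1) ** 2)

    d_obs = sum(w(i, j) * obs[i][j]
                for i in range(n_cats) for j in range(n_cats))
    d_exp = sum(w(i, j) * row[i] * col[j]           # chance-expected disagreement
                for i in range(n_cats) for j in range(n_cats))
    return 1.0 - d_obs / d_exp

# Two raters score six invented voice samples on a 0-3 GRBAS parameter.
kappa = weighted_kappa([0, 1, 2, 3, 1, 2], [0, 1, 2, 3, 2, 2], 4)
```

Perfect agreement gives kappa = 1, chance-level agreement gives 0, and a single one-step disagreement (as above) costs far less than a large jump would, which is the point of weighting on an ordinal scale.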
Bailey, Frank S.; Yocum, Russell G.
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B
Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Rummer, Ralf; Schweppe, Judith; Fürstenberg, Anne; Scheiter, Katharina; Zindler, Antje
Various studies have demonstrated an advantage of auditory over visual text modality when learning with texts and pictures. To explain this modality effect, two complementary assumptions are proposed by cognitive theories of multimedia learning: first, the visuospatial load hypothesis, which explains the modality effect in terms of visuospatial working memory overload in the visual text condition; and second, the temporal contiguity assumption, according to which the modality effect occurs because solely auditory texts and pictures can be attended to simultaneously. The latter explanation applies only to simultaneous presentation, the former to both simultaneous and sequential presentation. This paper introduces a third explanation, according to which parts of the modality effect are due to early, sensory processes. This account predicts that-for texts longer than one sentence-the modality effect with sequential presentation is restricted to the information presented most recently. Two multimedia experiments tested the influence of text modality across three different conditions: simultaneous presentation of texts and pictures versus sequential presentation versus presentation of text only. Text comprehension and picture recognition served as dependent variables. An advantage for auditory texts was restricted to the most recent text information and occurred under all presentation conditions. With picture recognition, the modality effect was restricted to the simultaneous condition. These findings clearly support the idea that the modality effect can be attributed to early processes in perception and sensory memory rather than to a working memory bottleneck.
Horie, Yoshinori; Toriizuka, Takashi
The focus of this study is a human's ability to make full use of listening and hearing. This ability consists of dividing auditory information into a signal and a noise. To evaluate the risk of using headphones, the study investigated the auditory perception when a warning sound is given in the presence of environmental noise.
Williams Syndrome (WS) is a genetically determined neurodevelopmental disorder characterized by an uneven cognitive profile and surprisingly large neurobehavioral differences among individuals. Previous studies have already shown different forms of memory deficiencies and learning difficulties in WS. Here we studied the capacity of WS subjects to improve their performance in a basic visual task. We employed a contour integration paradigm that addresses occipital visual function, and analyzed the initial (i.e., baseline) and after-learning performance of WS individuals. Instead of pooling the very inhomogeneous results of WS subjects together, we evaluated individual performance by expressing it in terms of the deviation from the average performance of the group of typically developing subjects of similar age. This approach helped us to reveal information about the possible origins of the poor performance of WS subjects in contour integration. Although the majority of WS individuals showed both reduced baseline and reduced learning performance, individual analysis also revealed a dissociation between baseline and learning capacity in several WS subjects. In spite of impaired initial contour integration performance, some WS individuals presented learning capacity comparable to learning in the typically developing population, and vice versa: poor learning was also observed in subjects with high initial performance levels. These data indicate a dissociation between factors determining initial performance and perceptual learning.
Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) appearing during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess effectiveness. Results: Accuracy, RT, and Scores improved significantly from the pre- to the post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for first-year students, but not significantly so for second-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both first- and second-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.
Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi
Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear, however, whether and when delayed gains in performance evolve following training in an auditory verbal identification task. Here we show that normal-hearing young adults trained to identify consonant-vowel stimuli in increasing levels of background noise showed significant, robust, delayed gains in performance that became effective not earlier than 4 h post-training, with most participants improving at more than 6 h post-training. These gains were retained for over 6 mo. Moreover, although it has been recently argued that time including sleep, rather than time per se, is necessary for the evolution of delayed gains in human perceptual learning, our results show that 12 h post-training in the waking state were as effective as 12 h, including no less than 6 h night's sleep. Altogether, the results indicate, for the first time, the existence of a latent, hours-long, consolidation phase in a human auditory verbal learning task, which occurs even during the awake state.
Brown, Rachel M.; Palmer, Caroline
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced nov...
Wisniewski, Matthew G; Radell, Milen L; Church, Barbara A; Mercado, Eduardo
Individuals learn to classify percepts effectively when the task is initially easy and then gradually increases in difficulty. Some suggest that this is because easy-to-discriminate events help learners focus attention on discrimination-relevant dimensions. Here, we tested whether such attentional-spotlighting accounts are sufficient to explain easy-to-hard effects in auditory perceptual learning. In two experiments, participants were trained to discriminate periodic, frequency-modulated (FM) tones in two separate frequency ranges (300-600 Hz or 3000-6000 Hz). In one frequency range, sounds gradually increased in similarity as training progressed. In the other, stimulus similarity was constant throughout training. After training, participants showed better performance in their progressively trained frequency range, even though the discrimination-relevant dimension across ranges was the same. Learning theories that posit experience-dependent changes in stimulus representations and/or the strengthening of associations with differential responses, predict the observed specificity of easy-to-hard effects, whereas attentional-spotlighting theories do not. Calibrating the difficulty and temporal sequencing of training experiences to support more incremental representation-based learning can enhance the effectiveness of practice beyond any benefits gained from explicitly highlighting relevant dimensions.
Sun, Peijian Paul; Teng, Lin Sophie
This study revisited Reid's (1987) perceptual learning style preference questionnaire (PLSPQ) in an attempt to answer whether the PLSPQ fits in the Chinese-as-a-second-language (CSL) context. If not, what are CSL learners' learning styles drawing on the PLSPQ? The PLSPQ was first re-examined through reliability analysis and confirmatory factor analysis (CFA) with 224 CSL learners. The results showed that Reid's six-factor PLSPQ could not satisfactorily explain the CSL learners' learning styles. Exploratory factor analyses were, therefore, performed to explore the dimensionality of the PLSPQ in the CSL context. A four-factor PLSPQ was successfully constructed including auditory/visual, kinaesthetic/tactile, group, and individual styles. Such a measurement model was cross-validated through CFAs with 118 CSL learners. The study not only lends evidence to the literature that Reid's PLSPQ lacks construct validity, but also provides CSL teachers and learners with insightful and practical guidance concerning learning styles. Implications and limitations of the present study are discussed.
Strait, Dana L.; Kraus, Nina
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583
Vinera, Jennifer; Kermen, Florence; Sacquet, Joëlle; Didier, Anne; Mandairon, Nathalie; Richard, Marion
Noradrenaline contributes to olfactory-guided behaviors but its role in olfactory learning during adulthood is poorly documented. We investigated its implication in olfactory associative and perceptual learning using local infusion of a mixed α1-β adrenergic receptor antagonist (labetalol) into the adult mouse olfactory bulb. We reported that…
Horton, Jonathan C; Fahle, Manfred; Mulder, Theo; Trauzettel-Klosinski, Susanne
The capacity for functional restitution after brain damage is quite different in the sensory and motor systems. This series of presentations highlights the potential for adaptation, plasticity, and perceptual learning from an interdisciplinary perspective. The chances for restitution in the primary visual cortex are limited. Some patterns of visual field loss and recovery after stroke are common, whereas others are impossible, which can be explained by the arrangement and plasticity of the cortical map. On the other hand, compensatory mechanisms are effective, can occur spontaneously, and can be enhanced by training. In contrast to the human visual system, the motor system is highly flexible. This is based on special relationships between perception and action and between cognition and action. In addition, the healthy adult brain can learn new functions, e.g. increasing resolution above the retinal one. The significance of these studies for rehabilitation after brain damage will be discussed.
Hordacre, Brenton; Immink, Maarten A; Ridding, Michael C; Hillier, Susan
The purpose of this study was to manipulate psychological stress and anxiety to investigate effects on ensuing perceptual-motor learning. Thirty-six participants attended two experimental sessions separated by 24h. In the first session, participants were randomized to either a mental arithmetic task known to increase stress and anxiety levels or a control condition and subsequently completed training on a speeded precision pinch task. Learning of the pinch task was assessed at the second session. Those exposed to the high stress-anxiety mental arithmetic task prior to training reported elevated levels of both stress and anxiety and demonstrated shorter movement times and improved retention of movement accuracy and movement variability. Response execution processes appear to benefit from elevated states of stress and anxiety immediately prior to training even when elicited by an unrelated task. Copyright © 2016 Elsevier B.V. All rights reserved.
Huurneman, B.; Boonstra, F.N.; Goossens, J.
PURPOSE: Perceptual learning improves visual acuity and reduces crowding in children with infantile nystagmus (IN). Here, we compare reading performance of 6- to 11-year-old children with IN with normal controls, and evaluate whether perceptual learning improves their reading. METHODS: Children with
Bufford, Carolyn A.; Mettler, Everett; Geller, Emma H.; Kellman, Philip J.
Mathematics requires thinking but also pattern recognition. Recent research indicates that perceptual learning (PL) interventions facilitate discovery of structure and recognition of patterns in mathematical domains, as assessed by tests of mathematical competence. Here we sought direct evidence that a brief perceptual learning module (PLM)…
Full Text Available Amongst the most significant questions confronting us today are the integration of the brain's micro-circuitry, our ability to build the complex social networks that underpin society and how our society impacts on our ecological environment. In trying to unravel these issues one place to begin is at the level of the individual: to consider how we accumulate information about our environment, how this information leads to decisions and how our individual decisions in turn create our social environment. While this is an enormous task, we may already have at hand many of the tools we need. This article is intended to review some of the recent results in neuro-cognitive research and show how they can be extended to two very specific types of expertise: perceptual expertise and social cognition. These two cognitive skills span a vast range of our genetic heritage. Perceptual expertise developed very early in our evolutionary history and is likely a highly developed part of all mammals' cognitive ability. On the other hand social cognition is most highly developed in humans in that we are able to maintain larger and more stable long term social connections with more behaviourally diverse individuals than any other species. To illustrate these ideas I will discuss board games as a toy model of social interactions as they include many of the relevant concepts: perceptual learning, decision-making, long term planning and understanding the mental states of other people. Using techniques that have been developed in mathematical psychology, I show that we can represent some of the key features of expertise using stochastic differential equations. Such models demonstrate how an expert's long exposure to a particular context influences the information they accumulate in order to make a decision. These processes are not confined to board games; we are all experts in our daily lives through long exposure to the many regularities of daily tasks and
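The abstract above claims that key features of expertise can be represented with stochastic differential equations. A standard instance of that idea in mathematical psychology is the drift-diffusion model of decision making. The sketch below is not taken from the article; it is a minimal Euler-Maruyama simulation, with the drift rate standing in (as an illustrative assumption) for the expert's more efficient accumulation of context-relevant information.

```python
import random

def simulate_ddm(drift, noise=1.0, threshold=1.0, dt=0.001, max_t=5.0, seed=None):
    """Euler-Maruyama simulation of a drift-diffusion decision process.

    Evidence x starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it crosses +threshold (correct choice) or -threshold (error),
    or until max_t elapses. Returns (choice, decision_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # dx = drift * dt + noise * sqrt(dt) * N(0, 1)
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= threshold else 0), t

def accuracy(trials):
    return sum(choice for choice, _ in trials) / len(trials)

def mean_rt(trials):
    return sum(t for _, t in trials) / len(trials)

# A higher drift rate (the "expert") yields faster, more accurate decisions
# than a lower one (the "novice"), all else held equal:
novice = [simulate_ddm(drift=0.5, seed=i) for i in range(500)]
expert = [simulate_ddm(drift=2.0, seed=i) for i in range(500)]
```

In this toy model, long exposure to a context is summarized entirely by the drift parameter; richer SDE formulations of the kind the article alludes to would let drift, noise, and threshold all change with experience.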
Full Text Available BACKGROUND: It is well-known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition when arbitrary associations between visual and auditory stimuli must be learned. The studies tend to consider it as an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults learn the arbitrary associations between visual and auditory novel stimuli more effectively. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" learning one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned the arbitrary associations between visual and auditory novel stimuli more efficiently when the visual stimuli were explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.
Kraft, Antje; Grimsen, Cathleen; Trenner, Dennis; Kehrer, Stefanie; Lipfert, Anika; Köhnlein, Martin; Fahle, Manfred; Brandt, Stephan A
Perceptual learning is defined as a long-lasting improvement of perception as a result of experience. Here we examined the role of task on fast perceptual learning for shape localisation either in simple detection or based on form discrimination in different visual submodalities, using identical stimulus position and stimulus types for both tasks. Thresholds for each submodality were identified by four-alternative-forced-choice tasks. Fast perceptual learning occurred for shape detection-based on luminance, motion and color differences but not for texture differences. In contradistinction, fast perceptual learning was not evident in shape localisation based on discrimination. Thresholds of all submodalities were stable across days. Fast perceptual learning seems to differ not only between different visual submodalities, but also across different tasks within the same visual submodality. Copyright 2009 Elsevier Ltd. All rights reserved.
Ivone, Ferreira Neves; Schochat, Eliane
Auditory processing maturation in school children with and without learning difficulties. To verify response improvement with the increase in age of the auditory processing skills in school children with ages ranging from eight to ten years, with and without learning difficulties and to perform a comparative study. Eighty-nine children without learning complaints (Group I) and 60 children with learning difficulties (Group II) were assessed. The used auditory processing tests were: Pediatric Speech Intelligibility (PSI), Speech in Noise, Dichotic Non-Verbal (DNV) and Staggered Spondaic Word (SSW). A better performance was observed for Group I between the ages of eight and ten in all of the used tests. However, the observed differences were statistically significant only for PSI and SSW. For Group II, a better performance was also observed with the increase in age, with statistically significant differences for all of the used tests. Comparing the results between Groups I and II, a better performance was verified for children with no learning difficulties, in the three age groups, in PSI, DNV and SSW. A statistically significant improvement was verified in the responses of the auditory processing with the increase in age, for the ages between eight and ten years, in children with and without learning difficulties. In the comparative study, it was verified that children with learning difficulties presented a lower performance in all of the used tests in the three age groups. This suggests, for this group, a delay in the maturation of the auditory processing skills.
Su, Yuling; Lai, Yunpeng; Huang, Wanyi; Tan, Wei; Qu, Zhe; Ding, Yulong
Although some studies showed that training can improve the ability of cross-dimension conjunction search, less is known about the underlying mechanism. Specifically, it remains unclear whether training of visual conjunction search can successfully bind different features of separated dimensions into a new function unit at early stages of visual processing. In the present study, we utilized stimulus specificity and generalization to provide a new approach to investigate the mechanisms underlying perceptual learning (PL) in visual conjunction search. Five experiments consistently showed that after 40 to 50 min of training of color-shape/orientation conjunction search, the ability to search for a certain conjunction target improved significantly and the learning effects did not transfer to a new target that differed from the trained target in both color and shape/orientation features. However, the learning effects were not strictly specific. In color-shape conjunction search, although the learning effect could not transfer to a same-shape different-color target, it almost completely transferred to a same-color different-shape target. In color-orientation conjunction search, the learning effect partly transferred to a new target that shared same color or same orientation with the trained target. Moreover, the sum of transfer effects for the same color target and the same orientation target in color-orientation conjunction search was algebraically equivalent to the learning effect for trained target, showing an additive transfer effect. The different transfer patterns in color-shape and color-orientation conjunction search learning might reflect the different complexity and discriminability between feature dimensions. These results suggested a feature-based attention enhancement mechanism rather than a unitization mechanism underlying the short-term PL of color-shape/orientation conjunction search.
Pianesi, Federica; Scorpecci, Alessandro; Giannantonio, Sara; Micardi, Mariella; Resca, Alessandra; Marsella, Pasquale
To assess when prelingually deaf children with a cochlear implant (CI) achieve the First Milestone of Oral Language, to study the progression of their prelingual auditory skills in the first year after CI and to investigate a possible correlation between such skills and the timing of initial oral language development. The sample included 44 prelingually deaf children (23 M and 21 F) from the same tertiary care institution, who received unilateral or bilateral cochlear implants. Achievement of the First Milestone of Oral Language (FMOL) was defined as speech comprehension of at least 50 words and speech production of a minimum of 10 words, as established by administration of a validated Italian test for the assessment of initial language competence in infants. Prelingual auditory-perceptual skills were assessed over time by means of a test battery consisting of: the Infant Toddler Meaningful Integration Scale (IT-MAIS); the Infant Listening Progress Profile (ILiP) and the Categories of Auditory Performance (CAP). On average, the 44 children received their CI at 24±9 months and experienced FMOL after 8±4 months of continuous CI use. The IT-MAIS, ILiP and CAP scores increased significantly over time, the greatest improvement occurring between baseline and six months of CI use. On multivariate regression analysis, age at diagnosis and age at CI did not appear to bear correlation with FMOL timing; instead, the only variables contributing to its variance were IT-MAIS and ILiP scores after six months of CI use, accounting for 43% and 55%, respectively. Prelingual auditory skills of implanted children assessed via a test battery six months after CI treatment, can act as indicators of the timing of initial oral language development. Accordingly, the period from CI switch-on to six months can be considered as a window of opportunity for appropriate intervention in children failing to show the expected progression of their auditory skills and who would have higher risk of
Sharoni, Varda; Natur, Nazeh
The goals of this study were to adapt the Rey Auditory Verbal Learning Test (AVLT) into Arabic, to compare recall functioning among age groups (6:0 to 17:11), and to compare gender differences on various memory dimensions (immediate and delayed recall, learning rate, recognition, proactive interferences, and retroactive interferences). This…
Hamada, Megumi; Goya, Hideki
This study investigated the role of syllable structure in L2 auditory word learning. Based on research on cross-linguistic variation of speech perception and lexical memory, it was hypothesized that Japanese L1 learners of English would learn English words with an open-syllable structure without consonant clusters better than words with a…
Chen, Zhaocong; Wong, Francis C K; Jones, Jeffery A; Li, Weifeng; Liu, Peng; Chen, Xi; Liu, Hanjun
Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. As compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group, but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can produce transfer effects that facilitate the neural mechanisms underlying the online monitoring of auditory feedback regarding vocal production.
Aravamudhan, Radhika; Lotto, Andrew J; Hawks, John W
Williams [(1986). "Role of dynamic information in the perception of coarticulated vowels," Ph.D. thesis, University of Connecticut, Storrs, CT] demonstrated that nonspeech contexts had no influence on pitch judgments of nonspeech targets, whereas context effects were obtained when listeners were instructed to perceive the sounds as speech. On the other hand, Holt et al. [(2000). "Neighboring spectral content influences vowel identification," J. Acoust. Soc. Am. 108, 710-722] showed that nonspeech contexts were sufficient to elicit context effects in speech targets. The current study tested a hypothesis that could explain the varying effectiveness of nonspeech contexts: Context effects are obtained only when there are well-established perceptual categories for the target stimuli. Experiment 1 examined context effects in speech and nonspeech signals using four series of stimuli: steady-state vowels that perceptually spanned from /ʊ/-/i/ in isolation and in the context of /w/ (with no steady-state portion) and two nonspeech sine-wave series that mimicked the acoustics of the speech series. In agreement with previous work, context effects were obtained for speech contexts and targets but not for nonspeech analogs. Experiment 2 tested predictions of the hypothesis by testing for nonspeech context effects after the listeners had been trained to categorize the sounds. Following training, context-dependent categorization was obtained for nonspeech stimuli in the training group. These results are presented within a general perceptual-cognitive framework for speech perception research.
Ong, Michael; Russell, Paul N; Helton, William S
Perceptual learning is critical in many settings. In the present study, we investigated the role of individual differences in attention effort in perceptual learning by having participants learn to detect rare cryptic figures. We employed both functional near-infrared spectroscopy measures of frontal cortical activity and self-reports of pre-task motivation in order to assess individual differences in attention effort. We also manipulated performance feedback and the amount of background information provided to the participants regarding the task. Twelve men and 28 women participated in the experiment. Performance metrics were indicative of perceptual learning occurring. Overall performance on the task was correlated significantly with pre-task levels of self-reported motivation and the rate of learning was correlated with initial oxygen response in the frontal cortex. The initial spike in frontal oxygen response declined with time on task, perhaps due to shifts towards automaticity. The results suggest perceptual learning is influenced by individual differences in attention effort.
Full Text Available The aim of this article is to present a systematic review about the anatomy, function, connectivity, and functional activation of the primary auditory cortex (PAC; Brodmann areas 41/42) when involved in language paradigms. PAC activates with a plethora of diverse basic stimuli including but not limited to tones, chords, natural sounds, consonants, and speech. Nonetheless, the PAC shows specific sensitivity to speech. Damage in the PAC is associated with so-called “pure word-deafness” (“auditory verbal agnosia”). BA41, and to a lesser extent BA42, are involved in early stages of phonological processing (phoneme recognition). Phonological processing may take place in either the right or left side, but customarily the left exerts an inhibitory tone over the right, gaining dominance in function. BA41/42 are primary auditory cortices harboring complex phoneme perception functions with asymmetrical expression, making it possible to include them as core language processing areas (Wernicke’s area).
Rosalie, Simon M.; Muller, Sean
This paper presents a preliminary model that outlines the mechanisms underlying the transfer of perceptual-motor skill learning in sport and everyday tasks. Perceptual-motor behavior is motivated by performance demands and evolves over time to increase the probability of success through adaptation. Performance demands at the time of an event…
Full Text Available Many patients with sensorineural hearing loss have a precipitous high-frequency loss with relatively good thresholds in the low frequencies. The present paper briefly introduces and compares the basic principles of four types of frequency lowering algorithms with emphasis on nonlinear frequency compression (NLFC). A review of the effects of the NLFC algorithm on speech and music perception and sound quality appraisal is then provided. For vowel perception, it seems that the benefits provided by NLFC are limited, which are probably related to the parameter settings of the compression. For consonant perception, several studies have shown that NLFC provides improved perception of high-frequency consonants such as /s/ and /z/. However, a few other studies have demonstrated negative results in consonant perception. In terms of sentence recognition, persistent use of NLFC might provide improved performance. Compared to conventional processing, NLFC does not alter the speech sound quality appraisal and music perception as long as the compression setting is not too aggressive. In the subsequent section, the relevant factors with regard to NLFC settings, time-course of acclimatization, listener characteristics, and perceptual tasks are discussed. Although the literature shows mixed results on the perceptual efficacy of NLFC, this technique improved certain aspects of speech understanding in certain hearing-impaired listeners. Little research is available on speech perception outcomes in languages other than English. More clinical data are needed to verify the perceptual efficacy of NLFC in patients with precipitous high-frequency hearing loss. Such knowledge will help guide clinical rehabilitation of those patients.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N
This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision also were divided in three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvements (1.7 logMAR lines on the crowded chart). Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.).
Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, supporting the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus
Erdener, Doğu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.
Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin
Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…
Norrix, Linda W.; Plante, Elena; Vance, Rebecca
Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…
Winkler, István; Czigler, István
Predictive coding theories posit that the perceptual system is structured as a hierarchically organized set of generative models with increasingly general models at higher levels. The difference between model predictions and the actual input (prediction error) drives model selection and adaptation processes minimizing the prediction error. Event-related brain potentials elicited by sensory deviance are thought to reflect the processing of prediction error at an intermediate level in the hierarchy. We review evidence from auditory and visual studies of deviance detection suggesting that the memory representations inferred from these studies meet the criteria set for perceptual object representations. Based on this evidence we then argue that these perceptual object representations are closely related to the generative models assumed by predictive coding theories. Copyright © 2011 Elsevier B.V. All rights reserved.
Nikolaev, Andrey R; Gepshtein, Sergei; van Leeuwen, Cees
Perceptual learning improves visual performance. Among the plausible mechanisms of learning, reduction of perceptual bias has been studied the least. Perceptual bias may compensate for lack of stimulus information, but excessive reliance on bias diminishes visual discriminability. We investigated the time course of bias in a perceptual grouping task and studied the associated cortical dynamics in spontaneous and evoked EEG. Participants reported the perceived orientation of dot groupings in ambiguous dot lattices. Performance improved over a 1-hr period as indicated by the proportion of trials in which participants preferred dot groupings favored by dot proximity. The proximity-based responses were compromised by perceptual bias: Vertical groupings were sometimes preferred to horizontal ones, independent of dot proximity. In the evoked EEG activity, greater amplitude of the N1 component for horizontal than vertical responses indicated that the bias was most prominent in conditions of reduced visual discriminability. The prominence of bias decreased in the course of the experiment. Although the bias was still prominent, prestimulus activity was characterized by an intermittent regime of alternating modes of low and high alpha power. Responses were more biased in the former mode, indicating that perceptual bias was deployed actively to compensate for stimulus uncertainty. Thus, early stages of perceptual learning were characterized by episodes of greater reliance on prior visual preferences, alternating with episodes of receptivity to stimulus information. In the course of learning, the former episodes disappeared, and biases reappeared only infrequently.
VANKAMPEN, HS; BOLHUIS, JJ
The present study investigated auditory learning in chicks (Gallus gallus domesticus) in a filial imprinting situation, using an experimental design employed frequently in laboratory studies of visual imprinting. In Experiment 1, chicks were trained by exposing them to one of two artificial sounds.
Gheysen, Freja; Gevers, Wim; De Schutter, Erik; Van Waelvelde, Hilde; Fias, Wim
This paper contributes to the domain of implicit sequence learning by presenting a new version of the serial reaction time (SRT) task that allows unambiguously separating perceptual from motor learning. Participants matched the colors of three small squares with the color of a subsequently presented large target square. An identical sequential structure was tied to the colors of the target square (perceptual version, Experiment 1) or to the manual responses (motor version, Experiment 2). Short blocks of sequenced and randomized trials alternated and hence provided continuous monitoring of the learning process. Reaction time measurements demonstrated clear evidence of independent learning of perceptual and motor serial information, though revealed different time courses for the two learning processes. No explicit awareness of the serial structure was needed for either of the two types of learning to occur. The paradigm introduced in this paper demonstrated that perceptual learning can occur with SRT measurements and opens important perspectives for future imaging studies addressing the ongoing question of which brain areas are involved in the implicit learning of modality-specific (motor vs. perceptual) or general serial order.
Abstract concept learning was thought to be uniquely human, but has since been observed in many other species. Discriminating same from different is one abstract relation that has been studied frequently. In the current experiment, using operant conditioning, we tested whether black-capped chickadees (Poecile atricapillus) could discriminate sets of auditory stimuli based on whether all the sounds within a sequence were the same as or different from one another. The chickadees were successful at solving this same/different relational task, and transferred their learning to same/different sequences involving novel combinations of training notes and novel notes within the range of pitches experienced during training. The chickadees showed limited transfer to pitches that were not used in training, suggesting that the processing of absolute pitch may constrain their relational performance. Our results indicate, for the first time, that black-capped chickadees readily form relational auditory same and different categories, adding to the list of perceptual, behavioural, and cognitive abilities that make this species an important comparative model for human language and cognition.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy for stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis compared the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, with a new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the disordered auditory abilities, in a...
Deveau, Jenni; Lovcik, Gary; Seitz, Aaron R
Perception is the window through which we understand all information about our environment, and therefore deficits in perception due to disease, injury, stroke or aging can have significant negative impacts on individuals' lives. Research in the field of perceptual learning has demonstrated that vision can be improved in both normally seeing and visually impaired individuals; however, a limitation of most perceptual learning approaches is their emphasis on isolating particular mechanisms. In the current study, we adopted an integrative approach in which the goal is not to achieve highly specific learning but instead to achieve general improvements to vision. We combined multiple perceptual learning approaches that have individually contributed to increasing the speed, magnitude and generality of learning into a perceptual-learning-based video game. Our results demonstrate broad-based benefits to vision in a healthy adult population. Transfer from the game includes improvements in acuity (measured with self-paced standard eye charts), improvement along the full contrast sensitivity function, and improvements in peripheral acuity and contrast thresholds. This type of custom video-game framework, built up from psychophysical approaches, takes advantage of the benefits found from video-game training while maintaining a tight link to psychophysical designs that enable understanding of the mechanisms of perceptual learning, and it has great potential both as a scientific tool and as a therapy to help improve vision. Copyright © 2014 Elsevier B.V. All rights reserved.
Andreas L. Schulz
Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttle box to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
Max, Ludo; Maffett, Derek G
Neurologically healthy individuals use sensory feedback to alter future movements by updating internal models of the effector system and environment. For example, when visual feedback about limb movements or auditory feedback about speech movements is experimentally perturbed, the planning of subsequent movements is adjusted - i.e., sensorimotor adaptation occurs. A separate line of studies has demonstrated that experimentally delaying the sensory consequences of limb movements causes the sensory input to be attributed to external sources rather than to one's own actions. Yet similar feedback delays have remarkably little effect on visuo-motor adaptation (although the rate of learning varies, the amount of adaptation is only moderately affected with delays of 100-200ms, and adaptation still occurs even with a delay as long as 5000ms). Thus, limb motor learning remains largely intact even in conditions where error assignment favors external factors. Here, we show a fundamentally different result for sensorimotor control of speech articulation: auditory-motor adaptation to formant-shifted feedback is completely eliminated with delays of 100ms or more. Thus, for speech motor learning, real-time auditory feedback is critical. This novel finding informs theoretical models of human motor control in general and speech motor control in particular, and it has direct implications for the application of motor learning principles in the habilitation and rehabilitation of individuals with various sensorimotor speech disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.
Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to plasticity in the weighting or selection of sensory units coding task-relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced choice with feedback) performance for a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as the target for a block of four trials. The results suggest that human perceptual learning can occur within a lapse of four trials, although learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st to the 4th learning trial). The greatest improvement in human performance, occurring from the 1st to the 2nd learning trial, was also present in the optimal observer and thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.
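The ideal-observer comparison described in the abstract above can be made concrete with a toy sketch (the scalar templates, noise level, and block length here are illustrative assumptions, not the authors' actual stimuli): an optimal Bayesian learner holds a posterior over which of four candidate templates is the block's target and updates it after each noisy observation, so the posterior typically concentrates within a few trials.

```python
import math
import random

random.seed(1)

# Hypothetical toy version of an optimal learning observer: four candidate
# "templates" (reduced to scalar features), one of which is the block's target.
templates = [0.0, 1.0, 2.0, 3.0]
true_target = 2          # index of the target for this 4-trial block
noise_sd = 0.5

def likelihood(obs, mu, sd=noise_sd):
    """Gaussian likelihood of one noisy observation under one template."""
    return math.exp(-0.5 * ((obs - mu) / sd) ** 2)

posterior = [0.25] * 4   # flat prior at the start of the block
for trial in range(4):
    obs = templates[true_target] + random.gauss(0.0, noise_sd)
    # Bayes update: posterior ∝ prior × likelihood, then renormalize.
    posterior = [p * likelihood(obs, mu) for p, mu in zip(posterior, templates)]
    z = sum(posterior)
    posterior = [p / z for p in posterior]

best = max(range(4), key=lambda i: posterior[i])
print(best, [round(p, 3) for p in posterior])
```

A suboptimal learner, by contrast, could be modeled by updating only after its own (possibly wrong) decisions rather than after the feedback, which is the inefficiency the abstract identifies in human observers.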
Cumming, Ruth; Wilson, Angela; Goswami, Usha
Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech ("ba") and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, "Pure SLI," had intact phonology and reading (N = 16), the other, "SLI PPR" (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the "prosodic phrasing" hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration.
Schneider, David M; Mooney, Richard
In the auditory system, corollary discharge signals are theorized to facilitate normal hearing and the learning of acoustic behaviors, including speech and music. Despite clear evidence of corollary discharge signals in the auditory cortex and their presumed importance for hearing and auditory-guided motor learning, the circuitry and function of corollary discharge signals in the auditory cortex are not well described. In this review, we focus on recent developments in the mouse and songbird that provide insights into the circuitry that transmits corollary discharge signals to the auditory system and the function of these signals in the context of hearing and vocal learning. Copyright © 2015 Elsevier Ltd. All rights reserved.
Daniel, Reka; Wagner, Gerd; Koch, Kathrin; Reichenbach, Jurgen R.; Sauer, Heinrich; Schlosser, Ralf G. M.
The formation of new perceptual categories involves learning to extract, from a wide range of often noisy sensory inputs, the information that is critical for selecting between a limited number of responses. To identify brain regions involved in visual classification learning under noisy conditions, we developed a task on the basis of the…
Sun, Peijian Paul; Teng, Lin Sophie
This study revisited Reid's (1987) perceptual learning style preference questionnaire (PLSPQ) in an attempt to answer whether the PLSPQ fits in the Chinese-as-a-second-language (CSL) context. If not, what are CSL learners' learning styles drawing on the PLSPQ? The PLSPQ was first re-examined through reliability analysis and confirmatory factor…
Frizzo, Ana Claudia Figueiredo
Introduction: This is an objective laboratory assessment of the central auditory systems of children with learning disabilities. Aim: To examine and determine the properties of the components of the Auditory Middle Latency Response in a sample of children with learning disabilities. Methods: This was a prospective, cross-sectional cohort study with quantitative, descriptive, and exploratory outcomes. We included 50 children aged 8-13 years of both genders, with and without learning disorders. Those with disorders of known organic, environmental, or genetic causes were excluded. Results and Conclusions: The Na, Pa, and Nb waves were identified in all subjects. The ranges of the latency component values were as follows: Na = 9.8-32.3 ms, Pa = 19.0-51.4 ms, Nb = 30.0-64.3 ms (learning disorders group) and Na = 13.2-29.6 ms, Pa = 21.8-42.8 ms, Nb = 28.4-65.8 ms (healthy group). The values of the Na-Pa amplitude ranged from 0.3 to 6.8 μV (learning disorders group) or 0.2-3.6 μV (healthy group). Upon analysis, the functional characteristics of the groups were distinct: the left hemisphere Nb latency was longer in the study group than in the control group. Peculiarities of the electrophysiological measures were observed in the children with learning disorders. This study has provided information on the Auditory Middle Latency Response and can serve as a reference for other clinical and experimental studies in children with these disorders.
Zhang, Gong-Liang; Li, Hao; Song, Yan; Yu, Cong
The brain site of perceptual learning has been frequently debated. Recent psychophysical evidence for complete learning transfer to new retinal locations and orientations/directions suggests that perceptual learning may mainly occur in high-level brain areas. Contradictorily, ERP C1 changes associated with perceptual learning are cited as evidence for training-induced plasticity in the early visual cortex. However, C1 can be top-down modulated, which suggests the possibility that C1 changes may result from top-down modulation of the early visual cortex by high-level perceptual learning. To single out the potential top-down impact, we trained observers with a peripheral orientation discrimination task and measured C1 changes at an untrained diagonal quadrant location where learning transfer was previously known to be significant. Our assumption was that any C1 changes at this untrained location would indicate top-down modulation of the early visual cortex, rather than plasticity in the early visual cortex. The expected learning transfer was indeed accompanied with significant C1 changes. Moreover, C1 changes were absent in an untrained shape discrimination task with the same stimuli. We conclude that ERP C1 can be top-down modulated in a task-specific manner by high-level perceptual learning, so that C1 changes may not necessarily indicate plasticity in the early visual cortex. Moreover, learning transfer and associated C1 changes may indicate that learning-based top-down modulation can be remapped to early visual cortical neurons at untrained locations to enable learning transfer.
Benard, Michel Ruben; Başkent, Deniz
Normal-hearing (NH) listeners make use of context, speech redundancy and top-down linguistic processes to perceptually restore inaudible or masked portions of speech. Previous research has shown poorer perception and restoration of interrupted speech in CI users and NH listeners tested with acoustic
Karin Zazo Ortiz
PURPOSE: To compare data from auditory-perceptual (subjective) analysis with data from acoustic (objective) analysis. METHODS: Forty-two dysarthric patients with defined neurological diagnoses, 21 male and 21 female, underwent auditory-perceptual and acoustic analysis. All patients had their voices recorded and were evaluated, in the auditory-perceptual analysis, for type of voice, resonance (balanced, hypernasal, or laryngopharyngeal), loudness (adequate, decreased, or increased), pitch (adequate, low, or high), vocal attack (isochronic, hard, or breathy), and stability (stable or unstable). For the acoustic analysis, the GRAM 5.1.7 program was used to assess vocal quality and the behavior of the harmonics in the spectrogram, and the Vox Metria program was used to obtain the objective measures. RESULTS: The comparisons between the auditory-perceptual and acoustic findings were mostly non-significant; that is, there was no direct relationship between the subjective findings and the objective data. Statistically significant differences were found only between breathy voice and altered shimmer (p = 0.048) and between the definition of the harmonics and breathy voice (p = 0.040); thus, a correlation was observed between the presence of noise in the emission and breathiness. CONCLUSIONS: The auditory-perceptual and acoustic analyses provided different but complementary data, jointly assisting in the clinical diagnosis of the dysarthrias.
Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K
A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT: How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
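The stimulus-response cross-correlation at the heart of the echo measure can be sketched with toy signals (synthetic data, not real EEG; the embedding lag, noise level, and sequence length are illustrative assumptions): a random luminance sequence reappears in a noisy "response" signal at a fixed delay, and the lag of peak correlation recovers that delay.

```python
import random

random.seed(0)

# Toy illustration of the cross-correlation behind the "perceptual echo":
# the stimulus sequence is embedded in a noisy response at a fixed lag.
n, true_lag = 500, 25
stim = [random.uniform(-1.0, 1.0) for _ in range(n)]
resp = [0.0] * true_lag + stim[: n - true_lag]           # delayed copy
resp = [r + random.gauss(0.0, 0.5) for r in resp]        # plus noise

def xcorr(x, y, lag):
    """Mean product of x[i] with y[i + lag]: a simple lagged correlation."""
    pairs = [(x[i], y[i + lag]) for i in range(len(x) - lag)]
    return sum(a * b for a, b in pairs) / len(pairs)

# Scan candidate lags; the peak marks where the stimulus "echoes" in resp.
peak = max(range(60), key=lambda lag: xcorr(stim, resp, lag))
print(peak)
```

In the actual analyses, the response would be an occipital EEG channel and the echo appears as a ~10 Hz reverberation in this cross-correlation function rather than a single peak.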
Banai, Karen; Lavner, Yizhar
Time-compressed speech, a form of rapidly presented speech, is harder to comprehend than natural speech, especially for non-native speakers. Although it is possible to adapt to time-compressed speech after a brief exposure, it is not known whether additional perceptual learning occurs with further practice. Here, we ask whether multiday training on time-compressed speech yields more learning than that observed during the initial adaptation phase and whether the pattern of generalization following successful learning is different than that observed with initial adaptation only. Two groups of non-native Hebrew speakers were tested on five different conditions of time-compressed speech identification in two assessments conducted 10-14 days apart. Between those assessments, one group of listeners received five practice sessions on one of the time-compressed conditions. Between the two assessments, trained listeners improved significantly more than untrained listeners on the trained condition. Furthermore, the trained group generalized its learning to two untrained conditions in which different talkers presented the trained speech materials. In addition, when the performance of the non-native speakers was compared to that of a group of naïve native Hebrew speakers, performance of the trained group was equivalent to that of the native speakers on all conditions on which learning occurred, whereas performance of the untrained non-native listeners was substantially poorer. Multiday training on time-compressed speech results in significantly more perceptual learning than brief adaptation. Compared to previous studies of adaptation, the training induced learning is more stimulus specific. Taken together, the perceptual learning of time-compressed speech appears to progress from an initial, rapid adaptation phase to a subsequent prolonged and more stimulus specific phase. These findings are consistent with the predictions of the Reverse Hierarchy Theory of perceptual
Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A; Levi, Dennis M; Yu, Cong
Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that "double training" enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This "piggybacking" effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be "piggybacked" by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. © 2014 ARVO.
Alice Kitty Lagas
The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity in the rat. This effect is related to decreased gamma-aminobutyric acid (GABA) mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA-mediated inhibition following a single dose of triazolam on post-training MDD task performance. Within a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final five days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam, and one week after triazolam. Motor and visual cortex excitability was measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning, and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms.
Lu, Zhong-Lin; Chu, Wilson; Dosher, Barbara Anne; Lee, Sophia
We combined the external noise paradigm, the Perceptual Template Model approach, and transfer tests to investigate the mechanisms and eye-specificity of perceptual learning of Gabor orientation in visual periphery. Coupled with a fixation task, discriminating a 5 from an S in a rapid small character string at fixation, contrast thresholds were estimated for each of eight external noise levels at two performance criteria using 3/1 and 2/1 staircases. Perceptual learning in one eye was measured over 10 practice sessions, followed by five sessions of practice in the new eye to assess transfer. We found that monocular learning improved performance (reduced contrast thresholds) with virtually equal magnitude across a wide range of external noise levels with no significant change in central task performance. Based on measurements of learning effects at two performance criterion levels, we identified a mixture of stimulus enhancement and external noise exclusion as the mechanism of perceptual learning underlying the observed improvements. Perceptual learning in the trained eye generalized completely to the untrained eye. We related the transfer patterns to known physiology and psychophysics on orientation direction coding.
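The 3/1 staircase mentioned in the abstract above can be illustrated with a minimal simulation (the psychometric function, step size, and reversal-averaging rule below are illustrative assumptions, not the authors' exact procedure): three consecutive correct responses lower the stimulus contrast, any single error raises it, so the track converges near roughly the 79%-correct contrast.

```python
import random

random.seed(42)

# Simulated observer: probability correct rises with contrast via a
# simple Weibull-like psychometric function with a 2AFC-style 50% floor.
# threshold and slope are hypothetical.
def p_correct(contrast, threshold=0.2, slope=8.0):
    return 1.0 - 0.5 * 2.0 ** (-((contrast / threshold) ** slope))

contrast, step = 1.0, 0.05
correct_run, reversals, last_dir = 0, [], 0
while len(reversals) < 8:                 # run until 8 direction reversals
    if random.random() < p_correct(contrast):
        correct_run += 1
        if correct_run == 3:              # 3 correct in a row -> harder
            correct_run = 0
            contrast = max(step, contrast - step)
            if last_dir == +1:
                reversals.append(contrast)
            last_dir = -1
    else:                                 # any error -> easier
        correct_run = 0
        contrast += step
        if last_dir == -1:
            reversals.append(contrast)
        last_dir = +1

# Threshold estimate: average the last six reversal contrasts.
estimate = sum(reversals[-6:]) / 6
print(round(estimate, 3))
```

A 2/1 staircase (two correct before a decrement) tracks a lower percent-correct point, which is how running two rules at once yields thresholds at two performance criteria, as in the study.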
Zhang, Jun-Yun; Cong, Lin-Juan; Klein, Stanley A; Levi, Dennis M; Yu, Cong
We investigated whether perceptual learning in adults with amblyopia could be enabled to transfer completely to an orthogonal orientation, which would suggest that amblyopic perceptual learning results mainly from high-level cognitive compensation, rather than plasticity in the amblyopic early visual brain. Nineteen adults (mean age = 22.5 years) with anisometropic and/or strabismic amblyopia were trained following a training-plus-exposure (TPE) protocol. The amblyopic eyes practiced contrast, orientation, or Vernier discrimination at one orientation for six to eight sessions. Then the amblyopic or nonamblyopic eyes were exposed to an orthogonal orientation via practicing an irrelevant task. Training was first performed at a lower spatial frequency (SF), then at a higher SF near the cutoff frequency of the amblyopic eye. Perceptual learning was initially orientation specific. However, after exposure to the orthogonal orientation, learning transferred to the orthogonal orientation completely. Reversing the exposure and training order failed to produce transfer. Initial lower-SF training led to broad improvement of contrast sensitivity, and later higher-SF training led to more specific improvement at high SFs. Training improved visual acuity by 1.5 to 1.6 lines (P […] orientations to enable learning transfer. Therefore, perceptual learning may improve amblyopic vision mainly through rule-based cognitive compensation.
Nittrouer, Susan; Lowenstein, Joanna H
The ability to recognize speech involves sensory, perceptual, and cognitive processes. For much of the history of speech perception research, investigators have focused on the first and third of these, asking how much and what kinds of sensory information are used by normal and impaired listeners, as well as how effective amounts of that information are altered by "top-down" cognitive processes. This experiment focused on perceptual processes, asking what accounts for how the sensory information in the speech signal gets organized. Two types of speech signals processed to remove properties that could be considered traditional acoustic cues (amplitude envelopes and sine wave replicas) were presented to 100 listeners in five groups: native English-speaking (L1) adults, 7-, 5-, and 3-year-olds, and native Mandarin-speaking adults who were excellent second-language (L2) users of English. The L2 adults performed more poorly than L1 adults with both kinds of signals. Children performed more poorly than L1 adults but showed disproportionately better performance for the sine waves than for the amplitude envelopes compared to both groups of adults. Sentence context had similar effects across groups, so variability in recognition was attributed to differences in perceptual organization of the sensory information, presumed to arise from native language experience.
Smayda, Kirsten E; Chandrasekaran, Bharath; Maddox, W Todd
Long-term music training can positively impact speech processing. A recent framework developed to explain such cross-domain plasticity posits that music training-related advantages in speech processing are due to shared cognitive and perceptual processes between music and speech. Although perceptual and cognitive processing advantages due to music training have been independently demonstrated, to date no study has examined perceptual and cognitive processing within the context of a single task. The present study examines the impact of long-term music training on speech learning from a rigorous, computational perspective derived from signal detection theory. Our computational models provide independent estimates of cognitive and perceptual processing in native English-speaking musicians (n = 15, mean age = 25 years) and non-musicians (n = 15, mean age = 23 years) learning to categorize non-native lexical pitch patterns (Mandarin tones). Musicians outperformed non-musicians in this task. Model-based analyses suggested that musicians shifted from simple unidimensional decision strategies to more optimal multidimensional (MD) decision strategies sooner than non-musicians. In addition, musicians used optimal decisional strategies more often than non-musicians. However, musicians and non-musicians who used MD strategies showed no difference in performance. We estimated parameters that quantify the magnitude of perceptual variability along two dimensions that are critical for tone categorization: pitch height and pitch direction. Both musicians and non-musicians showed a decrease in perceptual variability along the pitch height dimension, but only musicians showed a significant reduction in perceptual variability along the pitch direction dimension. Notably, these advantages persisted during a generalization phase, when no feedback was provided. These results provide an insight into the mechanisms underlying the musician advantage observed in non-native speech learning.
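The decision-strategy analysis this abstract describes can be illustrated with a toy simulation (the category means, noise levels, and decision bounds below are hypothetical, not the study's fitted parameters): stimuli are drawn from two categories in a two-dimensional pitch-height × pitch-direction space, and a unidimensional rule that consults pitch height alone is compared against a multidimensional rule that integrates both dimensions.

```python
import random

random.seed(1)

def make_stimuli(n=2000):
    # Two hypothetical tone categories in a 2-D perceptual space
    # (pitch height, pitch direction); Gaussian noise stands in for
    # perceptual variability along each dimension.
    stims = []
    for _ in range(n):
        label = random.randint(0, 1)
        mean = 1.0 if label else 0.0
        stims.append((random.gauss(mean, 0.8), random.gauss(mean, 0.8), label))
    return stims

def unidimensional(height, direction):
    # Decision consults pitch height alone (suboptimal for these categories).
    return 1 if height > 0.5 else 0

def multidimensional(height, direction):
    # Decision integrates both dimensions (optimal for these categories).
    return 1 if height + direction > 1.0 else 0

def accuracy(rule, stims):
    return sum(rule(h, d) == y for h, d, y in stims) / len(stims)

stims = make_stimuli()
acc_ud = accuracy(unidimensional, stims)
acc_md = accuracy(multidimensional, stims)
```

On these synthetic categories the multidimensional bound wins, mirroring the report that musicians who shifted to multidimensional strategies sooner outperformed non-musicians.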
Gaab, Nadine; Paetzold, Miriam; Becker, Markus; Walker, Matthew P; Schlaug, Gottfried
Evidence continues to support a role for sleep in delayed learning without further practice. Here we demonstrate the beneficial influence of sleep on auditory skill learning. Fifty-six subjects were randomly assigned to two groups, trained and tested on a pitch memory task three times across 24 h. The morning group was trained at 09.00 h, retested 12 h later that same day, and again after 12 h that included sleep. The evening group was trained at 21.00 h, retested 12 h later, immediately after sleep, and again 12 h after that, the next day. At retesting, both groups combined showed significant delayed learning only after sleep, but not across equivalent periods of wake, regardless of which came first. These data add to the growing literature describing sleep-dependent learning throughout sensory and motor domains.
Previous research suggests that high functioning children with Autism Spectrum Disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally-based theories account for atypical perceptual category learning shown by high functioning children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children’s performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets.
Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin
Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) at 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% correct, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA negatively correlated with their performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
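The adaptive procedure described in this abstract can be sketched as a conventional 3-down/1-up staircase run against a simulated observer (the logistic observer and every parameter value here are illustrative assumptions, not the study's actual settings); the 3-down/1-up rule converges on roughly 79% correct, close to the 80% target:

```python
import math
import random

random.seed(7)

def p_correct(soa, threshold=120.0, slope=0.03):
    # Hypothetical observer: accuracy grows with SOA along a logistic,
    # from 50% (guessing) toward 100%.
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (soa - threshold)))

def staircase(trials=400, soa=300.0, step=10.0):
    streak, last_dir, reversals = 0, 0, []
    for _ in range(trials):
        if random.random() < p_correct(soa):
            streak += 1
            if streak == 3:              # three correct in a row -> harder
                streak = 0
                if last_dir == +1:       # direction flipped: log a reversal
                    reversals.append(soa)
                last_dir = -1
                soa = max(soa - step, 10.0)
        else:                            # any error -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(soa)
            last_dir = +1
            soa += step
    # Threshold SOA estimate: average of the last few reversal points.
    tail = reversals[-8:]
    return sum(tail) / len(tail)

threshold_soa = staircase()
```

The staircase homes in on the SOA at which the simulated observer is about 79% correct, which is how a per-participant threshold SOA can be tracked during training.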
van Vugt, Floris T; Tillmann, Barbara
Music and speech are skills that require high temporal precision of motor output. A key question is how humans achieve this timing precision given the poor temporal resolution of somatosensory feedback, which is classically considered to drive motor learning. We hypothesise that auditory feedback critically contributes to learning timing, and that, similarly to visuo-spatial learning models, learning proceeds by correcting a proportion of perceived timing errors. Thirty-six participants learned to tap a sequence regularly in time. For participants in the synchronous-sound group, a tone was presented simultaneously with every keystroke. For the jittered-sound group, the tone was presented after a random delay of 10-190 ms following the keystroke, thus degrading the temporal information that the sound provided about the movement. For the mute group, no keystroke-triggered sound was presented. In line with the model predictions, participants in the synchronous-sound group were able to improve tapping regularity, whereas the jittered-sound and mute groups were not. The improved tapping regularity of the synchronous-sound group also transferred to a novel sequence and was maintained when sound was subsequently removed. The present findings provide evidence that humans engage in auditory feedback error-based learning to improve movement quality (here reduce variability in sequence tapping). We thus elucidate the mechanism by which high temporal precision of movement can be achieved through sound in a way that may not be possible with less temporally precise somatosensory modalities. Furthermore, the finding that sound-supported learning generalises to novel sequences suggests potential rehabilitation applications. Copyright © 2015 Elsevier B.V. All rights reserved.
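The error-correction account referred to above reduces to a one-line update: on each tap, the performer cancels a proportion alpha of the currently perceived timing error. A toy simulation (all parameter values are illustrative, not fitted to the study's data) shows why keystroke-synchronous sound should keep tapping variability bounded while its absence lets timing drift:

```python
import random
import statistics

random.seed(3)

def tap_asynchronies(alpha, n=2000, motor_sd=10.0):
    # Timing error e accumulates motor noise on every tap; auditory
    # feedback lets the tapper cancel a proportion alpha of the perceived
    # error on the next tap: e[n+1] = (1 - alpha) * e[n] + noise.
    e, out = 0.0, []
    for _ in range(n):
        e = (1.0 - alpha) * e + random.gauss(0.0, motor_sd)
        out.append(e)
    return out

sd_feedback = statistics.stdev(tap_asynchronies(alpha=0.5))  # synchronous sound
sd_mute = statistics.stdev(tap_asynchronies(alpha=0.0))      # no usable feedback
```

With alpha = 0 the error random-walks without bound, which is the model's account of why the mute group (and the jittered-sound group, whose error signal was degraded) failed to improve.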
Glad, Harold L.
This study evaluates the relationships that exist between three types of visual and perceptual-motor tasks (coincidence-anticipation, tracking with rotary pursuit, and a unique two-dimensional discrete motor task) and investigates the nature of learning demonstrated by the subjects on each of the three tasks. Thirty male students were given 20…
Zhai, Jingjing; Chen, Min; Liu, Lijuan; Zhao, Xuna; Zhang, Hong; Luo, Xiaojie; Gao, Jiahong
To investigate the neuromechanisms of perceptual learning treatment in patients with anisometropic amblyopia using functional MRI (fMRI) and diffusion tensor imaging (DTI) techniques. 20 patients with monocular anisometropic amblyopia participated in the study. Both fMRI and DTI data were acquired for each patient twice: before and after 30 days' perceptual learning treatment for the amblyopic eye. During fMRI scanning, patients viewed the stimuli with either the sound eye or the amblyopic eye. Changes of cortical activation after treatment were evaluated. In the DTI exams, the fractional anisotropy (FA) values, apparent diffusion coefficient (ADC) values, the voxel numbers of optic radiations (ORs), and the number of tracks were compared between the ipsilateral and the contralateral ORs and also between the pre- and post-treatment scans. Markedly increased activation via the amblyopic eyes was found in Brodmann Area (BA) 17-19, bilateral temporal lobes, and right cingulate gyrus after the perceptual learning treatment. No significant changes were found in the FA values, ADC values, voxel numbers, and the number of tracks after the treatment. These results indicate that perceptual learning treatment for amblyopia had a positive effect on the visual cortex and temporal lobe visual areas in patients with anisometropic amblyopia.
Baleghizadeh, Sasan; Shayeghi, Rose
The purpose of the present study is to investigate the relationships between preferences of Multiple Intelligences and perceptual/social learning styles. Two self-report questionnaires were administered to a total of 207 male and female participants. Pearson correlation results revealed statistically significant positive relations between…
Huurneman, B.; Boonstra, F.N.; Goossens, J.
Purpose: To identify predictors of sensitivity to perceptual learning on a computerized, near-threshold letter discrimination task in children with infantile nystagmus (idiopathic IN: n = 18; oculocutaneous albinism accompanied by IN: n = 18). Methods: Children were divided into two age-, acuity-,
Huurneman, Bianca; Boonstra, F. Nienke; Cox, Ralf F. A.; van Rens, Ger; Cillessen, Antonius H. N.
PURPOSE. This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. METHODS. Participants were 45 children with visual impairment and 29 children with normal vision. Children
Witt, Arnaud; Vinter, Annie
There is growing evidence that, faced with a complex environment, participants subdivide the incoming information into small perceptual units, called chunks. Although statistical properties have been identified as playing a key role in chunking, we wanted to determine whether perceptual (repetitions) and positional (initial units) features might provide immediate guidance for the parsing of information into chunks. Children aged 5 and 8 years were exposed to sequences of 3, 4, or 5 colours. Sequence learning was assessed either through an explicit generation test (Experiment 1) or through a recognition test (Experiment 2). Experiment 1 showed that perceptual and positional saliencies benefited learning and that sensitivity to repetitions was age dependent and permitted the formation of longer chunks (trigrams) in the oldest children. Experiment 2 suggested that children became sensitive to perceptual and positional saliencies regardless of age and that both types of saliencies supported the formation of longer chunks in the oldest children. The discussion focuses on the multiple factors intervening in sequence learning and their differential effects as a function of the instructions used at test to assess sequence learning.
Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit
Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the
Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit
Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010, August). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. Poster presented at the 32nd Annual Conference of the Cognitive Science
Richtsmeier, Peter T; Goffman, Lisa
What cognitive mechanisms account for the trajectory of speech sound development, in particular, gradually increasing accuracy during childhood? An intriguing potential contributor is statistical learning, a type of learning that has been studied frequently in infant perception but less often in child speech production. To assess the relevance of statistical learning to developing speech accuracy, we carried out a statistical learning experiment with four- and five-year-olds in which statistical learning was examined over one week. Children were familiarized with and tested on word-medial consonant sequences in novel words. There was only modest evidence for statistical learning, primarily in the first few productions of the first session. This initial learning effect nevertheless aligns with previous statistical learning research. Furthermore, the overall learning effect was similar to an estimate of weekly accuracy growth based on normative studies. The results implicate other important factors in speech sound development, particularly learning via production. Copyright © 2017 Elsevier Inc. All rights reserved.
Weinberger, Norman M
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.
Pena, Jose L; DeBello, William M
The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior.
Queen, Jennifer S.; Nygaard, Lynne C.
Previous research suggests that as listeners become familiar with a speaker's vocal style, they are better able to understand that speaker. This study investigated one possible mechanism by which this talker familiarity benefit arises. Listeners' vowel spaces were measured using a perceptual discrimination test both before and after they were trained to identify a group of speakers by name. Listeners identified either the same speakers whose vowels they discriminated or a different group of speakers. Differences in the learnability and the intelligibility of the two speaking groups were observed. The speaker group that was harder to identify also had vowels that were harder to discriminate. Changes in the listeners' vowel spaces were determined by examining multidimensional scaling solutions of their responses during the discrimination tests. All listeners became better at discriminating vowels. However, only listeners who heard different speakers during identification training and vowel discrimination exhibited a shift in their vowel spaces after training. This suggests that encountering new voices in an unrelated, nonlinguistic task acts to alter the perceptual context and may affect the structure of linguistic representation. Together, these results suggest a link between linguistic and nonlinguistic information in representations for spoken language.
Kattner, Florian; Cochrane, Aaron; Green, C Shawn
The majority of theoretical models of learning consider learning to be a continuous function of experience. However, most perceptual learning studies use thresholds estimated by fitting psychometric functions to independent blocks, sometimes then fitting a parametric function to these block-wise estimated thresholds. Critically, such approaches tend to violate the basic principle that learning is continuous through time (e.g., by aggregating trials into large "blocks" for analysis that each assume stationarity, then fitting learning functions to these aggregated blocks). To address this discrepancy between base theory and analysis practice, here we instead propose fitting a parametric function to thresholds from each individual trial. In particular, we implemented a dynamic psychometric function whose parameters were allowed to change continuously with each trial, thus parameterizing nonstationarity. We fit the resulting continuous time parametric model to data from two different perceptual learning tasks. In nearly every case, the quality of the fits derived from the continuous time parametric model outperformed the fits derived from a nonparametric approach wherein separate psychometric functions were fit to blocks of trials. Because such a continuous trial-dependent model of perceptual learning also offers a number of additional advantages (e.g., the ability to extrapolate beyond the observed data; the ability to estimate performance on individual critical trials), we suggest that this technique would be a useful addition to each psychophysicist's analysis toolkit.
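A minimal, dependency-free version of this idea (the functional forms, parameter values, and coarse grid search below are illustrative stand-ins for the authors' actual model and optimizer) fits a logistic psychometric function whose threshold decays exponentially in continuous trial time, maximizing the likelihood over every individual trial rather than over aggregated blocks:

```python
import math
import random

random.seed(11)

def p_correct(x, thr, slope=0.08, guess=0.5):
    # Logistic psychometric function with a 50% guessing floor (2AFC).
    return guess + (1 - guess) / (1 + math.exp(-slope * (x - thr)))

def threshold(trial, thr0, thr_inf, rate):
    # Threshold changes continuously with each trial (exponential learning).
    return thr_inf + (thr0 - thr_inf) * math.exp(-rate * trial)

# Simulate a learner whose true threshold falls from 80 to 30 over 600 trials.
TRUE = dict(thr0=80.0, thr_inf=30.0, rate=0.01)
trials = []
for t in range(600):
    x = random.uniform(10, 100)                    # tested stimulus intensity
    p = p_correct(x, threshold(t, **TRUE))
    trials.append((t, x, random.random() < p))

def nll(thr0, thr_inf, rate):
    # Negative log-likelihood summed over individual trials (no blocking).
    total = 0.0
    for t, x, correct in trials:
        p = p_correct(x, threshold(t, thr0, thr_inf, rate))
        p = min(max(p, 1e-9), 1 - 1e-9)
        total -= math.log(p if correct else 1 - p)
    return total

# Coarse grid search stands in for a proper optimizer.
best = min(((a, b, r) for a in range(40, 121, 10)
            for b in range(10, 51, 5)
            for r in (0.003, 0.01, 0.03)),
           key=lambda q: nll(*q))
```

The grid point with the lowest negative log-likelihood recovers the simulated learning curve's asymptote; in practice the grid would be replaced by a gradient-based or Bayesian optimizer, and the per-trial formulation also supports extrapolation and estimates at individual critical trials.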
Jones, Scott P; Dwyer, Dominic M
Exposure to complex checkerboards (comprising a common background, e.g., X, with unique features, e.g., A-D, that are placed in particular locations on the background) improves discrimination between them (perceptual learning). Such stimuli have been used previously to probe human perceptual learning but these studies leave open the question of whether the improvement in discrimination is based on the content or location of the unique stimuli. Experiment 1 suggests that perceptual learning produced by exposure to AX and BX transferred to stimuli that had new unique features (e.g., C, D) in the position that had been occupied by A and B during exposure. However, there was no transfer to stimuli that retained A and B as the unique features but moved them to a different location on the background. Experiment 2 replicated the key features of Experiment 1, that is, no transfer of exposure learning based on content but perfect transfer of exposure learning based on location using a design which allowed for independent tests of location- and content-based performance. In both the experiments reported here, superior discrimination between similar stimuli on the basis of exposure can be explained entirely by learning where to look, with no independent effect of learning about particular stimulus features. These results directly challenge the interpretation of practically all prior experiments using the same type of design and stimuli.
Beijer, L. J.; Rietveld, A. C. M.; van Stiphout, A. J. L.
Background: Web based speech training for dysarthric speakers, such as E-learning based Speech Therapy (EST), puts considerable demands on auditory discrimination abilities. Aims: To discuss the development and the evaluation of an auditory discrimination test (ADT) for the assessment of auditory speech discrimination skills in Dutch adult…
Iverson, Paul; Evans, Bronwen G
This study investigated whether individuals with small and large native-language (L1) vowel inventories learn second-language (L2) vowel systems differently, in order to better understand how L1 categories interfere with new vowel learning. Listener groups whose L1 was Spanish (5 vowels) or German (18 vowels) were given five sessions of high-variability auditory training for English vowels, after having been matched to assess their pre-test English vowel identification accuracy. Listeners were tested before and after training in terms of their identification accuracy for English vowels, the assimilation of these vowels into their L1 vowel categories, and their best exemplars for English (i.e., perceptual vowel space map). The results demonstrated that Germans improved more than Spanish speakers, despite the Germans' more crowded L1 vowel space. A subsequent experiment demonstrated that Spanish listeners were able to improve as much as the German group after an additional ten sessions of training, and that both groups were able to retain this learning. The findings suggest that a larger vowel category inventory may facilitate new learning, and support a hypothesis that auditory training improves identification by making the application of existing categories to L2 phonemes more automatic and efficient.
Larcombe, Stephanie J; Kennard, Chris; Bridge, Holly
Repeated practice of a specific task can improve visual performance, but the neural mechanisms underlying this improvement in performance are not yet well understood. Here we trained healthy participants on a visual motion task daily for 5 days in one visual hemifield. Before and after training, we used functional magnetic resonance imaging (fMRI) to measure the change in neural activity. We also imaged a control group of participants on two occasions who did not receive any task training. While in the MRI scanner, all participants completed the motion task in the trained and untrained visual hemifields separately. Following training, participants improved their ability to discriminate motion direction in the trained hemifield and, to a lesser extent, in the untrained hemifield. The amount of task learning correlated positively with the change in activity in the medial superior temporal (MST) area. MST is the anterior portion of the human motion complex (hMT+). MST changes were localized to the hemisphere contralateral to the region of the visual field, where perceptual training was delivered. Visual areas V2 and V3a showed an increase in activity between the first and second scan in the training group, but this was not correlated with performance. The contralateral anterior hippocampus and bilateral dorsolateral prefrontal cortex (DLPFC) and frontal pole showed changes in neural activity that also correlated with the amount of task learning. These findings emphasize the importance of MST in perceptual learning of a visual motion task. Hum Brain Mapp 39:145-156, 2018. © 2017 Wiley Periodicals, Inc. © 2017 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Valt, Christian; Klein, Christoph; Boehm, Stephan G
Repetition priming is a prominent example of non-declarative memory, and it increases the accuracy and speed of responses to repeatedly processed stimuli. Major long-held memory theories posit that repetition priming results from facilitation within perceptual and conceptual networks for stimulus recognition and categorization. Stimuli can also be bound to particular responses, and it has recently been suggested that this rapid response learning, not network facilitation, provides a sound theory of priming of object recognition. Here, we addressed the relevance of network facilitation and rapid response learning for priming of person recognition with a view to advance general theories of priming. In four experiments, participants performed conceptual decisions like occupation or nationality judgments for famous faces. The magnitude of rapid response learning varied across experiments, and rapid response learning co-occurred and interacted with facilitation in perceptual and conceptual networks. These findings indicate that rapid response learning and facilitation in perceptual and conceptual networks are complementary rather than competing theories of priming. Thus, future memory theories need to incorporate both rapid response learning and network facilitation as individual facets of priming. © 2014 The British Psychological Society.
Bezdicek, Ondrej; Stepankova, Hana; Moták, Ladislav; Axelrod, Bradley N; Woodard, John L; Preiss, Marek; Nikolai, Tomáš; Růžička, Evžen; Poreh, Amir
The present study provides normative data stratified by age for the Czech version of the Rey Auditory Verbal Learning Test (RAVLT), derived from a sample of 306 cognitively normal subjects (20-85 years). Participants met strict inclusion criteria (absence of any active or past neurological or psychiatric disorder) and performed within normal limits on other neuropsychological measures. Our analyses revealed significant relationships between most RAVLT indices and age and education. Normative data are provided not only for basic RAVLT scores, but for the first time also for a variety of derived (gained/lost access, primacy/recency effect) and error scores. The study confirmed a logarithmic character of the learning slope and is consistent with other studies. It enables the clinician to evaluate a subject's RAVLT memory performance more precisely on a wide range of indices and can be viewed as a concrete example of the Quantified Process Approach to neuropsychological assessment.
A subset of sensory substitution (SS) devices translates images into sounds in real time using a portable computer, camera, and headphones. Perceptual constancy is the key to understanding both functional and phenomenological aspects of perception with SS. In particular, constancies enable object externalization, which is critical to the performance of daily tasks such as obstacle avoidance and locating dropped objects. In order to improve daily task performance by the blind, and determine if constancies can be learned with SS, we trained blind (N = 4) and sighted (N = 10) individuals on length and orientation constancy tasks for 8 days at about 1 hour per day with an auditory SS device. We found that blind and sighted performance at the constancy tasks significantly improved, and attained constancy performance that was above chance. Furthermore, dynamic interactions with stimuli were critical to constancy learning with the SS device. In particular, improved task learning significantly correlated with the number of spontaneous left-right head-tilting movements while learning length constancy. The improvement from previous head-tilting trials even transferred to a no-head-tilt condition. Therefore, not only can SS learning be improved by encouraging head movement while learning, but head movement may also play an important role in learning constancies in the sighted. In addition, the learning of constancies by the blind and sighted with SS provides evidence that SS may be able to restore vision-like functionality to the blind in daily tasks.
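The image-to-sound translation that such SS devices perform can be sketched in a few lines (a simplified, hypothetical mapping in the spirit of the vOICe, not any particular device's algorithm): the image is scanned column by column over time, each row is assigned a log-spaced frequency with the top of the image highest in pitch, and pixel brightness sets tone amplitude.

```python
def image_to_tones(image, f_lo=200.0, f_hi=4000.0):
    # One column per time slice; row -> log-spaced frequency (top of image
    # = highest pitch), brightness -> amplitude. Parameters are illustrative.
    rows = len(image)
    freqs = [f_lo * (f_hi / f_lo) ** ((rows - 1 - r) / (rows - 1))
             for r in range(rows)]
    timeline = []
    for col in range(len(image[0])):
        timeline.append([(freqs[r], image[r][col])
                         for r in range(rows) if image[r][col] > 0])
    return timeline

# A bright bar tilted up to the right becomes a rising pitch sweep over time.
img = [[0, 0, 0, 1],
       [0, 0, 1, 0],
       [0, 1, 0, 0],
       [1, 0, 0, 0]]
sweep = image_to_tones(img)
```

Hearing the bar's orientation as the direction of the pitch sweep, regardless of viewing distance or camera angle, is the kind of regularity the constancy training described above targets.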
... (aural, visual, kinesthetic). Generally, weak correlations were found between preferred learning modalities and memorization styles with only visual learners tending to prefer visual memorization strategies (r = .34...
Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A
The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
Keller, Warren D; Tillery, Kim L; McFadden, Sandra L
To determine whether children with a nonverbal learning disability (NVLD) have a higher incidence of auditory processing disorder (APD), especially in the tolerance-fading memory type of APD, and what associations could be found between performance on neuropsychological, intellectual, memory, and academic measures and APD. Eighteen children with NVLD ranging in age from 6 to 18 years received a central auditory processing test battery to determine incidence and subtype of APD. Psychological measures for assessment of NVLD included the Wechsler Scales, Wide Range Assessment of Memory and Learning, and Wechsler Individual Achievement Test. Neuropsychological measures included the Category Test, Trails A and B, the Tactual Performance Test, Grooved Pegs, and the Speech Sounds Perception Test. Neuropsychological test scores of the NVLD+APD and NVLD groups were compared using analysis of covariance procedures, with Verbal IQ and Performance IQ as covariates. Sixty-one percent of the children were diagnosed with APD, primarily in the tolerance-fading memory subtype. The group of children with APD and NVLD had significantly lower scores on Verbal IQ, Digit Span, Sentence Memory, Block Design, and Speech Sounds Perception than children without APD. An ancillary finding was that the incidence of attention deficit/hyperactivity disorder was significantly higher in children with NVLD (with and without APD) than in the general population. The results indicate that children with NVLD are at risk for APD and that there are several indicators on neuropsychological assessment suggestive of APD. Collaborative, interdisciplinary evaluation of children with learning disorders is needed in order to provide effective therapeutic interventions.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
The Individually Prescribed Instruction (IPI) Model developed by Bolvin and Glaser (1968) is applied to a perceptual development curriculum for children manifesting learning disabilities. The Model utilizes criterion referenced tests for behavioral objectives in four areas: general motor, visual motor, auditory motor, and integrative. Eight units…
Mandikal Vasuki, Pragati R.; Sharma, Mridula; Ibrahim, Ronny K.; Arciuli, Joanne
Musicians’ brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli—statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task—both using the embedded triplet paradigm. Online measures, characterized by event related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain. PMID:28352223
Tzeng, Christina Y.; Alexander, Jessica E.D.; Sidaras, Sabrina K.; Nygaard, Lynne C.
Foreign-accented speech contains multiple sources of variation that listeners learn to accommodate. Extending previous findings showing that exposure to high-variation training facilitates perceptual learning of accented speech, the current study examines to what extent the structure of training materials affects learning. During training, native adult speakers of American English transcribed sentences spoken in English by native Spanish-speaking adults. In Experiment 1, training stimuli were blocked by speaker, sentence, or randomized with respect to speaker and sentence (Variable training). At test, listeners transcribed novel English sentences produced by Spanish-accented speakers. Listeners’ transcription accuracy was highest in the Variable condition, suggesting that varying both speaker identity and sentence across training trials enabled listeners to generalize their learning to novel speakers and linguistic content. Experiment 2 assessed the extent to which ordering of training tokens by a single factor, speaker intelligibility, would facilitate speaker-independent accent learning, finding that listeners’ test performance did not reliably differ across conditions. Overall, these results suggest that the structure of training exposure, specifically trial-by-trial variation on both speaker’s voice and linguistic content, facilitates learning of the systematic properties of accented speech. The current findings suggest a crucial role of training structure in optimizing perceptual learning. Beyond characterizing the types of variation listeners encode in their representations of spoken utterances, theories of spoken language processing should incorporate the role of training structure in learning lawful variation in speech. PMID:27399829
Wilson, Benjamin; Slater, Heather; Kikuchi, Yukiko; Milne, Alice E; Marslen-Wilson, William D; Smith, Kenny; Petkov, Christopher I
Artificial grammars (AG) are designed to emulate aspects of the structure of language, and AG learning (AGL) paradigms can be used to study the extent of nonhuman animals' structure-learning capabilities. However, different AG structures have been used with nonhuman animals and are difficult to compare across studies and species. We developed a simple quantitative parameter space, which we used to summarize previous nonhuman animal AGL results. This was used to highlight an under-studied AG with a forward-branching structure, designed to model certain aspects of the nondeterministic nature of word transitions in natural language and animal song. We tested whether two monkey species could learn aspects of this auditory AG. After habituating the monkeys to the AG, analysis of video recordings showed that common marmosets (New World monkeys) differentiated between well formed, correct testing sequences and those violating the AG structure based primarily on simple learning strategies. By comparison, Rhesus macaques (Old World monkeys) showed evidence for deeper levels of AGL. A novel eye-tracking approach confirmed this result in the macaques and demonstrated evidence for more complex AGL. This study provides evidence for a previously unknown level of AGL complexity in Old World monkeys that seems less evident in New World monkeys, which are more distant evolutionary relatives to humans. The findings allow for the development of both marmosets and macaques as neurobiological model systems to study different aspects of AGL at the neuronal level.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Ditye, Thomas; Kanai, Ryota; Bahrami, Bahador; Muggleton, Neil G; Rees, Geraint; Walsh, Vincent
Practice-dependent changes in brain structure can occur in task relevant brain regions as a result of extensive training in complex motor tasks and long-term cognitive training but little is known about the impact of visual perceptual learning on brain structure. Here we studied the effect of five days of visual perceptual learning in a motion-color conjunction search task using anatomical MRI. We found rapid changes in gray matter volume in the right posterior superior temporal sulcus, an area sensitive to coherently moving stimuli, that predicted the degree to which an individual's performance improved with training. Furthermore, behavioral improvements were also predicted by volumetric changes in an extended white matter region underlying the visual cortex. These findings point towards quick and efficient plastic neural mechanisms that enable the visual brain to deal effectively with changing environmental demands. Copyright © 2013 Elsevier Inc. All rights reserved.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to detect implicitly statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9-11years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggests that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
Huurneman, Bianca; Boonstra, F Nienke; Goossens, Jeroen
To identify predictors of sensitivity to perceptual learning on a computerized, near-threshold letter discrimination task in children with infantile nystagmus (idiopathic IN: n = 18; oculocutaneous albinism accompanied by IN: n = 18). Children were divided into two age-, acuity-, and diagnosis-matched training groups: a crowded (n = 18) and an uncrowded training group (n = 18). Training consisted of 10 sessions spread out over 5 weeks (grand total of 3500 trials). Baseline performance, age, diagnosis, training condition, and perceived pleasantness of training (training joy) were entered as linear regression predictors of training-induced changes on a single- and a crowded-letter task. An impressive 57% of the variability in improvements of single-letter visual acuity was explained by age, training condition, and training joy. Being older and training with uncrowded letters were associated with larger single-letter visual acuity improvements. More training joy was associated with a larger gain from the uncrowded training and a smaller gain from the crowded training. Fifty-six percent of the variability in crowded-letter task improvements was explained by baseline performance, age, diagnosis, and training condition. After regressing out the variability induced by training condition, baseline performance, and age, perceptual learning proved more effective for children with idiopathic IN than for children with albinism accompanied by IN. Training gains increased with poorer baseline performance in idiopaths, but not in children with albinism accompanied by IN. Age and baseline performance, but not training joy, are important prognostic factors for the effect of perceptual learning in children with IN. However, their predictive value for achieving improvements in single-letter acuity and crowded letter acuity, respectively, differs between diagnostic subgroups and training condition. These findings may help with personalized treatment of individuals likely to benefit
Kellman, Philip J; Massey, Christine M; Son, Ji Y
Learning in educational settings emphasizes declarative and procedural knowledge. Studies of expertise, however, point to other crucial components of learning, especially improvements produced by experience in the extraction of information: perceptual learning (PL). We suggest that such improvements characterize both simple sensory and complex cognitive, even symbolic, tasks through common processes of discovery and selection. We apply these ideas in the form of perceptual learning modules (PLMs) to mathematics learning. We tested three PLMs, each emphasizing different aspects of complex task performance, in middle and high school mathematics. In the MultiRep PLM, practice in matching function information across multiple representations improved students' abilities to generate correct graphs and equations from word problems. In the Algebraic Transformations PLM, practice in seeing equation structure across transformations (but not solving equations) led to dramatic improvements in the speed of equation solving. In the Linear Measurement PLM, interactive trials involving extraction of information about units and lengths produced successful transfer to novel measurement problems and fraction problem solving. Taken together, these results suggest (a) that PL techniques have the potential to address crucial, neglected dimensions of learning, including discovery and fluent processing of relations; (b) PL effects apply even to complex tasks that involve symbolic processing; and (c) appropriately designed PL technology can produce rapid and enduring advances in learning. Copyright © 2009 Cognitive Science Society, Inc.
Roelfsema, Pieter R.; van Ooyen, Arjen; Watanabe, Takeo
How does the brain learn those visual features that are relevant for behavior? In this article, we focus on two factors that guide plasticity of visual representations. First, reinforcers cause the global release of diffusive neuromodulatory signals that gate plasticity. Second, attentional feedback
Ashby, F. Gregory; Vucovich, Lauren E.
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how…
Choudhury, Anustup; Farrell, Suzanne; Atkins, Robin; Daly, Scott
We present an approach to predict overall HDR display quality as a function of key HDR display parameters. We first performed subjective experiments on a high quality HDR display that explored five key HDR display parameters: maximum luminance, minimum luminance, color gamut, bit-depth and local contrast. Subjects rated overall quality for different combinations of these display parameters. We explored two models: a physical model solely based on physically measured display characteristics and a perceptual model that transforms physical parameters using human vision system models. For the perceptual model, we use a family of metrics based on a recently published color volume model (ICtCp), which consists of the PQ luminance non-linearity (ST 2084) and LMS-based opponent color, as well as an estimate of the display point spread function. To predict overall visual quality, we apply linear regression and machine learning techniques such as Multilayer Perceptron, RBF and SVM networks. We use RMSE and Pearson/Spearman correlation coefficients to quantify performance. We found that the perceptual model is better at predicting subjective quality than the physical model and that SVM is better at prediction than linear regression. The significance and contribution of each display parameter was investigated. In addition, we found that combined parameters such as contrast do not improve prediction. Traditional perceptual models were also evaluated and we found that models based on the PQ non-linearity performed better.
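The evaluation pipeline described above (fit a model mapping a display parameter to subjective ratings, then score predictions with RMSE and rank correlation) can be sketched in miniature. The data points, the single log-luminance predictor, and all numbers below are hypothetical stand-ins for illustration, not the study's dataset or its SVM/Perceptron models:

```python
import math

# Hypothetical (display parameter, mean opinion score) pairs:
# log10 of maximum luminance vs. rated quality.
xs = [math.log10(L) for L in (100, 300, 600, 1000, 4000)]
ys = [3.1, 3.8, 4.2, 4.5, 4.9]

# Ordinary least squares for y = a*x + b (the linear-regression baseline).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - a * mx
preds = [a * x + b for x in xs]

# RMSE: average prediction error in opinion-score units.
rmse = math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, ys)) / n)

def ranks(v):
    """Rank of each element (0 = smallest); assumes no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def pearson(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v))
    su = math.sqrt(sum((p - mu) ** 2 for p in u))
    sv = math.sqrt(sum((q - mv) ** 2 for q in v))
    return cov / (su * sv)

# Spearman correlation = Pearson correlation of the ranks.
spearman = pearson(ranks(preds), ranks(ys))
# For this monotone toy data, spearman == 1.0 and rmse is roughly 0.11.
```

A real comparison would fit all five parameters and nonlinear models on held-out data; the two metrics, however, are computed exactly as here.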
Trzcinski, Natalie K; Gomez-Ramirez, Manuel; Hsiao, Steven S.
Continuous training enhances perceptual discrimination and promotes neural changes in areas encoding the experienced stimuli. This type of experience-dependent plasticity has been demonstrated in several sensory and motor systems. Particularly, non-human primates trained to detect consecutive tactile bar indentations across multiple digits showed expanded excitatory receptive fields (RFs) in somatosensory cortex. However, the perceptual implications of these anatomical changes remain undetermined. Here, we trained human participants for nine days on a tactile task that promoted expansion of multi-digit RFs. Participants were required to detect consecutive indentations of bar stimuli spanning multiple digits. Throughout the training regime we tracked participants’ discrimination thresholds on spatial (grating orientation) and temporal tasks on the trained and untrained hands in separate sessions. We hypothesized that training on the multi-digit task would decrease perceptual thresholds on tasks that require stimulus processing across multiple digits, while also increasing thresholds on tasks requiring discrimination on single digits. We observed an increase in orientation thresholds on a single digit. Importantly, this effect was selective for the stimulus orientation and hand used during multi-digit training. We also found that temporal acuity between digits improved across trained digits, suggesting that discriminating the temporal order of multi-digit stimuli can transfer to temporal discrimination of other tactile stimuli. These results suggest that experience-dependent plasticity following perceptual learning improves and interferes with tactile abilities in manners predictive of the task and stimulus features used during training. PMID:27422224
Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro
Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.
Mano, Hiroaki; Yoshida, Wako; Shibata, Kazuhisa; Zhang, Suyi; Koltzenburg, Martin; Kawato, Mitsuo; Seymour, Ben
The location of a sensory cortex for temperature perception remains a topic of substantial debate. Both the parietal-opercular (SII) and posterior insula have been consistently implicated in thermosensory processing, but neither region has yet been identified as the locus of fine temperature discrimination. Using a perceptual learning paradigm in male and female humans, we show improvement in discrimination accuracy for subdegree changes in both warmth and cool detection over 5 d of repetitive training. We found that increases in discriminative accuracy were specific to the temperature (cold or warm) being trained. Using structural imaging to look for plastic changes associated with perceptual learning, we identified symmetrical increases in gray matter volume in the SII cortex. Furthermore, we observed distinct, adjacent regions for cold and warm discrimination, with cold discrimination having a more anterior locus than warm. The results suggest that thermosensory discrimination is supported by functionally and anatomically distinct temperature-specific modules in the SII cortex. SIGNIFICANCE STATEMENT We provide behavioral and neuroanatomical evidence that perceptual learning is possible within the temperature system. We show that structural plasticity localizes to parietal-opercular (SII), and not posterior insula, providing the best evidence to date resolving a longstanding debate about the location of putative "temperature cortex." Furthermore, we show that cold and warm pathways are behaviorally and anatomically dissociable, suggesting that the temperature system has distinct temperature-dependent processing modules. Copyright © 2017 Mano et al.
Młynarski, Wiktor; McDermott, Josh H
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
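The two-layer scheme sketched above (a sparse convolutional code of a spectrogram, then a second layer encoding the time-varying magnitude of first-layer coefficients) can be illustrated with a toy example. The spectrogram, the single kernel, and the threshold below are invented for illustration; the actual model learns a dictionary of kernels from natural stimulus statistics:

```python
# Toy "spectrogram": rows = frequency bins, columns = time frames.
spec = [
    [0.0, 0.9, 0.8, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.1, 0.0],
]

# One 2x2 spectrotemporal kernel: an upward frequency sweep.
kernel = [[1.0, 0.0],
          [0.0, 1.0]]

def layer1(spec, kernel, thresh=0.5):
    """Valid 2-D cross-correlation followed by a sparsifying threshold."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(spec) - kh + 1):
        row = []
        for j in range(len(spec[0]) - kw + 1):
            v = sum(kernel[di][dj] * spec[i + di][j + dj]
                    for di in range(kh) for dj in range(kw))
            row.append(v if v >= thresh else 0.0)  # keep only strong matches
        out.append(row)
    return out

def layer2(coeffs):
    """Time-varying magnitude: total layer-1 activation per time step,
    abstracting away exactly which kernel position fired."""
    return [sum(abs(coeffs[i][j]) for i in range(len(coeffs)))
            for j in range(len(coeffs[0]))]

c1 = layer1(spec, kernel)       # sparse map: nonzero where the sweep occurs
envelope = layer2(c1)           # second-layer summary over time
```

The upward sweep in the toy spectrogram activates the kernel strongly at frames 1-3 and nowhere else, so `c1` is sparse and `envelope` peaks mid-stream; the paper's second layer similarly groups or opposes sets of first-layer features rather than summing them all.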
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Xu, Jingping P; He, Zijiang J; Ooi, Teng Leng
Perceptual learning is an important means for the brain to maintain its agility in a dynamic environment. Top-down focal attention, which selects task-relevant stimuli against competing ones in the background, is known to control and select what is learned in adults. Still unknown is whether the adult brain is able to learn highly visible information beyond the focus of top-down attention. If it is, we should be able to reveal a purely stimulus-driven perceptual learning occurring in functions that are largely determined by the early cortical level, where top-down attention modulation is weak. Such an automatic, stimulus-driven learning mechanism is commonly assumed to operate only in the juvenile brain. We performed perceptual training to reduce sensory eye dominance (SED), a function that taps on the eye-of-origin information represented in the early visual cortex. Two retinal locations were simultaneously stimulated with suprathreshold, dichoptic orthogonal gratings. At each location, monocular cueing triggered perception of the grating images of the weak eye and suppression of the strong eye. Observers attended only to one location and performed orientation discrimination of the gratings seen by the weak eye, while ignoring the highly visible gratings at the second, unattended, location. We found SED was not only reduced at the attended location, but also at the unattended location. Furthermore, other untrained visual functions mediated by higher cortical levels improved. An automatic, stimulus-driven learning mechanism causes synaptic alterations in the early cortical level, with a far-reaching impact on the later cortical levels. Copyright © 2011 Elsevier Ltd. All rights reserved.
Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Campbell, Nicci; Luxon, Linda M
This article reviews the evidence for computer-based auditory training (CBAT) in children with language, reading, and related learning difficulties, and evaluates the extent to which it can benefit children with auditory processing disorder (APD)...
Sanes, Dan H.
displays an increased vulnerability to the sensory environment. Here, we identify a precise developmental window during which mild hearing loss affects the maturation of an auditory perceptual cue that is known to support animal communication, including human speech. Furthermore, animals reared with transient hearing loss display deficits in perceptual learning. Our results suggest that speech and language delays associated with transient or permanent childhood hearing loss may be accounted for, in part, by deficits in central auditory processing mechanisms. PMID:26224865
Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory
The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Salgado, João Vinícius; Malloy-Diniz, Leandro Fernandes; Abrantes, Suzana Silva Costa; Moreira, Lafaiete; Schlottfeldt, Carlos Guilherme; Guimarães, Wanderlane; Freitas, Djeane Marcely Ugoline; Oliveira, Juliana; Fuentes, Daniel
The Rey Auditory-Verbal Learning Test, which is used to evaluate learning and memory, is a widely recognized tool in the general literature on neuropsychology. This paper aims at presenting the performance of Brazilian adult subjects on the Rey Auditory-Verbal Learning Test, and was written after we published a previous study on the performance of Brazilian elderly subjects on this same test. A version of the test, featuring a list of high-frequency one-syllable and two-syllable concrete Portuguese substantives, was developed. Two hundred and forty-three (243) subjects from both genders were allocated to 6 different age groups (20-24; 25-29; 30-34; 35-44; 45-54 and 55-60 years old). They were then tested using the Rey Auditory-Verbal Learning Test. Performance on the Rey Auditory-Verbal Learning Test showed a positive correlation with educational level and a negative correlation with age. Women performed significantly better than men. When applied across similar age ranges, our results were similar to those recorded for the English version of the Rey Auditory-Verbal Learning Test. Our results suggest that the adaptation of the Rey Auditory-Verbal Learning Test to Brazilian Portuguese is appropriate and that it is applicable to Brazilian subjects for memory capacity evaluation purposes and across similar age groups and educational levels.
Camilleri, Rebecca; Pavan, Andrea; Campana, Gianluca
It has recently been demonstrated how perceptual learning, that is, an improvement in a sensory/perceptual task upon practice, can be boosted by concurrent high-frequency transcranial random noise stimulation (hf-tRNS). It has also been shown that perceptual learning can generalize and produce an improvement of visual functions in participants with mild refractive defects. By using three different groups of participants (single-blind study), we tested the efficacy of a short training (8 sessions) using a single Gabor contrast-detection task with concurrent hf-tRNS in comparison with the same training with sham stimulation or hf-tRNS with no concurrent training, in improving visual acuity (VA) and contrast sensitivity (CS) of individuals with uncorrected mild myopia. A short training with a contrast detection task is able to improve VA and CS only if coupled with hf-tRNS, whereas no effect on VA and marginal effects on CS are seen with the sole administration of hf-tRNS. Our results support the idea that, by boosting the rate of perceptual learning via the modulation of neuronal plasticity, hf-tRNS can be successfully used to reduce the duration of the perceptual training and/or to increase its efficacy in producing perceptual learning and generalization to improved VA and CS in individuals with uncorrected mild myopia. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chung, Susana T L
Perceptual learning has been shown to be effective in improving visual functions in the normal adult visual system, as well as in adults with amblyopia. In this study, the feasibility of applying perceptual learning to enhance reading speed in people with long-standing central vision loss was evaluated. Six observers (mean age, 73.8 years) with long-standing central vision loss practiced an oral sentence-reading task, with words presented sequentially using rapid serial visual presentation (RSVP). A pre-test consisted of measurements of visual acuities, RSVP reading speeds for six print sizes, the location of the preferred retinal locus for fixation (fPRL), and fixation stability. Training consisted of six weekly sessions of RSVP reading, with 300 sentences presented per session. A post-test, identical with the pre-test, followed the training. All observers showed improved RSVP reading speed after training. The improvement averaged 53% (range, 34-70%). Comparisons of pre- and post-test measurements revealed little change in visual acuity, critical print size, location of the fPRL, and fixation stability. The specificity of the learning effect, and the lack of changes to the fPRL location and fixation stability, suggest that the improvements are not due to observers adopting a retinal location with better visual capability, or an improvement in fixation. Rather, the improvements are likely to represent genuine plasticity of the visual system despite the older ages of the observers, coupled with long-standing sensory deficits. Perceptual learning might be an effective way of enhancing visual performance for people with central vision loss.
Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian
In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than due to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.
Killian, Nathaniel J; Vurro, Milena; Keith, Sarah B; Kyada, Margee J; Pezaris, John S
Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation.
Tzeng, Christina Y; Alexander, Jessica E D; Sidaras, Sabrina K; Nygaard, Lynne C
Foreign-accented speech contains multiple sources of variation that listeners learn to accommodate. Extending previous findings showing that exposure to high-variation training facilitates perceptual learning of accented speech, the current study examines to what extent the structure of training materials affects learning. During training, native adult speakers of American English transcribed sentences spoken in English by native Spanish-speaking adults. In Experiment 1, training stimuli were blocked by speaker, sentence, or randomized with respect to speaker and sentence (Variable training). At test, listeners transcribed novel English sentences produced by unfamiliar Spanish-accented speakers. Listeners' transcription accuracy was highest in the Variable condition, suggesting that varying both speaker identity and sentence across training trials enabled listeners to generalize their learning to novel speakers and linguistic content. Experiment 2 assessed the extent to which ordering of training tokens by a single factor, speaker intelligibility, would facilitate speaker-independent accent learning, finding that listeners' test performance did not reliably differ from that in the no-training control condition. Overall, these results suggest that the structure of training exposure, specifically trial-to-trial variation on both speaker's voice and linguistic content, facilitates learning of the systematic properties of accented speech. The current findings suggest a crucial role of training structure in optimizing perceptual learning. Beyond characterizing the types of variation listeners encode in their representations of spoken utterances, theories of spoken language processing should incorporate the role of training structure in learning lawful variation in speech. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Stress is a complex biological reaction common to all living organisms that allows them to adapt to their environments. Chronic stress alters the dendritic architecture and function of the limbic brain areas that affect memory, learning, and emotional processing. This review summarizes our research about chronic stress effects on the auditory system, providing the details of how we developed the main hypotheses that currently guide our research. The aims of our studies are to (1) determine how chronic stress impairs the dendritic morphology of the main nuclei of the rat auditory system, the inferior colliculus (auditory mesencephalon), the medial geniculate nucleus (auditory thalamus), and the primary auditory cortex; (2) correlate the anatomic alterations with the impairments of auditory fear learning; and (3) investigate how the stress-induced alterations in the rat limbic system may spread to nonlimbic areas, affecting specific sensory system, such as the auditory and olfactory systems, and complex cognitive functions, such as auditory attention. Finally, this article gives a new evolutionary approach to understanding the neurobiology of stress and the stress-related disorders.
Gobel, Eric W; Blomeke, Kelsey; Zadikoff, Cindy; Simuni, Tanya; Weintraub, Sandra; Reber, Paul J
Implicit skill learning is hypothesized to depend on nondeclarative memory that operates independently of the medial temporal lobe (MTL) memory system and instead depends on corticostriatal circuits between the basal ganglia and cortical areas supporting motor function and planning. Research with the Serial Reaction Time (SRT) task suggests that patients with memory disorders due to MTL damage exhibit normal implicit sequence learning. However, reports of intact learning rely on observations of no group differences, leading to speculation as to whether implicit sequence learning is fully intact in these patients. Patients with Parkinson's disease (PD) often exhibit impaired sequence learning, but this impairment is not universally observed. Implicit perceptual-motor sequence learning was examined using the Serial Interception Sequence Learning (SISL) task in patients with amnestic Mild Cognitive Impairment (MCI; n = 11) and patients with PD (n = 15). Sequence learning in SISL is resistant to explicit learning, and individually adapted task difficulty controls for baseline performance differences. Patients with MCI exhibited robust sequence learning, equivalent to healthy older adults (n = 20), supporting the hypothesis that the MTL does not contribute to learning in this task. In contrast, the majority of patients with PD exhibited no sequence-specific learning in spite of matched overall task performance. Two patients with PD exhibited performance indicative of an explicit compensatory strategy, suggesting that impaired implicit learning may lead to greater reliance on explicit memory in some individuals. The differences in learning between patient groups provide strong evidence in favor of implicit sequence learning depending solely on intact basal ganglia function with no contribution from the MTL memory system.
Lai, Mun Yee; Leung, Frederick Koon Shing
This study investigated the relationship between motor-reduced visual perceptual abilities and visual-motor integration abilities of Chinese learning children by employing the Developmental Test of Visual Perception (Hammill, Pearson, & Voress, 1993), in which both abilities are measured in a single test. A total of 72 native Chinese learners of age 5 participated in this study. The findings indicated that the Chinese learners scored much higher in the visual-motor integration tasks than in motor-reduced visual perceptual tasks. The results support the theory of autonomous systems of motor-reduced visual perception and visual-motor integration, and question current beliefs that the former develops prior to the latter in Chinese learners. To account for the Chinese participants' superior performance in visual-motor integration tasks over motor-reduced visual perceptual tasks, the visual-spatial properties of Chinese characters, general handwriting theories, the motor control theory and the psychogeometric theory of Chinese character-writing are referred to. The significance of the findings is then discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
Cavaco, Sara; Gonçalves, Alexandra; Pinto, Cláudia; Almeida, Eduarda; Gomes, Filomena; Moreira, Inês; Fernandes, Joana; Teixeira-Pinto, Armando
This study aimed to produce adjusted normative data for the Portuguese version of the Auditory Verbal Learning Test (AVLT). The study included 1,068 community-dwelling individuals (736 women, 332 men) aged 18 to 93 years old (mean age = 56 years, SD = 18) who had educational backgrounds ranging from 0 to 24 years (M = 9.8 years, SD = 5.3). The results showed that sex, age, and education were significantly associated with AVLT performance. These demographic characteristics accounted for 24% to 35% of the variance of direct recall trials and for 8% to 39% of the variance of derived recall scores. The normative data for direct and derived recall scores are presented as regression-based algorithms to adjust for sex, age, and education with subsequent correspondence between adjusted scores and percentile distribution. The norms for the recognition correct score are presented as algorithms to estimate the recognition scores for 5th, 10th, and 18th percentiles for each combination of the variables sex, age, and education.
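The regression-based adjustment described above can be sketched in a few lines. This is a hypothetical illustration, not the published algorithm: the coefficients, the sex coding, and the residual SD below are placeholder values, and the published norms fit separate models per score.

```python
# Illustrative sketch of a regression-based normative adjustment of the kind
# described for the Portuguese AVLT norms. All coefficients are placeholders,
# NOT the published values.

def adjusted_score(raw, sex, age, education,
                   b0=30.0, b_sex=2.0, b_age=-0.15, b_edu=0.6, sd_resid=8.0):
    """Return a z-like adjusted score: (raw - demographically predicted) / residual SD.

    sex: 1 for female, 0 for male (illustrative coding).
    """
    predicted = b0 + b_sex * sex + b_age * age + b_edu * education
    return (raw - predicted) / sd_resid
```

The adjusted score can then be mapped onto a percentile table of the normative sample, which is how such norms are typically applied in practice.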
Di Pinto, Marcos; Conklin, Heather M; Li, Chenghong; Xiong, Xiaoping; Merchant, Thomas E
The primary objective of this study was to determine whether children with localized ependymoma experience a decline in verbal or visual-auditory learning after conformal radiation therapy (CRT). The secondary objective was to investigate the impact of age and select clinical factors on learning before and after treatment. Learning in a sample of 71 patients with localized ependymoma was assessed with the California Verbal Learning Test (CVLT-C) and the Visual-Auditory Learning Test (VAL). Learning measures were administered before CRT, at 6 months, and then yearly for a total of 5 years. There was no significant decline on measures of verbal or visual-auditory learning after CRT; however, younger age, more surgeries, and cerebrospinal fluid shunting did predict lower scores at baseline. There were significant longitudinal effects (improved learning scores after treatment) among older children on the CVLT-C and children that did not receive pre-CRT chemotherapy on the VAL. There was no evidence of global decline in learning after CRT in children with localized ependymoma. Several important implications from the findings include the following: (1) identification of and differentiation among variables with transient vs. long-term effects on learning, (2) demonstration that children treated with chemotherapy before CRT had greater risk of adverse visual-auditory learning performance, and (3) establishment of baseline and serial assessment as critical in ascertaining necessary sensitivity and specificity for the detection of modest effects. Copyright 2010 Elsevier Inc. All rights reserved.
Daniel Robert Coates
Several recent studies have shown that perceptual learning can result in improvements in reading speed for people with macular disease (e.g., Chung, 2011; Tarita-Nistor et al., 2014). The improvements were reported as an increase in reading speed defined by specific criteria; however, little is known about how other properties of the reading performance or the participants' perceptual responses change as a consequence of learning. In this paper, we performed detailed analyses of data following perceptual learning using an RSVP (rapid serial visual presentation) reading task, looking beyond the change in reading speed defined by the threshold at a given accuracy on a psychometric function relating response accuracy with word exposure duration. Specifically, we explored the statistical characteristics of the response data to address two specific questions: was there a change in the slope of the psychometric function, and did the improvements in performance occur consistently across different word exposure durations? Our results show that there is a general steepening of the slope of the psychometric function, leading to non-uniform improvements across stimulus levels.
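The threshold-and-slope analysis described above can be illustrated with a minimal psychometric-function fit. This is a sketch under stated assumptions, not the authors' analysis: it assumes a two-parameter logistic relating accuracy to word exposure duration and fits it by a crude grid search rather than maximum likelihood.

```python
import math

def logistic(t, alpha, beta, gamma=0.0):
    # Accuracy as a function of word exposure duration t:
    # gamma = guessing rate, alpha = threshold (duration at the curve's
    # midpoint), beta = spread (smaller beta = steeper slope).
    return gamma + (1.0 - gamma) / (1.0 + math.exp(-(t - alpha) / beta))

def fit_psychometric(durations, accuracies, gamma=0.0):
    """Crude grid-search least-squares fit; returns (alpha, beta)."""
    lo, hi = min(durations), max(durations)
    best = None
    for i in range(101):                      # candidate thresholds
        alpha = lo + i * (hi - lo) / 100.0
        for j in range(200):                  # candidate spreads
            beta = 0.005 * (j + 1)
            err = sum((logistic(t, alpha, beta, gamma) - p) ** 2
                      for t, p in zip(durations, accuracies))
            if best is None or err < best[0]:
                best = (err, alpha, beta)
    return best[1], best[2]
```

Comparing the fitted beta before and after training is one simple way to test for the slope steepening the abstract reports.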
Swan, Kristen; Myers, Emily
Adults tend to perceive speech sounds from their native language as members of distinct and stable categories; however, they fail to perceive differences between many non-native speech sounds without a great deal of training. The present study investigates the effects of categorization training on adults' ability to discriminate non-native phonetic contrasts. It was hypothesized that only individuals who successfully learned the appropriate categories would show selective improvements in discriminating between-category contrasts. Participants were trained to categorize progressively narrow phonetic contrasts across one of two non-native boundaries, with discrimination pre- and post-tests completed to measure the effects of training on participants' perceptual sensitivity. Results suggest that changes in adults' ability to discriminate a non-native contrast depend on their successful learning of the relevant category structure. Furthermore, post-training identification functions show that changes in perceptual categories specifically correspond to their relative placement of the category boundary. Taken together, these results indicate that learning to assign category labels to a non-native speech continuum is sufficient to induce discontinuous perception of between- versus within-category contrasts.
Van Meel, Chayenne; Daniels, Nicky; de Beeck, Hans Op; Baeck, Annelies
During perceptual learning the visual representations in the brain are altered, but the causal role of these changes has not yet been fully characterized. We used transcranial direct current stimulation (tDCS) to investigate the role of higher visual regions in lateral occipital cortex (LO) in perceptual learning with complex objects. We also investigated whether object learning is dependent on the relevance of the objects for the learning task. Participants were trained in two tasks: object recognition using a backward masking paradigm and an orientation judgment task. During both tasks, an object with a red line on top of it was presented in each trial. The crucial difference between both tasks was the relevance of the object: the object was relevant for the object recognition task, but not for the orientation judgment task. During training, half of the participants received anodal tDCS stimulation targeted at the lateral occipital cortex (LO). Afterwards, participants were tested on how well they recognized the trained objects, the irrelevant objects presented during the orientation judgment task and a set of completely new objects. Participants stimulated with tDCS during training showed larger improvements of performance compared to participants in the sham condition. No learning effect was found for the objects presented during the orientation judgment task. To conclude, this study suggests a causal role of LO in relevant object learning, but given the rather low spatial resolution of tDCS, more research on the specificity of this effect is needed. Further, mere exposure is not sufficient to train object recognition in our paradigm.
Transcranial direct current stimulation (tDCS) is attracting increasing interest because of its potential for therapeutic use. While its effects have been investigated mainly with motor and visual tasks, less is known in the auditory domain. Past tDCS studies with auditory tasks demonstrated various behavioural outcomes, possibly due to differences in stimulation parameters or task measurements used in each study. Further research using well-validated tasks is therefore required for clarification of behavioural effects of tDCS on the auditory system. Here, we took advantage of findings from a prior functional magnetic resonance imaging study, which demonstrated that the right auditory cortex is modulated during fine-grained pitch learning of microtonal melodic patterns. Targeting the right auditory cortex with tDCS using this same task thus allowed us to test the hypothesis that this region is causally involved in pitch learning. Participants in the current study were trained for three days while we measured pitch discrimination thresholds using microtonal melodies on each day using a psychophysical staircase procedure. We administered anodal, cathodal, or sham tDCS to three groups of participants over the right auditory cortex on the second day of training during performance of the task. Both the sham and the cathodal groups showed the expected significant learning effect (decreased pitch threshold over the three days of training); in contrast, we observed a blocking effect of anodal tDCS on auditory pitch learning, such that this group showed no significant change in thresholds over the three days. The results support a causal role for the right auditory cortex in pitch discrimination learning.
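The adaptive threshold measurement mentioned above can be sketched with a simple transformed staircase. The rule below is an illustrative 2-down-1-up procedure (which converges near 70.7% correct), not necessarily the exact procedure used in the study; the starting level, step size, and number of reversals are placeholder choices.

```python
def two_down_one_up(respond, start=50.0, step=2.0, n_reversals=8):
    """Simple 2-down-1-up adaptive staircase.

    respond(level) -> True/False for a correct response at the given
    stimulus level (e.g. pitch difference in cents). The threshold
    estimate is the mean of the last six reversal levels.
    """
    level, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:          # two in a row -> make it harder
                correct_streak = 0
                if direction == +1:          # direction changed: a reversal
                    reversals.append(level)
                direction = -1
                level = max(level - step, 0.1)
        else:                                # any error -> make it easier
            correct_streak = 0
            if direction == -1:              # direction changed: a reversal
                reversals.append(level)
            direction = +1
            level += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)
```

With a human listener, `respond` would present a trial at the given pitch difference and score the response; real implementations usually also shrink the step size after the first few reversals.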
De Niear, Matthew A; Gupta, Pranjal B; Baum, Sarah H; Wallace, Mark T
The temporal relationship between auditory and visual cues is a fundamental feature in the determination of whether these signals will be integrated. The temporal binding window (TBW) is a construct that describes the epoch of time during which asynchronous auditory and visual stimuli are likely to be perceptually bound. Recently, a number of studies have demonstrated the capacity for perceptual training to enhance temporal acuity for audiovisual stimuli (i.e., narrow the TBW). These studies, however, have only examined multisensory perceptual learning that develops in response to feedback that is provided when making judgments on simple, low-level audiovisual stimuli (i.e., flashes and beeps). Here we sought to determine if perceptual training was capable of altering temporal acuity for audiovisual speech. Furthermore, we also explored whether perceptual training with simple or complex audiovisual stimuli generalized across levels of stimulus complexity. Using a simultaneity judgment (SJ) task, we measured individuals' temporal acuity (as estimated by the TBW) prior to, immediately following, and one week after four consecutive days of perceptual training. We report that temporal acuity for audiovisual speech stimuli is enhanced following perceptual training using speech stimuli. Additionally, we find that changes in temporal acuity following perceptual training do not generalize across the levels of stimulus complexity in this study. Overall, the results suggest that perceptual training is capable of enhancing temporal acuity for audiovisual speech in adults, and that the dynamics of the changes in temporal acuity following perceptual training differ between simple audiovisual stimuli and more complex audiovisual speech stimuli. Copyright © 2017. Published by Elsevier Inc.
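A common way to quantify the TBW from a simultaneity judgment task is to find the range of stimulus onset asynchronies (SOAs) over which the proportion of "simultaneous" responses exceeds a criterion. The sketch below is one such estimate using linear interpolation; the criterion value is an illustrative assumption, and published analyses typically fit sigmoid functions to each side of the response distribution instead.

```python
def tbw_width(soas, p_simultaneous, criterion=0.75):
    """Estimate the temporal binding window as the width of the SOA range
    where the proportion of 'simultaneous' responses exceeds a criterion.

    soas: sorted SOAs in ms (negative = auditory-leading, by convention).
    p_simultaneous: proportion of 'simultaneous' responses at each SOA.
    """
    def crossing(x0, y0, x1, y1):
        # SOA at which the response proportion crosses the criterion,
        # by linear interpolation between two tested SOAs.
        return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

    left = right = None
    for i in range(len(soas) - 1):
        y0, y1 = p_simultaneous[i], p_simultaneous[i + 1]
        if y0 < criterion <= y1 and left is None:   # rising edge
            left = crossing(soas[i], y0, soas[i + 1], y1)
        if y0 >= criterion > y1:                    # falling edge
            right = crossing(soas[i], y0, soas[i + 1], y1)
    if left is None or right is None:
        raise ValueError("criterion not bracketed by the data")
    return right - left
```

A narrowing of the TBW after training would then show up as a smaller width from post-test data than from pre-test data.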
Maddox, W. Todd; Chandrasekaran, Bharath; Smayda, Kirsten; Yi, Han-Gyol; Koslov, Seth; Beevers, Christopher G.
In vision, an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially dissociable. The reflective system is prefrontally-mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally-mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization, and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we made the prediction that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning, but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that the elevated depressive symptom advantage was due to faster, more accurate, and more frequent use of reflexive category learning strategies in individuals with elevated depressive symptoms. The implications of this work for the dual-process approach to auditory category learning are discussed.
Andrew T Astle
Practice helps improve performance on a variety of visual tasks. Previous studies have shown that the magnitude of these improvements is inversely proportional to initial levels of performance, with subjects who perform more poorly at the start tending to improve most during perceptual training. If initial performance levels determine the absolute magnitude of learning, it follows that equating performance at the start of training should lead to equivalent amounts of learning. Here we test this prediction by comparing learning on an abutting Vernier alignment task with stimuli presented at two retinal eccentricities (5 and 15 deg), equated in terms of either retinal size (unscaled stimuli) or cortical size (scaled stimuli). Prior to learning, unscaled stimuli produced larger alignment thresholds at the more peripheral eccentricity, whereas scaled stimuli produced equivalent alignment thresholds. Consistent with previous work, we found that the magnitude of learning for participants who trained over eight daily sessions with the unscaled stimuli (n=11) was significantly larger at 15 than 5 degrees eccentricity. However, when stimuli were spatially scaled (n=11), we found equivalent amounts of learning at each location. These data suggest differences in the magnitude of learning can be accounted for by differences in the cortical representation of stimuli. Cortical scale may set not only the initial performance level but also the upper limit for the magnitude of performance improvements following training.
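The cortical scaling used to equate stimuli across eccentricity is commonly approximated by the linear cortical magnification rule S(E) = S0 * (1 + E/E2), where E2 is the eccentricity at which stimulus size must double to keep its cortical representation constant. The snippet below sketches this rule under an assumed E2 value; the E2 used in the actual study is not stated here, and published estimates vary by task.

```python
def m_scaled_size(foveal_size, eccentricity, e2=2.5):
    """Stimulus size at a given eccentricity that matches the cortical
    representation of `foveal_size` at the fovea, via the linear rule
    S(E) = S0 * (1 + E / E2). The default E2 of 2.5 deg is an
    illustrative placeholder, not a value from the study.
    """
    return foveal_size * (1.0 + eccentricity / e2)
```

Under this placeholder E2, a stimulus presented at 15 deg would need to be roughly 2.3 times larger than the same stimulus at 5 deg to be cortically equated, which is the kind of size ratio the scaled condition manipulates.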
Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong
Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974
Stothers, Margot; Klein, Perry D
It is not clear from research whether, or to what extent, reading comprehension is impaired in adults who have learning disabilities (LD). The influence of perceptual organization (PO) and phonological awareness (PA) on reading comprehension was investigated. PO and PA are cognitive functions that have been examined in previous research for their roles in nonverbal LD and phonological dyslexia, respectively. Nonverbal tests of PO and non-reading tests of PA were administered to a sample of adults with postsecondary education. Approximately two thirds of the sample had previously been diagnosed as having LD. In a multiple regression analysis, tests of PO and PA were used to predict scores for tests of reading comprehension and mechanics. Despite the nonverbal nature of the perceptual organizational test stimuli, PO strongly predicted reading comprehension. Tests of PA predicted decoding and reading speed. Results were interpreted as supporting the hypothesis that integrative processes usually characterized as nonverbal were nonetheless used by readers with and without disabilities to understand text. The study's findings have implications for understanding the reading of adults with learning disabilities, and the nature of reading comprehension in general.
Banai, Karen; Yifat, Rachel
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
Bieszczad, Kasia M; Weinberger, Norman M
Associative memory for auditory-cued events involves specific plasticity in the primary auditory cortex (A1) that facilitates responses to tones which gain behavioral significance, by modifying representational parameters of sensory coding. Learning strategy, rather than the amount or content of learning, can determine this learning-induced cortical (high order) associative representational plasticity (HARP). Thus, tone-contingent learning with signaled errors can be accomplished either by (1) responding only during tone duration ("tone-duration" strategy, T-Dur), or (2) responding from tone onset until receiving an error signal for responses made immediately after tone offset ("tone-onset-to-error", TOTE). While rats using both strategies achieve the same high level of performance, only those using the TOTE strategy develop HARP, viz., frequency-specific decreased threshold (increased sensitivity) and decreased bandwidth (increased selectivity) (Berlau & Weinberger, 2008). The present study challenged the generality of learning strategy by determining if high motivation dominates in the formation of HARP. Two groups of adult male rats were trained to bar-press during a 5.0 kHz (10 s, 70 dB) tone for a water reward under either high (HiMot) or moderate (ModMot) levels of motivation. The HiMot group achieved a higher level of correct performance. However, terminal mapping of A1 showed that only the ModMot group developed HARP, i.e., increased sensitivity and selectivity in the signal-frequency band. Behavioral analysis revealed that the ModMot group used the TOTE strategy while HiMot subjects used the T-Dur strategy. Thus, type of learning strategy, not level of learning or motivation, is dominant for the formation of cortical plasticity. Copyright 2009 Elsevier Inc. All rights reserved.
Previous behavioral studies have shown that human perceptual learning (PL) occurs not only within active training sessions but also between sessions, when no actual training is conducted. Once acquired, the learning effect can last for a long term, from months to even years, without further training (Karni et al., Nature, 1993). It is not clear, however, whether fast (within-session) and slow (between-session) visual PL involve different neural mechanisms, and whether both contribute to long-term preservation. Recently, by observing the time course of learning-associated ERP changes over a period of six months, we found that fast and slow learning involved distinct ERP changes, which played different roles in the preservation of PL: while the ERP changes associated with fast learning lasted only for a short term (several days), those associated with slow learning were retained for a long term (several months) after training had been stopped. So far we have observed these findings in two distinct visual tasks: line orientation detection (Qu et al., Neuropsychologia, 2010) and a vernier task (Ding et al., in preparation). A general model of PL is proposed based on our findings and the literature.
Heinrich, S P
The idea of compensating for, or even correcting, refractive errors and presbyopia with the help of vision training is not new. For most approaches, however, scientific evidence is insufficient. A currently promoted method is "perceptual learning", which is assumed to improve stimulus processing in the brain. The basic phenomena of perceptual learning have been demonstrated by a multitude of studies, some of which specifically address the case of refractive errors and presbyopia. However, many open questions remain, in particular with respect to the transfer of practice effects to everyday vision. At present, the method should therefore be judged with caution.
Perceptual decision making in which decisions are reached primarily from extracting and evaluating sensory information requires close interactions between the sensory system and decision-related networks in the brain. Uncertainty pervades every aspect of this process and can be considered related to either the stimulus signal or decision criterion. Here, we investigated the learning-induced reduction of both the signal and criterion uncertainty in two perceptual decision tasks based on two Glass pattern stimulus sets. This was achieved by manipulating spiral angle and signal level of radial and concentric Glass patterns. The behavioral results showed that the participants trained with a task based on criterion comparison improved their categorization accuracy for both tasks, whereas the participants who were trained on a task based on signal detection improved their categorization accuracy only on their trained task. We fitted the behavioral data with a computational model that can dissociate the contribution of the signal and criterion uncertainties. The modeling results indicated that the participants trained on the criterion comparison task reduced both the criterion and signal uncertainty. By contrast, the participants who were trained on the signal detection task only reduced their signal uncertainty after training. Our results suggest that the signal uncertainty can be resolved by training participants to extract signals from noisy environments and to discriminate between clear signals, which are evidenced by reduced perception variance after both training procedures. Conversely, the criterion uncertainty can only be resolved by the training of fine discrimination. These findings demonstrate that uncertainty in perceptual decision-making can be reduced with training but that the reduction of different types of uncertainty is task-dependent.
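The dissociation between signal and criterion uncertainty can be illustrated with standard signal detection theory indices. The sketch below is illustrative only, not the authors' fitted model: it computes sensitivity (d') and criterion (c) from hit and false-alarm rates using the Python standard library, with hypothetical example rates.

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Signal-detection indices from hit and false-alarm rates:
    d' = z(H) - z(F) measures sensitivity (signal uncertainty),
    c  = -(z(H) + z(F)) / 2 measures the placement of the criterion."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: training that reduces signal uncertainty raises d',
# while an unbiased observer keeps c near zero.
d_pre, c_pre = sdt_indices(0.70, 0.30)
d_post, c_post = sdt_indices(0.85, 0.15)
```

A model that fits d' and c separately to pre- and post-training data can attribute improvement to reduced signal uncertainty (d' grows) or to a better-placed criterion (c shifts), which is the dissociation the abstract describes.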
One of the variables that influence motor learning is the learner's previous experience, which may provide perceptual and motor elements to be transferred to a novel motor skill. For swimming skills, several motor experiences may prove effective. Purpose. The aim was to analyse the influence of previous experience in playing in water, swimming lessons, and music or dance lessons on learning the breaststroke kick. Methods. The study involved 39 Physical Education students possessing basic swimming skills, but not the breaststroke, who performed 400 acquisition trials followed by 50 retention and 50 transfer trials, during which the stroke index as well as rhythmic and spatial configuration indices were mapped, and who answered a yes/no questionnaire regarding previous experience. Data were analysed by ANOVA (p = 0.05), with effect size (Cohen's d) ≥ 0.8 indicating a large effect. Results. The whole sample improved their stroke index and spatial configuration index, but not their rhythmic configuration index. Although differences between groups were not significant, two types of experience had large practical effects on learning: childhood experience of playing in water showed practically relevant positive effects, whereas lack of experience in any of the three fields hampered the learning process. Conclusions. The results point towards a diverse impact of previous experience with rhythmic activities, swimming lessons, and especially playing in water during childhood, on learning the breaststroke kick.
Radford, Nola T; Tanguma, Jesus; Gonzalez, Marcia; Nericcio, Mary Anne; Newman, Denis G
A case study of DW, an 11-yr. old monolingual, English-speaking boy who exhibits stuttering, language delay, and ADHD is presented. DW experienced only limited improvement during stuttering therapy received in public schools, according to parents and the public school clinician. The purpose of this case study was to assess whether fluency treatment which incorporated Mediated Learning, Delayed Auditory Feedback, and Speech Motor Repatterning would enhance progress. Therapy was delivered in two treatments, with each treatment being 5 wk. of intense therapy, separated by one year. Treatment 1 of combined Mediated Learning and Delayed Auditory Feedback yielded improvement in fluency, judged by parents and the teacher to be clinically significant. The improved fluency was maintained for one year when DW was pretested for participation in Treatment 2, which combined Mediated Learning, Delayed Auditory Feedback, and Speech Motor Repatterning Exercises. As no conclusions are possible, further study is needed.
Diedler, Jennifer; Pietz, Joachim; Brunner, Monika; Hornberger, Cornelia; Bast, Thomas; Rupp, André
We examined basic auditory temporal processing in children with language-based learning problems (LPs) using magnetoencephalography. Auditory-evoked fields of 43 children (27 LP, 16 controls) were recorded while passively listening to 100-ms white noise bursts with temporal gaps of 3, 6, 10 and 30 ms inserted after 5 or 50 ms. The P1m was evaluated by spatio-temporal source analysis. Psychophysical gap-detection thresholds were obtained for the same participants. Thirty-two percent of the LP children were not able to perform the early-gap psychoacoustic task. In addition, LP children displayed a significant delay of the P1m during the early-gap task. These findings provide evidence for a diminished neuronal representation of short auditory stimuli in the primary auditory cortex of LP children.
Matsushita, Masanori; Matsuda, Yasushi; Takeuchi, Hiro-Aki; Satoh, Ryohei; Watanabe, Aiko; Zandbergen, Matthijs A.; Manabe, Kazuchika; Kawashima, Takashi; Bolhuis, Johan J.
Parrots and songbirds learn their vocalizations from a conspecific tutor, much like human infants acquire spoken language. Parrots can learn human words and it has been suggested that they can use them to communicate with humans. The caudomedial pallium in the parrot brain is homologous with that of songbirds, and analogous to the human auditory association cortex, involved in speech processing. Here we investigated neuronal activation, measured as expression of the protein product of the immediate early gene ZENK, in relation to auditory learning in the budgerigar (Melopsittacus undulatus), a parrot. Budgerigar males successfully learned to discriminate two Japanese words spoken by another male conspecific. Re-exposure to the two discriminanda led to increased neuronal activation in the caudomedial pallium, but not in the hippocampus, compared to untrained birds that were exposed to the same words, or were not exposed to words. Neuronal activation in the caudomedial pallium of the experimental birds was correlated significantly and positively with the percentage of correct responses in the discrimination task. These results suggest that in a parrot, the caudomedial pallium is involved in auditory learning. Thus, in parrots, songbirds and humans, analogous brain regions may contain the neural substrate for auditory learning and memory. PMID:22701714
Shenoy, Nandita; Shenoy K, Ashok; U P, Ratnakar
VARK is a questionnaire developed by Neil Fleming (www.vark.learn.com), a teacher and educator in New Zealand, to assess perceptual preferences in learning. V stands for Visual: these students learn best from pictures, graphs and diagrams. A stands for Aural: these students learn best from spoken words, lectures and discussions. R stands for Reading: these students learn best from reading and writing texts. K stands for Kinesthetic: these students learn best when they move their bodies and manipulate things with their own hands. The aim of the present study was to investigate the learning styles among the dental students in our clinical set-up. The VARK questionnaire contains 13 multiple-choice questions with four answer options, each representing one of the four modes of perception. More than one answer may be selected per question, which is necessary for identifying poly-modal modes of perception and learning, but this also poses a psychometric problem when attempting to state a measure of the reliability of the questionnaire. The questionnaire was distributed among 100 students, and filled forms were received from 70. This 70% response rate from the class was markedly above the level required to draw conclusions about student preferences for receiving and processing information. The students spent about 10 minutes of an ordinary lesson filling in the questionnaire. The students' register numbers and names were used in the study and no blinding was practised. We analysed their learning styles against their performance in the university exams. This was a questionnaire-based clinical study. The responses from the students in our University were classified into multi-modal (VARK), tri-modal (VRK, VAK, VAR, ARK), bi-modal (VR, VA, VK, RK) and uni-modal (V, A, R, K) categories. The results showed that
Lee, Geoffrey W.; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G.
Objective. In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment which is based off real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. Approach. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Main results. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model’s function. We show the ability to effectively learn stimulation patterns which mimic the cochlea’s ability to covert acoustic frequencies to neural activity. Time taken to learn effective replication using neural stimulation takes less than 20 min under continuous testing. Significance. These results show the utility of reinforcement learning in the field of neural stimulation. These results can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipients current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
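The closed-loop search described above can be caricatured with a classic n-armed bandit. In the toy sketch below (a hedged illustration, not the authors' modified algorithm), each arm stands for a candidate stimulation pattern and the noisy reward for how closely the evoked response matches the acoustic target; all names and parameter values are hypothetical.

```python
import random

def run_bandit(mean_rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy n-armed bandit with incremental mean estimates.
    Each arm is a candidate stimulation pattern; mean_rewards[i] is the
    (unknown to the agent) quality of pattern i."""
    rng = random.Random(seed)
    n = len(mean_rewards)
    estimates = [0.0] * n          # running reward estimate per arm
    counts = [0] * n
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                          # explore
        else:
            arm = max(range(n), key=estimates.__getitem__)  # exploit
        reward = mean_rewards[arm] + rng.gauss(0.0, 0.1)    # noisy feedback
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return max(range(n), key=estimates.__getitem__)

# Arm 2 (mean reward 0.9) should emerge as the best stimulation pattern.
best_arm = run_bandit([0.2, 0.5, 0.9, 0.4])
```

The point of the sketch is the closed loop: stimulate, observe the evoked response, score it, and bias future choices toward high-scoring patterns.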
Background and Aim: Learning disability is a term that refers to a group of disorders manifesting as listening, reading, writing, or mathematical problems. These children mostly have attention difficulties in the classroom that lead to many learning problems. In this study we aimed to compare the auditory attention of 7- to 9-year-old children with learning disability to an age-matched normal group without learning disability. Methods: Twenty-seven male 7- to 9-year-old students with learning disability and 27 age- and sex-matched normal controls were selected by non-probability simple sampling. In order to evaluate auditory selective and divided attention, Farsi versions of the speech-in-noise and dichotic digits tests were used, respectively. Results: Comparison of mean speech-in-noise scores in both ears of 7- and 8-year-old students in the two groups indicated no significant difference (p > 0.05). Mean scores of 9-year-old controls were significantly higher than those of the cases only in the right ear (p = 0.033). However, no significant difference was observed between mean dichotic digits scores for the right ear of 9-year-old students with and without learning disability (p > 0.05). Moreover, mean scores of 7- and 8-year-old students with learning disability were lower than those of their normal peers in the left ear (p > 0.05). Conclusion: Selective auditory attention is not affected at an optimal signal-to-noise ratio, while divided attention seems to be affected by maturational delay of the auditory system or by central auditory system disorders.
The purpose of this study was to investigate relationships between grade level, perceptual learning style preferences, and language learning strategies among Taiwanese English as a Foreign Language (EFL) students in grades 7 through 9. Three hundred and ninety junior high school students participated in this study. The instruments for data…
Atienza, Mercedes; Cantero, Jose L; Stickgold, Robert
Perceptual learning can develop over extended periods, with slow, at times sleep-dependent, improvement seen several days after training. As a result, performance can become more automatic, that is, less dependent on voluntary attention. This study investigates whether the brain correlates of this enhancement of automaticity are sleep-dependent. Event-related potentials produced in response to complex auditory stimuli were recorded while subjects' attention was focused elsewhere. We report here that following training on an auditory discrimination task, performance continued to improve, without significant further training, for 72 hr. At the same time, several event-related potential components became evident 48-72 hr after training. Posttraining sleep deprivation prevented neither the continued performance improvement nor the slow development of cortical dynamics related to an enhanced familiarity with the task. However, those brain responses associated with the automatic shift of attention to unexpected stimuli failed to develop. Thus, in this auditory learning paradigm, posttraining sleep appears to reduce the voluntary attentional effort required for successful perceptual discrimination by facilitating the intrusion of a potentially meaningful stimulus into one's focus of attention for further evaluation.
Deviant stimuli, violating regularities in a sensory environment, elicit the mismatch negativity (MMN), a response largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule, and therefore departs from previous studies that investigate violations of regularities at different time scales. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of the sequences' statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.
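The prediction that mismatch responses shrink as deviants become more predictable can be mimicked with a toy prediction-error model. The sketch below is a minimal illustration under assumed parameters, not the analysis used in the study: it tracks a leaky running estimate of the deviant probability and reports each deviant's surprisal (-log p), so deviants in a stream where they occur often carry smaller average prediction errors than rare ones.

```python
import math

def deviant_surprisals(sequence, alpha=0.1):
    """sequence: 1 = deviant, 0 = standard. Maintain a leaky estimate p
    of the deviant probability; each deviant's prediction error is its
    surprisal -log(p) at the moment it occurs."""
    p = 0.5
    surprisals = []
    for s in sequence:
        if s:
            surprisals.append(-math.log(p))
        p += alpha * (s - p)   # belief update toward the new observation
    return surprisals

frequent = [0, 0, 1] * 40       # deviant every 3rd tone: predictable
rare = ([0] * 9 + [1]) * 12     # deviant every 10th tone: surprising
mean_freq = sum(deviant_surprisals(frequent)) / 40
mean_rare = sum(deviant_surprisals(rare)) / 12
```

Under this toy rule the frequent (predictable) deviants yield a smaller mean prediction error than the rare ones, qualitatively matching the reported MMN decrease with predictability.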
Alamia, Andrea; Solopchuk, Oleg; D'Ausilio, Alessandro; Van Bever, Violette; Fadiga, Luciano; Olivier, Etienne; Zénon, Alexandre
Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions.
Thomas, Roha M; Kaipa, Ramesh; Ganesh, Attigodu Chandrashekara
The current study aimed to compare the auditory interference control of participants with Learning Disability (LD) to a control group on two versions of an auditory Stroop task. A group of eight children with LD (clinical group) and another group of eight typically developing children (control group) served as participants. All the participants were involved in a semantic and a gender identification-based auditory Stroop task. Each participant was presented with eight different words (10 times) that were pre-recorded by a male and a female speaker. The semantic task required the participants to ignore the speaker's gender and attend to the meaning of the word, and vice versa for the gender identification task. The participants' performance accuracy and reaction time (RT) were measured on both tasks. Control group participants significantly outperformed the clinical group participants on both tasks with regard to performance accuracy as well as RT. The results suggest that children with LD have problems in suppressing irrelevant auditory stimuli and focusing on the relevant auditory stimuli. This can be attributed to the auditory processing problems in these children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Miller, Paul; Wingfield, Arthur
Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing-dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall. PMID:20631822
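The core claim, that weaker and later activity yields less synaptic strengthening, can be sketched with a pair-based STDP rule. The snippet below is an illustrative toy (parameter values and function names are assumptions, not taken from the authors' model): the weight change decays exponentially with the pre-to-post spike lag, so delayed stimulus identification earns less potentiation.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one spike pair under pair-based STDP.
    dt_ms = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, both decaying in |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# A clear stimulus identified 5 ms after the pre-synaptic spike gains more
# weight than a degraded one identified only 40 ms later.
dw_clear = stdp_dw(5.0)
dw_degraded = stdp_dw(40.0)
```

In the paper's framing, the smaller weight change for the slow, weak response is what leaves degraded words with connections too weak to reactivate at recall.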
Ryan, Tamara E.
The purpose of this study was to determine the effects of auditory integration training (AIT) on a component of the executive function of working memory; specifically, to determine if learning preferences might have an interaction with AIT to increase the outcome for some learners. The question asked by this quantitative pretest posttest design is…
Newman-Norlund, R.D.; Frey, S.H.; Petitto, L.A.; Grafton, S.T.
Longitudinal changes in brain activity during second language (L2) acquisition of a miniature finite-state grammar, named Wernickese, were identified with functional magnetic resonance imaging (fMRI). Participants learned either a visual sign language form or an auditory-verbal form to equivalent
Background This paper presents the results of a set of experiments in which we used continuous auditory feedback to augment motor training exercises. This feedback modality is mostly underexploited in current robotic rehabilitation systems, which usually implement only very basic auditory interfaces. Our hypothesis is that properly designed continuous auditory feedback could be used to represent temporal and spatial information that could in turn, improve performance and motor learning. Methods We implemented three different experiments on healthy subjects, who were asked to track a target on a screen by moving an input device (controller) with their hand. Different visual and auditory feedback modalities were envisaged. The first experiment investigated whether continuous task-related auditory feedback can help improve performance to a greater extent than error-related audio feedback, or visual feedback alone. In the second experiment we used sensory substitution to compare different types of auditory feedback with equivalent visual feedback, in order to find out whether mapping the same information on a different sensory channel (the visual channel) yielded comparable effects with those gained in the first experiment. The final experiment applied a continuously changing visuomotor transformation between the controller and the screen and mapped kinematic information, computed in either coordinate system (controller or video), to the audio channel, in order to investigate which information was more relevant to the user. Results Task-related audio feedback significantly improved performance with respect to visual feedback alone, whilst error-related feedback did not. Secondly, performance in audio tasks was significantly better with respect to the equivalent sensory-substituted visual tasks. Finally, with respect to visual feedback alone, video-task-related sound feedback decreased the tracking error during the learning of a novel visuomotor perturbation, whereas
Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael
Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words and her moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.
Ellen de Wit
Presentation at the CPLOL congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [MeSH], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion
Full Text Available In order to develop evidence-based rehabilitation protocols post stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in twenty subjects with chronic hemiparesis, and used a bimanual wrist extension task using a custom-made wrist trainer to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post stroke.
Kim, Yong-Hwan; Kang, Dong-Wha; Kim, Dongho; Kim, Hye-Jin; Sasaki, Yuka; Watanabe, Takeo
Visual perceptual learning (VPL) is defined as long-term improvement in performance on a visual-perception task after visual experiences or training. Early studies have found that VPL is highly specific for the trained feature and location, suggesting that VPL is associated with changes in the early visual cortex. However, the generality of visual skills enhancement attributable to action video-game experience suggests that VPL can result from improvement in higher cognitive skills. If so, experience in real-time strategy (RTS) video-game play, which may heavily involve cognitive skills, may also facilitate VPL. To test this hypothesis, we compared VPL between RTS video-game players (VGPs) and non-VGPs (NVGPs) and elucidated underlying structural and functional neural mechanisms. Healthy young human subjects underwent six training sessions on a texture discrimination task. Diffusion-tensor and functional magnetic resonance imaging were performed before and after training. VGPs performed better than NVGPs in the early phase of training. White-matter connectivity between the right external capsule and visual cortex and neuronal activity in the right inferior frontal gyrus (IFG) and anterior cingulate cortex (ACC) were greater in VGPs than NVGPs and were significantly correlated with RTS video-game experience. In both VGPs and NVGPs, there was task-related neuronal activity in the right IFG, ACC, and striatum, which was strengthened after training. These results indicate that RTS video-game experience, associated with changes in higher-order cognitive functions and connectivity between visual and cognitive areas, facilitates VPL in early phases of training. The results support the hypothesis that VPL does not involve only visual areas. Significance statement: Although early studies found that visual perceptual learning (VPL) is associated with involvement of the visual cortex, generality of visual skills enhancement by action video-game experience
Girls tend to acquire language skills faster than boys do. Furthermore, specific language impairment and dyslexia are diagnosed more often in males than in females. We investigated whether auditory verbal learning skills in boys are inferior to those of girls as a possible cause for gender dependency in language acquisition. In a retrospective study, data from 386 children (245 male, 141 female) age 6 years to 9 years 11 months were investigated. The Auditory Verbal Learning Test (Verbaler Lern- und Merkfähigkeitstest) was administered. After gender, age, and IQ matching, girls showed a small advantage in long-term memory and recognition. Our results are in contrast to findings that suggest superior verbal memory and learning in adult females compared with males.
Leclercq, Virginie; Le Dantec, Christophe C; Seitz, Aaron R
The mechanisms guiding our learning and memory processes are of key interest to human cognition. While much research shows that attention and reinforcement processes help guide the encoding process, there is still much to know regarding how our brains choose what to remember. Recent research on task-irrelevant perceptual learning (TIPL) has found that information presented coincident with important events is better encoded even if participants are not aware of its presence (see Seitz & Watanabe, 2009). However, a limitation of existing studies of TIPL is that they provide little information regarding the depth of encoding supported by pairing a stimulus with a behaviorally relevant event. The objective of this research was to understand the depth of encoding of information that is learned through TIPL. To do so, we adopted a variant of the "remember/know" paradigm, recently reported by Ingram, Mickes, and Wixted (2012), in which multiple confidence levels of both familiar (know) and remember reports are reported (Experiment 1), and in which episodic information is tested (Experiment 2). TIPL was found in both experiments, with higher recognition performance for target-paired than for distractor-paired images. Furthermore, TIPL benefitted both "familiar" and "remember" reports. The results of Experiment 2 indicate that the most confident "remember" response was associated with episodic information, where participants were able to access the location of image presentation for these items. Together, these results indicate that TIPL results in a deep enhancement in the encoding of target-paired information. Copyright © 2013 Elsevier B.V. All rights reserved.
Full Text Available Introduction: Educators of the health care profession (teachers) are committed to preparing future health care providers, but are facing many challenges in transmitting their ever-expanding knowledge to the students. This study was done to focus on different learning styles among dental students. Aim: To assess different learning preferences among dental students. Materials and Methods: This is a descriptive cross-sectional questionnaire study using the visual, auditory, reading-writing, and kinesthetic questionnaire among dental students. Results: A majority (75.8%) of the students preferred a multimodal learning style. Multimodal learning was common among clinical students. There was no statistically significant difference in learning styles in relation to gender (P > 0.05). Conclusion: In the present study, the majority of students preferred a multimodal learning preference. Knowledge about the learning style preferences of different professions can help to enhance the teaching method for the students.
Nicolas Jean Bourguignon
Full Text Available A combination of lexical bias and altered auditory feedback was used to investigate the influence of higher-order linguistic knowledge on the perceptual aspects of speech motor control. Subjects produced monosyllabic real words or pseudo-words containing the vowel [ε] (as in head) under conditions of altered auditory feedback involving a decrease in vowel first formant (F1) frequency. This manipulation had the effect of making the vowel sound more similar to [I] (as in hid), affecting the lexical status of produced words in two Lexical-Change (LC) groups (either changing them from real words to pseudo-words: e.g., less – liss, or pseudo-words to real words: e.g., kess – kiss). Two No-Lexical-Change (NLC) control groups underwent the same auditory feedback manipulation during the production of [ε] real- or pseudo-words, only without any resulting change in lexical status (real words to real words: e.g., mess – miss, or pseudo-words to pseudo-words: e.g., ness – niss). The results from the LC groups indicate that auditory-feedback-based speech motor learning is sensitive to the lexical status of the stimuli being produced, in that speakers tend to keep their acoustic speech outcomes within the auditory-perceptual space corresponding to the task-related side of the word/non-word boundary (real words or pseudo-words). For the NLC groups, however, no such effect of lexical status is observed.
Francis, Alexander L.; Ciocca, Valter; Ma, Lian
In a tonal language, syllabic pitch patterns contribute to lexical meaning. Perceptual assimilation models of cross-language perception predict that speakers of another tonal language should assimilate Cantonese lexical tones to native tonal categories, affecting identification, discrimination, and acquisition. For nontonal language speakers, two possibilities exist. If pitch information is ignored, vowels with different tones should assimilate to the same native category, lowering performance. If tonal information is attended but unused in native categorization, Cantonese tones could be nonassimilable and therefore easily discriminated, and possibly easily identified or learned. Here, native speakers of Mandarin Chinese and American English were trained to identify Cantonese words differing in lexical tone. Discrimination and identification were tested before and after training. Both groups initially performed well on upper register tones (high level, high rising, mid level) and poorly on lower (low falling, low level, low rising). Mandarin listeners improved most at identifying low falling tones; English listeners improved most on low level and low rising tones. Training primarily appeared to improve listeners' ability to make categorical decisions based on direction of pitch change, a feature reportedly under-attended by English speakers, but preferred by Mandarin speakers. [Work supported by research funding from The University of Hong Kong.]
Bernard, Jean-Baptiste; Arunkumar, Amit; Chung, Susana T L
In a previous study, Chung, Legge, and Cheung (2004) showed that training using repeated presentation of trigrams (sequences of three random letters) resulted in an increase in the size of the visual span (number of letters recognized in a glance) and reading speed in the normal periphery. In this study, we asked whether we could optimize the benefit of trigram training on reading speed by using trigrams more specific to the reading task (i.e., trigrams frequently used in the English language) and presenting them according to their frequencies of occurrence in normal English usage and observers' performance. Averaged across seven observers, our training paradigm (4 days of training) increased the size of the visual span by 6.44 bits, with an accompanying 63.6% increase in the maximum reading speed, compared with the values before training. However, these benefits were not statistically different from those of Chung, Legge, and Cheung (2004) using a random-trigram training paradigm. Our findings confirm the possibility of increasing the size of the visual span and reading speed in the normal periphery with perceptual learning, and suggest that the benefits of training on letter recognition and maximum reading speed may not be linked to the types of letter strings presented during training. Copyright © 2012 Elsevier Ltd. All rights reserved.
Croom, Adam M
Aesthetic non-cognitivists deny that aesthetic statements express genuinely aesthetic beliefs and instead hold that they work primarily to express something non-cognitive, such as attitudes of approval or disapproval, or desire. Non-cognitivists deny that aesthetic statements express aesthetic beliefs because they deny that there are aesthetic features in the world for aesthetic beliefs to represent. Their assumption, shared by scientists and theorists of mind alike, was that language-users possess cognitive mechanisms with which to objectively grasp abstract rules fixed independently of human responses, and that cognizers are thereby capable of grasping rules for the correct application of aesthetic concepts without relying on evaluation or enculturation. However, in this article I use Wittgenstein's rule-following considerations to argue that psychological theories grounded upon this so-called objective model of rule-following fail to adequately account for concept acquisition and mastery. I argue that this is because linguistic enculturation, and the perceptual learning that's often involved, influences and enables the mastery of aesthetic concepts. I argue that part of what's involved in speaking aesthetically is to belong to a cultural practice of making sense of things aesthetically, and that it's within a socio-linguistic community, and that community's practices, that such aesthetic sense can be made intelligible.
Chung, Susana T L; Li, Roger W; Silver, Michael A; Levi, Dennis M
Amblyopia is a developmental disorder that results in a wide range of visual deficits. One proven approach to recovering vision in adults with amblyopia is perceptual learning (PL). Recent evidence suggests that neuromodulators can enhance adult plasticity. In this pilot study, we asked whether donepezil, a cholinesterase inhibitor, enhances visual PL in adults with amblyopia. Nine adults with amblyopia were first trained on a low-contrast single-letter identification task while taking a daily dose (5 mg) of donepezil throughout training. Following 10,000 trials of training, participants showed improved contrast sensitivity for identifying single letters. However, the magnitude of improvement was no greater than, and the rate of improvement was slower than, that obtained in a previous study in which six adults with amblyopia were trained using an identical task and protocol but without donepezil (Chung et al., 2012). In addition, we measured transfer of learning effects to other tasks and found that for donepezil, the post-pre performance ratios in both a size-limited (acuity) and a spacing-limited (crowding) task were not significantly different from those found in the previous study without donepezil administration. After an interval of several weeks, six participants returned for a second course of training on identifying flanked (crowded) letters, again with concurrent donepezil administration. Although this task has previously been shown to be highly amenable to PL in adults with amblyopia (Chung et al., 2012; Hussain et al., 2012), only one observer in our study showed significant learning over 10,000 trials of training. Auxiliary experiments showed that the lack of a learning effect on this task during donepezil administration was not due to either the order of training of the two tasks or the use of a sequential training paradigm. Our results reveal that cholinergic enhancement with donepezil during training does not improve or speed up PL of single
Hoffmann, Pablo F.
… One of the key issues when designing such training systems is in the assessment of transfer of learning. In this study we present data on the learning of an auditory task involving sinusoidal amplitude- and frequency-modulated tones. Modulation rate discrimination thresholds were measured during pre-training, training, and post-training stages. During training, listeners were divided into two groups; one group trained on amplitude-modulation rate discrimination and the other group trained on frequency-modulation rate discrimination. Results will be discussed in terms of their implications for training applications by addressing the transfer of learning across carrier frequency, modulation rate, and modulation type.
Cristina F B Murphy
Full Text Available Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 × 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further
Xiong, Ying-Zi; Zhang, Jun-Yun; Yu, Cong
Perceptual learning is often orientation and location specific, which may indicate neuronal plasticity in early visual areas. However, learning specificity diminishes with additional exposure of the transfer orientation or location via irrelevant tasks, suggesting that the specificity is related to untrained conditions, likely because neurons representing untrained conditions are neither bottom-up stimulated nor top-down attended during training. To demonstrate these top-down and bottom-up contributions, we applied a "continuous flash suppression" technique to suppress the exposure stimulus into sub-consciousness, and with additional manipulations to achieve pure bottom-up stimulation or top-down attention with the transfer condition. We found that either bottom-up or top-down influences enabled significant transfer of orientation and Vernier discrimination learning. These results suggest that learning specificity may result from under-activations of untrained visual neurons due to insufficient bottom-up stimulation and/or top-down attention during training. High-level perceptual learning thus may not functionally connect to these neurons for learning transfer.
Full Text Available Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.
Perrier, Pascal; Schwartz, Jean-Luc; Diard, Julien
Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of the auditory feedback were taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of some hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible thanks to the use of a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as reference. Simulations suggest that they are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and where perception involves both auditory and somatosensory pathways. PMID:29357357
Seifert, Ludovic; Boulanger, Jérémie; Orth, Dominic; Davids, Keith
This study investigated how environmental design shapes perceptual-motor exploration, when meta-stable regions of performance are created. Here, we examined how creating meta-stable regions of performance could destabilize pre-existing skills, favoring greater exploration of performance environments, exemplified in this study by climbing surfaces. In this investigation we manipulated hold orientations on an indoor climbing wall to examine how nine climbers explored, learned, and transferred various trunk-rolling motion patterns and hand grasping movements. The learning protocol consisted of four sessions, in which climbers randomly ascended three different routes, as fluently as possible. All three routes were 10.3 m in height and composed of 20 hand-holds at the same locations on an artificial climbing wall; only hold orientations were altered: (i) a horizontal-edge route was designed to afford horizontal hold grasping, (ii) a vertical-edge route afforded vertical hold grasping, and (iii) a double-edge route was designed to afford both horizontal and vertical hold grasping. As a meta-stable condition of performance invites an individual to both exploit his pre-existing behavioral repertoire (i.e., horizontal hold grasping pattern and trunk face to the wall) and explore new behaviors (i.e., vertical hold grasping and trunk side to the wall), it was hypothesized that the double-edge route characterized a meta-stable region of performance. Data were collected from inertial measurement units located on the neck and hip of each climber, allowing us to compute rolling motion referenced to the artificial climbing wall. Information on ascent duration, the number of exploratory and performatory movements for locating hand-holds, and hip path was also observed in video footage from a frontal camera worn by participants. Climbing fluency was assessed by calculating geometric index of entropy. Results showed that the meta-stable condition of performance may have afforded
Farrell, Tara M; Morgan, Amanda; MacDougall-Shackleton, Scott A
In songbirds, early-life environments critically shape song development. Many studies have demonstrated that developmental stress impairs song learning and the development of song-control regions of the brain in males. However, song has evolved through signaller-receiver networks and the effect stress has on the ability to receive auditory signals is equally important, especially for females who use song as an indicator of mate quality. Female song preferences have been the metric used to evaluate how developmental stress affects auditory learning, but preferences are shaped by many non-cognitive factors and preclude the evaluation of auditory learning abilities in males. To determine whether developmental stress specifically affects auditory learning in both sexes, we subjected juvenile European starlings, Sturnus vulgaris, to either an ad libitum or an unpredictable food supply treatment from 35 to 115 days of age. In adulthood, we assessed learning of both auditory and visual discrimination tasks. Females reared in the experimental group were slower than females in the control group to acquire a relative frequency auditory task, and slower than their male counterparts to acquire an absolute frequency auditory task. There was no difference in auditory performance between treatment groups for males. However, on the colour association task, birds from the experimental group committed more errors per trial than control birds. There was no correlation in performance across the cognitive tasks. Developmental stress did not affect all cognitive processes equally across the sexes. Our results suggest that the male auditory system may be more robust to developmental stress than that of females.
Price, Amanda; Shin, Jacqueline C.
The current study examined the contribution of brain areas affected by Parkinson's disease (PD) to sequence learning, with a specific focus on response-related processes, spatial attentional control, and executive functioning. Patients with mild PD, patients with moderate PD, and healthy age-matched participants performed three tasks--a sequence…
Leasa, Marleny; Corebima, Aloysius D.; Ibrohim; Suwono, Hadi
Students have unique ways in managing the information in their learning process. VARK learning styles associated with memory are considered to have an effect on emotional intelligence. This quasi-experimental research was conducted to compare the emotional intelligence among the students having auditory, reading, and kinesthetic learning styles in…
Schiavio, A.; Timmers, R.
The present study investigated the role of motor and audiovisual learning in the memorization of four tonally ambiguous melodies for piano. A total of one hundred and twenty participants divided into three groups - pianists, other musicians (non-pianists), and non-musicians - learned the melodies through either playing them on a keyboard (‘playing condition’), through performing the melodies on a piano without auditory feedback (‘silent playing condition’), through watching a vide...
Scheerer, Nichole E; Tumber, Anupreet K; Jones, Jeffery A
Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands. Copyright © 2016 the American Physiological Society.
Bruna Ferreira Valenzuela de Oliveira
Full Text Available PURPOSE: To analyze auditory-perceptual and acoustic voice parameters in adult stutterers. METHODS: Fifteen male stutterers aged 21 to 41 years (mean 26.6 years), attended at the institution's Speech-Language Pathology Clinical Center between February 2005 and July 2007, were analyzed. The auditory-perceptual parameters assessed were vocal quality, voice type, resonance, vocal tension, speech rate, pneumophonic coordination, vocal attack, and tonal range; the acoustic parameters analyzed were fundamental frequency and its variability during spontaneous speech. RESULTS: The auditory-perceptual analysis showed that the most frequent characteristics among the stutterers were normal vocal quality (60%), altered resonance (66%), vocal tension (86%), altered vocal attack (73%), normal speech rate (54%), altered tonal range (80%), and altered pneumophonic coordination (100%). However, statistical analysis revealed that only vocal tension, altered pneumophonic coordination, and altered tonal range were statistically significant in the stutterers studied. In the acoustic analysis, fundamental frequency ranged from 125.54 to 149.59 Hz, and its variability was 16 to 21 semitones, or 112.50 to 172.40 Hz. CONCLUSION: The auditory-perceptual parameters that were significantly frequent in the stutterers studied were vocal tension and alterations of tonal range and pneumophonic coordination. It is therefore important to evaluate vocal aspects in these patients, since the fluency disorder may compromise some vocal parameters and lead to dysphonia.
Lewandowski, Brian; Vyssotski, Alexei; Hahnloser, Richard H R; Schmidt, Marc
Communication between auditory and vocal motor nuclei is essential for vocal learning. In songbirds, the nucleus interfacialis of the nidopallium (NIf) is part of a sensorimotor loop, along with auditory nucleus avalanche (Av) and song system nucleus HVC, that links the auditory and song systems. Most of the auditory information comes through this sensorimotor loop, with the projection from NIf to HVC representing the largest single source of auditory information to the song system. In addition to providing the majority of HVC's auditory input, NIf is also the primary driver of spontaneous activity and premotor-like bursting during sleep in HVC. Like HVC and RA, two nuclei critical for song learning and production, NIf exhibits behavioral-state dependent auditory responses and strong motor bursts that precede song output. NIf also exhibits extended periods of fast gamma oscillations following vocal production. Based on the converging evidence from studies of physiology and functional connectivity it would be reasonable to expect NIf to play an important role in the learning, maintenance, and production of song. Surprisingly, however, lesions of NIf in adult zebra finches have no effect on song production or maintenance. Only the plastic song produced by juvenile zebra finches during the sensorimotor phase of song learning is affected by NIf lesions. In this review, we carefully examine what is known about NIf at the anatomical, physiological, and behavioral levels. We reexamine conclusions drawn from previous studies in the light of our current understanding of the song system, and establish what can be said with certainty about NIf's involvement in song learning, maintenance, and production. Finally, we review recent theories of song learning integrating possible roles for NIf within these frameworks and suggest possible parallels between NIf and sensorimotor areas that form part of the neural circuitry for speech processing in humans. Copyright © 2013 Elsevier
Sabesan, Ramkumar; Barbot, Antoine; Yoon, Geunyoung
Highly aberrated keratoconic (KC) eyes do not elicit the expected visual advantage from customized optical corrections. This is attributed to the neural insensitivity arising from chronic visual experience with poor retinal image quality, dominated by low spatial frequencies. The goal of this study was to investigate if targeted perceptual learning with adaptive optics (AO) can stimulate neural plasticity in these highly aberrated eyes. The worse eye of 2 KC subjects was trained in a contrast threshold test under AO correction. Prior to training, tumbling 'E' visual acuity and contrast sensitivity at 4, 8, 12, 16, 20, 24 and 28 c/deg were measured in both the trained and untrained eyes of each subject with their routine prescription and with AO correction for a 6 mm pupil. The high spatial frequency that required 50% contrast for detection under AO correction was chosen as the training frequency. Subjects were required to train on a contrast detection test with AO correction for 1 h on 5 consecutive days. During each training session, threshold contrast measurement at the training frequency with AO was conducted. Pre-training measures were repeated after the 5 training sessions in both eyes (i.e., post-training). After training, contrast sensitivity under AO correction improved on average across spatial frequency by a factor of 1.91 (range: 1.77-2.04) and 1.75 (1.22-2.34) for the two subjects. This improvement in contrast sensitivity transferred to visual acuity, with the two subjects improving by 1.5 and 1.3 lines respectively with AO following training. One of the two subjects showed interocular transfer of training and an improvement in performance with their routine prescription post-training. This training-induced visual benefit demonstrates the potential of AO as a tool for neural rehabilitation in patients with abnormal corneas. Moreover, it reveals a sufficient degree of neural plasticity in normally developed adults who have a long history of abnormal visual
Full Text Available Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multimodal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.
needs within the education environment and that many schools are under-supplied in terms of resources and equipment. It is recommended that these teachers receive in-service training on learners’ perceptual-motor development and that the Department of Education should provide schools with resources and equipment to prevent these deficiencies in the education system.
Erdener, Dogu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Gilley, Phillip M; Sharma, Anu; Dorman, Michael; Martin, Kathryn
To examine maturation of the central auditory pathways in children with language-based learning problems (LP). Cortical auditory evoked potentials (CAEPs) recorded from 26 children with LP were compared to CAEPs recorded from 38 typical children. CAEP responses were recorded in response to a speech sound, /uh/, which was presented in a stimulus train with decreasing inter-stimulus intervals (ISIs) of 2000, 1000, 560, and 360 ms. We identified three atypical morphological categories of CAEP responses in the LP group. Category 1 responses revealed delayed P1 latencies and absent N1/P2 components. Category 2 responses revealed typical P1 responses, but delayed N1 and P2 responses. Category 3 responses revealed generally low-amplitude CAEP responses. A fourth sub-group of LP children had normal CAEP responses. Overall, the majority of children with LP had abnormal CAEP responses. These children fell into distinct categories based on the abnormalities in maturational patterns of their CAEP responses. We describe a rate sensitive stimulation paradigm which may be used to identify and categorize LP children who exhibit abnormal patterns of central auditory maturation.
Crockett, D J; Hadjistavropoulos, T; Hurwitz, T
The present study examined the manifestation of the primacy and recency effects in patients with anterior brain damage, posterior brain damage, and psychiatric inpatients with no known organic impairment. All three groups of patients demonstrated both a primacy and a recency effect on the Rey Auditory Verbal Learning Test (RAVLT). Differences among the three groups in the magnitude of primacy and recency, as well as in other variables reflecting free recall, were nonsignificant. These findings limit the use of primacy and recency for the differentiation of memory deficits due to organic and nonorganic causes.
Full Text Available The middle temporal area of the extrastriate visual cortex (area MT) is integral to motion perception and is thought to play a key role in the perceptual learning of motion tasks. We have previously found, however, that perceptual learning of a motion discrimination task is possible even when the training stimulus contains locally balanced, motion opponent signals that putatively suppress the response of MT. Assuming at least partial suppression of MT, possible explanations for this learning are that (1) training made MT more responsive by reducing motion opponency, (2) MT remained suppressed and alternative visual areas such as V1 enabled learning, and/or (3) suppression of MT increased with training, possibly to reduce noise. Here we used fMRI to test these possibilities. We first confirmed that the motion opponent stimulus did indeed suppress the BOLD response within hMT+ compared to an almost identical stimulus without locally balanced motion signals. We then trained participants on motion opponent or non-opponent stimuli. Training with the motion opponent stimulus reduced the BOLD response within hMT+ and greater reductions in BOLD response were correlated with greater amounts of learning. The opposite relationship between BOLD and behaviour was found at V1 for the group trained on the motion-opponent stimulus and at both V1 and hMT+ for the group trained on the non-opponent motion stimulus. As the average response of many cells within MT to motion opponent stimuli is the same as their response to non-directional flickering noise, the reduced activation of hMT+ after training may reflect noise reduction.
Full Text Available Background: Children with a spatial processing disorder (SPD) require a more favorable signal-to-noise ratio in the classroom because they have difficulty perceiving sound source location cues. Previous research has shown that a novel training program - LiSN & Learn - employing spatialized sound, overcomes this deficit. Here we investigate whether improvements in spatial processing ability are specific to the LiSN & Learn training program. Materials and methods: Participants were ten children (aged between 6;0 [years;months] and 9;9) with normal peripheral hearing who were diagnosed as having SPD using the Listening in Spatialized Noise - Sentences test (LiSN-S). In a blinded controlled study, the participants were randomly allocated to train with either the LiSN & Learn or another auditory training program - Earobics - for approximately 15 minutes per day for twelve weeks. Results: There was a significant improvement post-training on the conditions of the LiSN-S that evaluate spatial processing ability for the LiSN & Learn group (p=0.03 to 0.0008, η2=0.75 to 0.95, n=5), but not for the Earobics group (p=0.5 to 0.7, η2=0.1 to 0.04, n=5). Results from questionnaires completed by the participants and their parents and teachers revealed improvements in real-world listening performance post-training were greater in the LiSN & Learn group than the Earobics group. Conclusions: LiSN & Learn training improved binaural processing ability in children with SPD, enhancing their ability to understand speech in noise. Exposure to non-spatialized auditory training does not produce similar outcomes, emphasizing the importance of deficit-specific remediation.
Mohammad-Ali Nikouei Mahani
Full Text Available In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving our associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning associations of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with learning performance on the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes, judging non-congruent pairs to be associated, and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false-alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false-alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of the additional mistakes and the lower reported confidence for non-congruent pairs in the passive learning mode.
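The two-module account above (a reinforcement-learning module plus a memory-decay module) can be caricatured with a delta-rule update and exponential forgetting toward an uncertain prior. This is a minimal sketch under assumed parameter values, not the authors' implementation:

```python
def step(strength, label, lr=0.3, decay=0.1, prior=0.5):
    """One trial of a toy associative learner.
    Memory-decay module: strength drifts back toward an uncertain prior.
    Reinforcement module: delta-rule update toward the feedback label
    (1 = congruent/associated pair, 0 = non-congruent)."""
    strength += decay * (prior - strength)   # forgetting between trials
    strength += lr * (label - strength)      # feedback-driven correction
    return strength

def steady_state(label, lr, decay, trials=200, s=0.5):
    """Association strength after many trials with the same feedback."""
    for _ in range(trials):
        s = step(s, label, lr, decay)
    return s

low = steady_state(0, lr=0.3, decay=0.1)   # slow forgetting
high = steady_state(0, lr=0.3, decay=0.5)  # fast forgetting
# high > low: faster decay keeps a non-congruent pair's strength closer to
# the uncertain prior, so it crosses a yes/"associated" criterion more often.
```

Under these assumed parameters, a larger decay rate leaves the strength of non-congruent pairs further from zero, which is one way a decay module can produce the elevated false-alarm rate described in the abstract.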
Daikoku, Tatsuya; Takahashi, Yuji; Futagami, Hiroko; Tarumoto, Nagayoshi; Yasuda, Hideki
In real-world auditory environments, humans are exposed to overlapping auditory information, such as that made by human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into lower- and higher-fitness groups of 11 each, based on their VO2max values. They were presented with simultaneous auditory sequences, each with a distinct statistical regularity (i.e., statistical learning), while pedaling on a stationary bike and while sitting on the bike at rest. In Experiment 1, they were instructed to attend to one of the two sequences and ignore the other. In Experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In Experiment 1, statistical learning of the ignored sequence during concurrent pedaling was higher in participants with high physical fitness than in those with low fitness, whereas for the attended sequence there was no significant difference between fitness groups. Furthermore, there was no significant effect of physical fitness on learning at rest. In Experiment 2, participants with both high and low physical fitness could intentionally learn the statistics of the two simultaneous sequences in both the exercise and rest sessions. The improvement in physical fitness might facilitate incidental, but not intentional, statistical learning of simultaneous auditory sequences during concurrent physical exercise.
Ohl, Frank W
Rhythmic activity appears in the auditory cortex in both microscopic and macroscopic observables and is modulated by both bottom-up and top-down processes. How this activity serves both types of processes is largely unknown. Here we review studies that have recently improved our understanding of potential functional roles of large-scale global dynamic activity patterns in auditory cortex. The experimental paradigm of auditory category learning allowed critical testing of the hypothesis that global auditory cortical activity states are associated with endogenous cognitive states mediating the meaning associated with an acoustic stimulus rather than with activity states that merely represent the stimulus for further processing. Copyright © 2014. Published by Elsevier Ltd.
Astle, Andrew T.; Webb, Ben S.; McGraw, Paul V.
Background Amblyopia presents early in childhood and affects approximately 3% of western populations. The monocular visual acuity loss is conventionally treated during the “critical periods” of visual development by occluding or penalising the fellow eye to encourage use of the amblyopic eye. Despite the measurable success of this approach in many children, substantial numbers of people still suffer with amblyopia later in life because either they were never diagnosed in childhood, did not respond to the original treatment, the amblyopia was only partially remediated, or their acuity loss returned after cessation of treatment. Purpose In this review, we consider whether the visual deficits of this largely overlooked amblyopic group are amenable to conventional and innovative therapeutic interventions later in life, well beyond the age at which treatment is thought to be effective. Recent findings There is a considerable body of evidence that residual plasticity is present in the adult visual brain and this can be harnessed to improve function in adults with amblyopia. Perceptual training protocols have been developed to optimise visual gains in this clinical population. Results thus far are extremely encouraging: marked visual improvements have been demonstrated, the perceptual benefits transfer to new visual tasks and appear to be relatively enduring. The essential ingredients of perceptual training protocols are being incorporated into video game formats, facilitating home-based interventions. Summary Many studies support perceptual training as a tool for improving vision in amblyopes beyond the critical period. Should this novel form of treatment stand up to the scrutiny of a randomised controlled trial, clinicians may need to re-evaluate their therapeutic approach to adults with amblyopia. PMID:21981034
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations that are noisy or that occurred in the past, even though these are inevitable, constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach.
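For reference, the quantity the spiking network is said to approximate, the free energy of a restricted Boltzmann machine, has a standard closed form. A sketch with arbitrary hypothetical sizes and random weights, not the paper's spiking implementation:

```python
import numpy as np

def rbm_free_energy(v, W, a, b):
    """Free energy of a binary RBM with visible vector v, weight matrix W
    (n_v x n_h), visible bias a, and hidden bias b:
        F(v) = -a.v - sum_j log(1 + exp(b_j + (v W)_j))
    Lower free energy corresponds to higher unnormalized probability of v."""
    hidden_term = np.logaddexp(0.0, b + v @ W).sum()   # stable log(1 + e^x)
    return -float(a @ v) - float(hidden_term)

# Hypothetical sizes and random parameters, purely for illustration.
rng = np.random.default_rng(0)
n_v, n_h = 12, 6
W = rng.normal(scale=0.1, size=(n_v, n_h))
a = np.zeros(n_v)
b = np.zeros(n_h)
v = rng.integers(0, 2, size=n_v).astype(float)
F = rbm_free_energy(v, W, a, b)
```

In the free-energy-based approach to value approximation, the negative free energy of a state-action configuration serves as the value estimate that the network learns to shape.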
Li, Roger W; Tran, Truyet T; Craven, Ashley P; Leung, Tsz-Wing; Chat, Sandy W; Levi, Dennis M
Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system analysing the visual image formed on the retina in a parallel manner. However, little is known about the potential 'cross-talk' among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum. The bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency. Stereoacuity training is most beneficial when trained with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes 'beyond-the-plateau'. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations.
Dent, Micheal L.
Birds have proven to be ideal models for the perceptual organization of complex sounds because, like humans, they produce, learn, and use complex acoustic signals for communication. Although conducted in laboratory settings, measures of auditory abilities in birds are usually designed to parallel the acoustic problems faced in their natural habitats, including the location of conspecifics, discrimination among potential mates, prey localization, predator avoidance, and territorial defense. As a result, there is probably more known about hearing in birds under both natural and laboratory conditions than in any other nonhuman organism. Behavioral and/or physiological experiments on complex sound perception in birds have revealed that they exhibit serial pattern perception, can discriminate frequency changes in tones embedded within tonal patterns regardless of stimulus uncertainty conditions, segregate signals into auditory streams, and exhibit comodulation masking release. In addition, binaural experiments have revealed that birds exhibit both the cocktail party effect and the precedence effect. Taken together, these results suggest that, like humans, auditory scene analysis plays a general role in auditory perception in birds and probably other animals that must parse the world into auditory objects. [Work supported by NIH DC006124.]
Moav-Scheff, Ronny; Yifat, Rachel; Banai, Karen
Sensitivity to perceptual context (anchoring) has been suggested to contribute to the development of both oral- and written-language skills, but studies of this idea in children have been rare. To determine whether deficient anchoring contributes to the phonological memory and word learning deficits of children with specific language impairment (SLI). 84 preschool children with and without SLI participated in the study. Anchoring to repeated items was evaluated in two tasks - a phonological memory task and a pseudo-word learning task. Compared to children with typical development, children with SLI had poorer phonological memory spans and learned fewer words during the word learning task. In both tasks the poorer performance of children with SLI reflected a smaller effect of anchoring that was manifested in a smaller effect of item repetition on performance. Furthermore, across the entire sample anchoring was significantly correlated with performance in vocabulary and grammar tasks. These findings are consistent with the hypothesis that anchoring contributes to language skills and that children with SLI have impaired anchoring, although further studies are required to determine the role of anchoring in language development. Copyright © 2015 Elsevier Ltd. All rights reserved.
Kapatsinski, Vsevolod; Olejarczuk, Paul; Redford, Melissa A.
We report on rapid perceptual learning of intonation contour categories in adults and 9- to 11-year-old children. Intonation contours are temporally extended patterns, whose perception requires temporal integration and therefore poses significant working memory challenges. Both children and adults form relatively abstract representations of…
Tierney, Mary C.; And Others
Thirty-eight elderly control subjects performed better than did 18 patients with moderate Alzheimer's disease (AD), 33 with severe AD, and 12 with Parkinson's dementia on all measures of the Rey Auditory Verbal Learning Test. Results indicate that the test is useful in distinguishing AD from Parkinson's dementia. (SLD)
Full Text Available In the last several decades a number of studies on perceptual learning in early infancy have suggested that even infants seem to be sensitive to the way objects move and interact in the world. In order to explain the early emergence of infants’ sensitivity to causal patterns in the world, some psychologists have proposed that core knowledge of objects and causal relations is innate (Leslie & Keeble, 1987; Carey & Spelke, 1994; Keil, 1995; Spelke et al., 1994). The goal of this paper is to examine the nativist developmental model by investigating the criteria that a mechanistic model needs to fulfill if it is to be explanatory. Craver (2006) put forth a number of such criteria and developed a few very useful distinctions between explanation sketches and proper mechanistic explanations. By applying these criteria to the nativist developmental model I aim to show, firstly, that nativists only partially characterize the phenomenon at stake, without giving us the details of when and under which conditions perception and attention in early infancy take place. Secondly, nativists start off with a description of the phenomena to be explained (even if it is only a partial description) but import into it a particular theory of perception that requires further empirical evidence and further defense on its own. Furthermore, I argue that innate knowledge is a good candidate for a filler term (a term used to name still-unknown processes and parts of the mechanism) and is likely to become redundant. Recent extensive research on early intermodal perception indicates that the mechanism enabling the perception of regularities and causal patterns in early infancy is grounded in our neurophysiology. However, this mechanism is fairly basic and does not involve highly sophisticated cognitive structures or innate core knowledge. I conclude with a remark that a closer examination of the mechanisms involved in early perceptual learning indicates that the nativism
Altieri, Nicholas; Stevenson, Ryan; Wallace, Mark T.; Wenger, Michael J.
The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time (RT) and hazard function measure known as capacity (e.g., Townsend and Ashby, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy, but more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning with neural based measures, namely mean Global Field Power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain. PMID:24276220
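The capacity measure cited above compares the integrated hazard of the redundant audiovisual condition to the sum of the unisensory integrated hazards, C(t) = H_AV(t) / (H_A(t) + H_V(t)). A rough empirical sketch from RT samples (hypothetical data; the guard against log(0) is an implementation convenience, not part of the definition):

```python
import numpy as np

def integrated_hazard(rts, t):
    """Empirical integrated hazard H(t) = -log S(t), where S(t) is the
    empirical survivor function of a sample of reaction times."""
    s = (np.asarray(rts, dtype=float) > t).mean()
    return -np.log(max(s, 1e-12))   # guard against log(0) in small samples

def capacity(rt_av, rt_a, rt_v, t):
    """Capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t)).
    C(t) > 1 indicates supercapacity (faster than the independent-channels
    benchmark); C(t) < 1 indicates limited capacity."""
    return integrated_hazard(rt_av, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t))
```

Applied to hypothetical pre- and post-training RT samples, an increase in C(t) with practice would correspond to the capacity increase the authors report as learning proceeds.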
Strait, Dana L; Kraus, Nina
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians' neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments.
Dana L Strait
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker’s voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and nonmusicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not nonmusicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work from our laboratory documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development of language-related skills, musical training may aid in the prevention, habilitation and remediation of children with a wide range of attention-based language and learning impairments.
Bell, Brittany A.; Phan, Mimi L.; Vicario, David S.
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response t...
Liebel, Spencer W; Nelson, Jason M
We investigated the auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprised the four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprised a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001. In the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity or clinical control groups.
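The auditory-versus-visual comparisons reported above are paired-samples t tests. A minimal stdlib sketch with hypothetical standard scores (not the study's data) shows the shape of the analysis:

```python
import math
import statistics

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom
    for two equal-length lists of scores from the same participants."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # t = mean difference / standard error of the differences
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical standard scores: auditory vs. visual working memory
auditory = [88, 92, 85, 90, 95, 87, 91, 89]
visual = [97, 99, 93, 96, 104, 95, 100, 98]
t, df = paired_t(auditory, visual)
print(f"t({df}) = {t:.2f}")
```

A large negative t, as produced here, corresponds to the abstract's pattern of weaker auditory than visual working memory in the learning disability group.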
Kähne, Thilo; Richter, Sandra; Kolodziej, Angela; Smalla, Karl-Heinz; Pielot, Rainer; Engler, Alexander; Ohl, Frank W; Dieterich, Daniela C; Seidenbecher, Constanze; Tischmeyer, Wolfgang; Naumann, Michael; Gundelfinger, Eckart D
Learning and memory processes are accompanied by rearrangements of synaptic protein networks. While various studies have demonstrated the regulation of individual synaptic proteins during these processes, much less is known about the complex regulation of synaptic proteomes. Recently, we reported that auditory discrimination learning in mice is associated with a relative down-regulation of proteins involved in the structural organization of synapses in various brain regions. Aiming at the identification of biological processes and signaling pathways involved in auditory memory formation, here, a label-free quantification approach was utilized to identify regulated synaptic junctional proteins and phosphoproteins in the auditory cortex, frontal cortex, hippocampus, and striatum of mice 24 h after the learning experiment. Twenty proteins, including postsynaptic scaffolds, actin-remodeling proteins, and RNA-binding proteins, were regulated in at least three brain regions pointing to common, cross-regional mechanisms. Most of the detected synaptic proteome changes were, however, restricted to individual brain regions. For example, several members of the Septin family of cytoskeletal proteins were up-regulated only in the hippocampus, while Septin-9 was down-regulated in the hippocampus, the frontal cortex, and the striatum. Meta analyses utilizing several databases were employed to identify underlying cellular functions and biological pathways. Data are available via ProteomeExchange with identifier PXD003089. How does the protein composition of synapses change in different brain areas upon auditory learning? We unravel discrete proteome changes in mouse auditory cortex, frontal cortex, hippocampus, and striatum functionally implicated in the learning process. We identify not only common but also area-specific biological pathways and cellular processes modulated 24 h after training, indicating individual contributions of the regions to memory processing. © 2016 The
Keith A. Hawkins
Practice effects in memory testing complicate the interpretation of score changes over repeated testing, particularly in clinical applications. Consequently, several alternative forms of the Auditory Verbal Learning Test (AVLT) have been developed. Studies of these typically indicate that the forms examined are equivalent. However, the implication that the forms in the literature are interchangeable must be tempered by several caveats. Few studies of equivalence have been undertaken; most are restricted to the comparison of single pairs of forms, and the pairings vary across studies. These limitations are exacerbated by the minimal overlap across studies in the variables reported, or in the analyses of equivalence undertaken. The data generated by these studies are nonetheless valuable, as significant practice effects result from serial use of the same form. The available data on alternative AVLT forms are summarized, and recommendations regarding form development and the determination of form equivalence are offered.
Lavoie, Monica; Bherer, Louis; Joubert, Sven; Gagnon, Jean-François; Blanchet, Sophie; Rouleau, Isabelle; Macoir, Joël; Hudon, Carol
The aim of this study was to establish normative data for the Rey Auditory Verbal Learning Test, a test assessing verbal episodic memory, in the older French-Quebec population. A total of 432 French-speaking participants aged between 55 and 93 years old, from the Province of Quebec (Canada), were included in the study. Using multiple regression analyses, normative data were developed for five variables of interest, namely scores on trial 1, sum of trials 1 to 5, interference list B, immediate recall of list A, and delayed recall of list A. Results showed that age, education, and sex were associated with performance on all variables. Equations to calculate the expected score for a participant based on sex, age, and education level, as well as the Z score, were developed. This study provides clinicians with normative data that take into account the participants' sociodemographic characteristics, thus giving a more accurate interpretation of the results.
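Regression-based norms of this kind yield an expected score from demographics and a Z score from the spread of the residuals. The following sketch uses entirely hypothetical coefficients to illustrate the mechanics; the published equations must be consulted for actual clinical norms:

```python
def expected_score(age, education, sex, coef):
    """Predicted test score from a normative regression equation:
    b0 + b1*age + b2*education + b3*sex (sex coded 0 = male, 1 = female)."""
    b0, b_age, b_edu, b_sex = coef
    return b0 + b_age * age + b_edu * education + b_sex * sex

def z_score(observed, age, education, sex, coef, sd_residual):
    """Standardized deviation of an observed score from the normative expectation."""
    return (observed - expected_score(age, education, sex, coef)) / sd_residual

# Hypothetical coefficients for 'sum of trials 1 to 5' (NOT the published norms)
coef = (60.0, -0.35, 0.9, 3.0)
z = z_score(observed=45, age=70, education=12, sex=1, coef=coef, sd_residual=7.5)
print(round(z, 2))
```

A Z score near zero means the participant performs as expected for their age, education, and sex; strongly negative values flag possible impairment.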
Kalish, Michael L.; Newell, Ben R.; Dunn, John C.
It is sometimes supposed that category learning involves competing explicit and procedural systems, with only the former reliant on working memory capacity (WMC). In 2 experiments participants were trained for 3 blocks on both filtering (often said to be learned explicitly) and condensation (often said to be learned procedurally) category…
Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, the recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The main limitations of current ASR are mainly evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers’ peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compare it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to recover motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy almost always outperforms all other strategies where only acoustics is taken into account.
Barton, Christine; Robbins, Amy McConkey
Musical experiences are a valuable part of the lives of children with cochlear implants (CIs). In addition to the pleasure, relationships and emotional outlet provided by music, it serves to enhance or 'jumpstart' other auditory and cognitive skills that are critical for development and learning throughout the lifespan. Musicians have been shown to be 'better listeners' than non-musicians with regard to how they perceive and process sound. A heuristic model of music therapy is reviewed, including six modulating factors that may account for the auditory advantages demonstrated by those who participate in music therapy. The integral approach to music therapy is described along with the hybrid approach to pediatric language intervention. These approaches share the characteristics of placing high value on ecologically valid therapy experiences, i.e., engaging in 'real' music and 'real' communication. Music and language intervention techniques used by the authors are presented. It has been documented that children with CIs consistently have lower music perception scores than do their peers with normal hearing (NH). On the one hand, this finding matters a great deal because it provides parameters for setting reasonable expectations and highlights the work still required to improve signal processing with the devices so that they more accurately transmit music to CI listeners. On the other hand, the finding might not matter much if we assume that music, even in its less-than-optimal state, functions for CI children, as for NH children, as a developmental jumpstarter, a language-learning tool, a cognitive enricher, a motivator, and an attention enhancer.
Canevari, Claudia; Badino, Leonardo; D'Ausilio, Alessandro; Fadiga, Luciano; Metta, Giorgio
Classical models of speech consider an antero-posterior distinction between perceptive and productive functions. However, the selective alteration of neural activity in speech motor centers, via transcranial magnetic stimulation, was shown to affect speech discrimination. On the automatic speech recognition (ASR) side, the recognition systems have classically relied solely on acoustic data, achieving rather good performance in optimal listening conditions. The main limitations of current ASR are mainly evident in the realistic use of such systems. These limitations can be partly reduced by using normalization strategies that minimize inter-speaker variability by either explicitly removing speakers' peculiarities or adapting different speakers to a reference model. In this paper we aim at modeling a motor-based imitation learning mechanism in ASR. We tested the utility of a speaker normalization strategy that uses motor representations of speech and compare it with strategies that ignore the motor domain. Specifically, we first trained a regressor through state-of-the-art machine learning techniques to build an auditory-motor mapping, in a sense mimicking a human learner that tries to reproduce utterances produced by other speakers. This auditory-motor mapping maps the speech acoustics of a speaker into the motor plans of a reference speaker. Since, during recognition, only speech acoustics are available, the mapping is necessary to “recover” motor information. Subsequently, in a phone classification task, we tested the system on either one of the speakers that was used during training or a new one. Results show that in both cases the motor-based speaker normalization strategy slightly but significantly outperforms all other strategies where only acoustics is taken into account. PMID:23818883
Background and Aim: Auditory memory plays an important role in developing language skills and learning. The aim of the present study was to assess the auditory verbal memory and learning performance of 18-30 year old healthy adults using the Persian version of the Rey Auditory-Verbal Learning Test (RAVLT). Methods: This descriptive, cross-sectional study was conducted on seventy 18-30 year old healthy females with a mean age of 23.2 years and a standard deviation (SD) of 2.4 years. Different aspects of memory, like immediate recall, delayed recall, recognition, forgetting rate, interference and learning, were assessed using the Persian version of the RAVLT. Results: The mean score increased from 8.94 (SD=1.91) on the first trial to 13.70 (SD=1.18) on the fifth trial. The total learning mean score was 12.19 (SD=1.08), and the mean learning rate was 4.76. Mean scores of the participants on the delayed recall and recognition trials were 13.47 (SD=1.2) and 14.72 (SD=0.53), respectively. The proactive and retroactive interference scores were 0.86 and 0.96, respectively. The forgetting rate score was 1.01 and the retrieval score was 0.90. Conclusion: The auditory-verbal memory and learning performance of healthy Persian-speaking females was similar to the performance of the same population in other countries. Therefore, the Persian version of the RAVLT is valid for assessment of memory function in the Persian-speaking female population.
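Several of the derived RAVLT indices above follow directly from the trial scores. A sketch of one common scoring convention follows; trial 1, trial 5, and delayed recall use the group means reported in the abstract, while trials 2-4 and the list B score are hypothetical fill-ins, and scoring conventions for interference and retention vary across labs:

```python
def ravlt_summary(trials, list_b, delayed):
    """Derived RAVLT indices under one common scoring convention."""
    t1, t5 = trials[0], trials[-1]
    return {
        "total_learning": sum(trials) / len(trials),  # mean of trials 1-5
        "learning_rate": t5 - t1,                     # gain from trial 1 to trial 5
        "proactive_interference": list_b / t1,        # list B relative to first exposure
        "retention": delayed / t5,                    # delayed recall vs. best learning
    }

# Trials 2-4 and the list B score are hypothetical fill-ins
scores = ravlt_summary(trials=[8.94, 11.90, 12.90, 13.51, 13.70],
                       list_b=7.7, delayed=13.47)
print(round(scores["learning_rate"], 2))
```

With the reported trial 1 and trial 5 means, the learning rate reproduces the abstract's value of 4.76.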
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.
Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.
Raviv, Limor; Arnon, Inbal
Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5-12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw. © 2017 John Wiley & Sons Ltd.
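The statistical learning tasks referenced above typically rest on transitional probabilities between adjacent elements, which are high within "words" and dip at word boundaries. A minimal sketch with a hypothetical syllable stream (the words and syllables are illustrative, not stimuli from any cited study):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """P(next | current) for each adjacent pair:
    TP(x -> y) = count(x followed by y) / count(x)."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(x, y): c / first_counts[x] for (x, y), c in pair_counts.items()}

# Familiarization stream built from two hypothetical trisyllabic 'words'
words = [["bi", "da", "ku"], ["pa", "go", "la"]]
random.seed(1)
stream = []
for _ in range(200):
    stream.extend(random.choice(words))

tps = transitional_probabilities(stream)
print(tps[("bi", "da")])            # within-word transition: always 1.0
print(round(tps[("ku", "pa")], 2))  # word-boundary transition: well below 1.0
```

Learners who track these probabilities can segment the continuous stream into its two recurring words, which is the core computation probed by both the auditory and visual SL tasks.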
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Pace-Schott, Edward F; Spencer, Rebecca M C
Improvements in motor sequence learning come about via goal-based learning of the sequence of visual stimuli and muscle-based learning of the sequence of movement responses. In young adults, consolidation of goal-based learning is observed after intervals of sleep but not following wake, whereas consolidation of muscle-based learning is greater following intervals with wake compared to sleep. While the benefit of sleep on motor sequence learning has been shown to decline with age, how sleep contributes to consolidation of goal-based vs. muscle-based learning in older adults (OA) has not been disentangled. We trained young (n = 62) and older (n = 50) adults on a motor sequence learning task and re-tested learning following 12 h intervals containing overnight sleep or daytime wake. To probe consolidation of goal-based learning of the sequence, half of the participants were re-tested in a configuration in which the stimulus sequence was the same but, due to a shift in stimulus-response mapping, the movement response sequence differed. To probe consolidation of muscle-based learning, the remaining participants were tested in a configuration in which the stimulus sequence was novel, but now the sequence of movements used for responding was unchanged. In young adults, there was a significant condition (goal-based vs. muscle-based learning) by interval (sleep vs. wake) interaction, F(1,58) = 6.58, p = 0.013: goal-based learning tended to be greater following sleep compared to wake, t(29) = 1.47, p = 0.072. Conversely, muscle-based learning was greater following wake than sleep, t(29) = 2.11, p = 0.021. Unlike young adults, this interaction was not significant in OA, F(1,46) = 0.04, p = 0.84, nor was there a main effect of interval, F(1,46) = 1.14, p = 0.29. Thus, OA do not preferentially consolidate sequence learning over wake or sleep.
Cantwell, George; Crossley, Matthew J; Ashby, F Gregory
Virtually all current theories of category learning assume that humans learn new categories by gradually forming associations directly between stimuli and responses. In information-integration category-learning tasks, this purported process is thought to depend on procedural learning implemented via dopamine-dependent cortical-striatal synaptic plasticity. This article proposes a new, neurobiologically detailed model of procedural category learning that, unlike previous models, does not assume associations are made directly from stimulus to response. Rather, the traditional stimulus-response (S-R) models are replaced with a two-stage learning process. Multiple streams of evidence (behavioral, as well as anatomical and fMRI) are used as inspiration for the new model, which synthesizes evidence of multiple distinct cortical-striatal loops into a neurocomputational theory. An experiment is reported to test a priori predictions of the new model that: (1) recovery from a full reversal should be easier than learning new categories equated for difficulty, and (2) reversal learning in procedural tasks is mediated within the striatum via dopamine-dependent synaptic plasticity. The results confirm the predictions of the new two-stage model and are incompatible with existing S-R models.
Hsiao, Janet Hui-Wen
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than the number of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition. Copyright © 2011 Elsevier Inc. All rights reserved.
McCoy, Thomasin E; Conrad, Amy L; Richman, Lynn C; Nopoulos, Peg C; Bell, Edward F
The purpose of this study was to evaluate immediate auditory and visual memory processes in learning disability subtypes of 40 children born preterm. Three subgroups of children were examined: (a) a primary language disability group (n = 13), (b) a perceptual-motor disability group (n = 14), and (c) a group without an identified language or perceptual-motor learning disability (n = 13). Between-group comparisons indicated no significant differences in immediate auditory or visual memory performance between the language and perceptual-motor learning disability groups. Within-group comparisons revealed that both learning disability groups performed significantly lower on a task of immediate memory when the mode of stimulus presentation and mode of response were visual.
Mandarin Chinese lexical tones pose difficulties for non-native speakers whose first languages contrast or do not contrast lexical tones. In this study, both tone language and non-tone language speaking learners of Mandarin Chinese were trained for three weeks to identify the four Mandarin lexical tones. One group took the production training with both visual and audio feedback using Kay Sona Speech II software. The target tones produced by native Mandarin speakers were played back through a pair of headphones, and the pitch contours of the target tones were displayed in the top window of the computer screen to be compared with the trainees' productions, which appeared in real time in the bottom window. Another group of participants took the perceptual training only, with four-way forced-choice identification tasks with immediate feedback. The same training tokens were used in both training modes. Pretest and posttest data in perception and production were collected from both groups and compared for effectiveness of the training procedures.
Phan, Mimi L.; Vicario, David S.
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. PMID:25475353
François, Clément; Schön, Daniele
There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
Ravan, Maryam; Reilly, James P; Trainor, Laurel J; Khodayari-Rostamabad, Ahmad
To develop a high performance machine learning (ML) approach for predicting the age and consequently the state of brain development of infants, based on their event related potentials (ERPs) in response to an auditory stimulus. The ERP responses of twenty-nine 6-month-olds, nineteen 12-month-olds and 10 adults to an auditory stimulus were derived from electroencephalogram (EEG) recordings. The most relevant wavelet coefficients corresponding to the first- and second-order moment sequences of the ERP signals were then identified using a feature selection scheme that made no a priori assumptions about the features of interest. These features are then fed into a classifier for determination of age group. We verified that ERP data could yield features that discriminate the age group of individual subjects with high reliability. A low dimensional representation of the selected feature vectors show significant clustering behavior corresponding to the subject age group. The performance of the proposed age group prediction scheme was evaluated using the leave-one-out cross validation method and found to exceed 90% accuracy. This study indicates that ERP responses to an acoustic stimulus can be used to predict the age and consequently the state of brain development of infants. This study is of fundamental scientific significance in demonstrating that a machine classification algorithm with no a priori assumptions can classify ERP responses according to age and with further work, potentially provide useful clues in the understanding of the development of the human brain. A potential clinical use for the proposed methodology is the identification of developmental delay: an abnormal condition may be suspected if the age estimated by the proposed technique is significantly less than the chronological age of the subject. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
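The leave-one-out cross-validation procedure described above holds out each subject in turn, trains on the rest, and scores the held-out prediction. A minimal sketch follows, using a nearest-centroid stand-in for the classifier; the 2-D features and labels are hypothetical toy data, whereas the study used wavelet-derived ERP features and a more elaborate feature-selection scheme:

```python
def nearest_centroid_predict(train, test_x):
    """Assign test_x to the class whose training-set feature centroid is closest."""
    by_label = {}
    for x, label in train:
        by_label.setdefault(label, []).append(x)
    # Per-class centroids: mean of each feature dimension
    means = {lab: [sum(col) / len(xs) for col in zip(*xs)]
             for lab, xs in by_label.items()}
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(means, key=lambda lab: sq_dist(means[lab], test_x))

def loocv_accuracy(data):
    """Leave-one-out cross-validation: hold out each subject exactly once."""
    hits = 0
    for i, (x, label) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += nearest_centroid_predict(train, x) == label
    return hits / len(data)

# Hypothetical 2-D ERP feature summaries per age group (perfectly separable toy data)
data = [([0.9, 1.1], "6mo"), ([1.0, 1.0], "6mo"), ([1.1, 0.9], "6mo"),
        ([2.0, 2.1], "12mo"), ([2.1, 1.9], "12mo"), ([1.9, 2.0], "12mo"),
        ([3.9, 4.1], "adult"), ([4.0, 4.0], "adult"), ([4.1, 3.9], "adult")]
print(loocv_accuracy(data))
```

Because each held-out subject never contributes to its own training set, LOOCV accuracy is an honest estimate of generalization, which is how the study supports its reported >90% figure.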
Kraus, Nina; Slater, Jessica; Thompson, Elaine C.; Hornickel, Jane; Strait, Dana L.; Nicol, Trent; White-Schwoch, Travis
The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the
Abrahamse, E.L.; van der Lubbe, Robert Henricus Johannes; Verwey, Willem B.
Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained
Caras, Melissa L.; Sanes, Dan H.
Sensory pathways display heightened plasticity during development, yet the perceptual consequences of early experience are generally assessed in adulthood. This approach does not allow one to identify transient perceptual changes that may be linked to the central plasticity observed in juvenile animals. Here, we determined whether a brief period of bilateral auditory deprivation affects sound perception in developing and adult gerbils. Animals were reared with bilateral earplugs, either from ...
Lalonde, Kaylah; Holt, Rachael Frush
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Moradi, Elaheh; Hallikainen, Ilona; Hänninen, Tuomo; Tohka, Jussi
Rey's Auditory Verbal Learning Test (RAVLT) is a powerful neuropsychological tool for testing episodic memory, which is widely used for the cognitive assessment in dementia and pre-dementia conditions. Several studies have shown that an impairment in RAVLT scores closely reflects the underlying pathology caused by Alzheimer's disease (AD), thus making RAVLT an effective early marker to detect AD in persons with memory complaints. We investigated the association between RAVLT scores (RAVLT Immediate and RAVLT Percent Forgetting) and the structural brain atrophy caused by AD. The aim was to comprehensively study to what extent the RAVLT scores are predictable based on structural magnetic resonance imaging (MRI) data using machine learning approaches as well as to find the most important brain regions for the estimation of RAVLT scores. For this, we built a predictive model to estimate RAVLT scores from gray matter density via an elastic net penalized linear regression model. The proposed approach provided highly significant cross-validated correlation between the estimated and observed RAVLT Immediate (R = 0.50) and RAVLT Percent Forgetting (R = 0.43) in a dataset consisting of 806 AD, mild cognitive impairment (MCI) or healthy subjects. In addition, the selected machine learning method provided more accurate estimates of RAVLT scores than the relevance vector regression used earlier for the estimation of RAVLT based on MRI data. The top predictors were medial temporal lobe structures and amygdala for the estimation of RAVLT Immediate and angular gyrus, hippocampus and amygdala for the estimation of RAVLT Percent Forgetting. Further, the conversion of MCI subjects to AD within 3 years could be predicted based on either observed or estimated RAVLT scores with an accuracy comparable to MRI-based biomarkers.
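The elastic net used above combines an L1 penalty (which zeroes out uninformative voxels or regions) with an L2 penalty (which stabilizes correlated predictors). The following is a minimal coordinate-descent sketch of that objective, with no intercept and toy two-feature data; a real analysis of gray matter maps would use a tuned, vectorized solver:

```python
import random

def soft_threshold(z, t):
    """Shrink z toward zero by t (the L1 proximal operator)."""
    return z - t if z > t else z + t if z < -t else 0.0

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, iters=200):
    """Coordinate descent for (1/2n)||y - Xw||^2
    + alpha*(l1_ratio*||w||_1 + (1 - l1_ratio)/2*||w||_2^2); no intercept."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i]
                                 - sum(X[i][k] * w[k] for k in range(p))
                                 + X[i][j] * w[j]) for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n + alpha * (1 - l1_ratio)
            w[j] = soft_threshold(rho, alpha * l1_ratio) / z
    return w

# Toy data: the outcome depends only on the first "region"; the second is noise.
random.seed(1)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(60)]
y = [2.0 * row[0] + random.gauss(0, 0.1) for row in X]
w = elastic_net(X, y)
print(w)  # w[0] close to 2 (slightly shrunk by the penalty); w[1] near 0
```

The shrinkage of the informative coefficient and the suppression of the noise coefficient illustrate why elastic net is attractive for identifying "top predictor" regions: it performs estimation and feature selection in one step.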
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
The auditory system transforms patterns of sound energy into perceptual objects but the precise definition of an ‘auditory object’ is much debated. In the context of music listening, Pierre Schaeffer argued that ‘sound objects’ are the fundamental perceptual units in ‘musical objects’. In this paper, I review recent neurocognitive research suggesting that the auditory system is sensitive to structural information about real-world objects. Instead of focusing solely on perceptual sound features as determinants of auditory objects, I propose that real-world object properties are inherent…
Processes of motor control and learning in sports, as well as in motor rehabilitation, are based on perceptual functions and emergent motor representations. Here a new method of movement sonification is described which is designed to engage the auditory system more comprehensively in motor perception and thereby enhance motor learning. Usually silent features of the cyclic movement pattern "indoor rowing" are sonified in real time to make them additionally available to the auditory system while the movement is executed. Via real-time sonification, movement perception can be enhanced in terms of temporal precision and multi-channel integration. Beyond the contribution of a single perceptual channel to motor perception and motor representation, mechanisms of multisensory integration can also be addressed if movement sonification is configured adequately: multimodal motor representations, consisting of at least visual, auditory and proprioceptive components, can be shaped subtly, resulting in more precise motor control and enhanced motor learning.
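A real-time movement sonification of the kind described above amounts to a mapping from a movement parameter stream to an audio parameter. The sketch below maps a normalized velocity signal to pitch; the parameter choices (220-880 Hz, 8 kHz sample rate) are illustrative assumptions, not the cited system's actual mapping:

```python
import math

def sonify(velocity, sample_rate=8000, f_min=220.0, f_max=880.0):
    """Map a normalized movement-velocity stream (values in [0, 1]) to a tone
    whose pitch rises with velocity.  Phase accumulation keeps the waveform
    continuous when the instantaneous frequency changes."""
    phase, samples = 0.0, []
    for v in velocity:
        freq = f_min + (f_max - f_min) * min(max(v, 0.0), 1.0)
        phase += 2.0 * math.pi * freq / sample_rate
        samples.append(math.sin(phase))
    return samples

# One simulated stroke cycle: velocity ramps up and back down,
# so the pitch glides from f_min up toward f_max and returns.
cycle = [abs(math.sin(math.pi * i / 4000)) for i in range(8000)]
audio = sonify(cycle)
print(len(audio))  # one second of audio at the chosen sample rate
```

Accumulating phase rather than computing `sin(2*pi*f*t)` directly is the key design choice: it avoids audible clicks when the movement parameter, and hence the frequency, changes from sample to sample.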
Koedijker, J.M.; Poolton, J.M.; Maxwell, J.P.; Oudejans, R.R.D.; Beek, P.J.; Masters, R.S.W.
We sought to gain more insight into the effects of attention focus and time constraints on skill learning and performance in novices and experts by means of two complementary experiments using a table tennis paradigm. Experiment 1 showed that skill-focus conditions and slowed ball frequency
McAuley, J Devin; Henry, Molly J; Wedd, Alan; Pleskac, Timothy J; Cesario, Joseph
Two experiments investigated the effects of musicality and motivational orientation on auditory category learning. In both experiments, participants learned to classify tone stimuli that varied in frequency and duration according to an initially unknown disjunctive rule; feedback involved gaining points for correct responses (a gains reward structure) or losing points for incorrect responses (a losses reward structure). For Experiment 1, participants were told at the start that musicians typically outperform nonmusicians on the task, and then they were asked to identify themselves as either a "musician" or a "nonmusician." For Experiment 2, participants were given either a promotion focus prime (a performance-based opportunity to gain entry into a raffle) or a prevention focus prime (a performance-based criterion that needed to be maintained to avoid losing an entry into a raffle) at the start of the experiment. Consistent with a regulatory-fit hypothesis, self-identified musicians and promotion-primed participants given a gains reward structure made more correct tone classifications and were more likely to discover the optimal disjunctive rule than were musicians and promotion-primed participants experiencing losses. Reward structure (gains vs. losses) had inconsistent effects on the performance of nonmusicians, and a weaker regulatory-fit effect was found for the prevention focus prime. Overall, the findings from this study demonstrate a regulatory-fit effect in the domain of auditory category learning and show that motivational orientation may contribute to musician performance advantages in auditory perception.
Gabay, Yafit; Karni, Avi; Banai, Karen
Speech perception can improve substantially with practice (perceptual learning) even in adults. Here we compared the effects of four training protocols that differed in whether and how task difficulty was changed during a training session, in terms of the gains attained and the ability to apply (transfer) these gains to previously un-encountered items (tokens) and to different talkers. Participants trained in judging the semantic plausibility of sentences presented as time-compressed speech and were tested on their ability to reproduce, in writing, the target sentences; trial-by-trial feedback was afforded in all training conditions. In two conditions task difficulty (low or high compression) was kept constant throughout the training session, whereas in the other two conditions task difficulty was changed in an adaptive manner (incrementally from easy to difficult, or using a staircase procedure). Compared to a control group (no training), all four protocols resulted in significant post-training improvement in the ability to reproduce the trained sentences accurately. However, training in the constant-high-compression protocol elicited the smallest gains in deciphering and reproducing trained items and in reproducing novel, untrained, items after training. Overall, these results suggest that training procedures that start off with relatively little signal distortion ("easy" items, not far removed from standard speech) may be advantageous compared to conditions wherein severe distortions are presented to participants from the very beginning of the training session.
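The adaptive staircase procedure mentioned in the abstract above can be sketched generically. The simulated listener and the specific parameters below are hypothetical, chosen only to show the convergence behavior:

```python
def staircase(respond, start_level, step, n_trials, down=2, up=1):
    """Generic n-down/m-up adaptive staircase.  `respond(level)` returns True
    for a correct trial.  With down=2, up=1 the track converges near the
    70.7%-correct point of the psychometric function."""
    level, run_correct, run_wrong, track = start_level, 0, 0, []
    for _ in range(n_trials):
        track.append(level)
        if respond(level):
            run_correct, run_wrong = run_correct + 1, 0
            if run_correct == down:
                level, run_correct = level - step, 0   # make the task harder
        else:
            run_wrong, run_correct = run_wrong + 1, 0
            if run_wrong == up:
                level, run_wrong = level + step, 0     # make the task easier
    return track

# Simulated listener who is always correct above a hidden threshold of 40
# (think of `level` as percent of the original speech duration).
track = staircase(lambda lvl: lvl > 40, start_level=100, step=5, n_trials=60)
print(track[-1])  # the track descends from 100 and then hovers near 40-45
```

Starting the staircase at an easy level and letting it descend is precisely the "start with little distortion" regime the abstract favors, in contrast to the constant-high-compression protocol.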
Kouni, Sophia N; Giannopoulos, Sotirios; Ziavra, Nausika; Koutsojannis, Constantinos
' was found to be abnormal, as low as the auditory brainstem. Because ABRs mature in early life, this can help to identify subjects with acoustically based learning problems and apply early intervention, rehabilitation, and treatment. Further studies and more experience with more patients and pathological conditions such as plasticity of the auditory system, cochlear implants, hearing aids, presbycusis, or acoustic neuropathy are necessary until this type of testing is ready for clinical application. © 2013 Elsevier Inc. All rights reserved.
Thirty-six visual and auditory tests were given to 113 fifth and sixth grade students. Second-order analysis yielded two well-defined factors representing Fluid and Crystallized Intelligence and two perceptual factors corresponding to General Visualization and General Auditory Function. Perceptual factors were not clearly separated from broad…
Munjir, Norulsuhada; Othman, Zahiruddin; Zakaria, Rahimah; Shafin, Nazlahshaniza; Hussain, Noor Aini; Desa, Anisah Mat; Ahmad, Asma Hayati
This study aims to develop two alternate forms of the Malay version of the Auditory Verbal Learning Test (MAVLT) and to determine their equivalency and practice effects. Ninety healthy volunteers were subjected to the following neuropsychological tests at baseline and again at a one-month interval, according to their assigned group: group 1 (MAVLT - MAVLT), group 2 (MAVLT - Alternate Form 1 - Alternate Form 1), and group 3 (MAVLT - Alternate Form 2 - Alternate Form 2). There were no significant differences in the mean scores of all trials at baseline among the three groups, nor in most of the mean trial scores between MAVLT and Alternate Form 1, or between MAVLT and Alternate Form 2. There was significant improvement in the mean score of each trial when the same form was used repeatedly at an interval of one month. However, there was no significant improvement in the mean score of each trial when Alternate Form 2 was used during repeated neuropsychological testing. The MAVLT is a reliable instrument for repeated neuropsychological testing as long as alternate forms are used. Alternate Form 2 showed better equivalency to the MAVLT and smaller practice effects.
Anton, B S; Player, N I; Bennett, T L
Albino rats were pre-exposed to stimuli in an otherwise visually sparse environment, with visibility and opportunity to manipulate the forms controlled during rearing. Analysis indicated that pre-exposing animals to stimuli which provided either tactual-kinesthetic feedback or highly visible forms significantly facilitated subsequent discrimination learning. The findings question the adequacy of either attention-getting or tactual-kinesthetic-feedback accounts of the differences in transfer effects found in studies using two- and three-dimensional forms. It is suggested that the visibility of the forms, and the opportunity to inspect them during pre-exposure, is the important variable in studies of this type.
Birds can rely on a variety of cues for orientation during migration and homing. Celestial rotation provides the key information for the development of a functioning star and/or sun compass. This celestial compass seems to be the primary reference for calibrating the other orientation systems including the magnetic compass. Thus, detection of the celestial rotational axis is crucial for bird orientation. Here, we use operant conditioning to demonstrate that homing pigeons can principally learn to detect a rotational centre in a rotating dot pattern and we examine their behavioural response strategies in a series of experiments. Initially, most pigeons applied a strategy based on local stimulus information such as movement characteristics of single dots. One pigeon seemed to immediately ignore eccentric stationary dots. After special training, all pigeons could shift their attention to more global cues, which implies that pigeons can learn the concept of a rotational axis. In our experiments, the ability to precisely locate the rotational centre was strongly dependent on the rotational velocity of the dot pattern and it crashed at velocities that were still much faster than natural celestial rotation. We therefore suggest that the axis of the very slow, natural, celestial rotation could be perceived by birds through the movement itself, but that a time-delayed pattern comparison should also be considered as a very likely alternative strategy.
Biological and technical systems operate in a rich multimodal environment. Due to the diversity of incoming sensory streams a system perceives and the variety of motor capabilities a system exhibits, there is no single representation and no singular unambiguous interpretation of such a complex scene. In this work we propose a novel sensory processing architecture, inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the different sensory streams enter the system through multiple parallel channels. The system autonomously associates and combines them into a coherent representation, given incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for self-creation and learning of the functional relations between the computational maps, encoding sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to a 3D motion estimation on a quadrotor.
Perceptual learning has been shown to produce an improvement of visual acuity (VA) and contrast sensitivity (CS) both in subjects with amblyopia and in those with refractive defects such as myopia or presbyopia. Transcranial random noise stimulation (tRNS) has proven to be efficacious in accelerating neural plasticity and boosting perceptual learning in healthy participants. In this study we investigated whether a short behavioural training regime using a contrast detection task combined with online tRNS was as effective in improving visual functions in participants with mild myopia as a two-month behavioural training regime without tRNS (Camilleri et al., 2014). After two weeks of perceptual training in combination with tRNS, participants showed an improvement of 0.15 LogMAR in uncorrected VA (UCVA) that was comparable with that obtained after eight weeks of training with no tRNS, and an improvement in uncorrected CS (UCCS) at various spatial frequencies (whereas no UCCS improvement was seen after eight weeks of training with no tRNS). On the other hand, a control group that trained for two weeks without stimulation did not show any significant UCVA or UCCS improvement. These results suggest that the combination of behavioural and neuromodulatory techniques can be fast and efficacious in improving sight in individuals with mild myopia.
The Use of Music and Other Forms of Organized Sound as a Therapeutic Intervention for Students with Auditory Processing Disorder: Providing the Best Auditory Experience for Children with Learning Differences
Faronii-Butler, Kishasha O.
This auto-ethnographical inquiry used vignettes and interviews to examine the therapeutic use of music and other forms of organized sound in the learning environment of individuals with Central Auditory Processing Disorders. It is an investigation of the traditions of healing with sound vibrations, from its earliest cultural roots in shamanism and…
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…
Bell, Brittany A; Phan, Mimi L; Vicario, David S
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.
Despite the wealth of research on differences between experts and novices with respect to their perceptual-cognitive background (e.g., mental representations, gaze behavior), little is known about the change of these perceptual-cognitive components over the course of motor learning. In the present study, changes in one's mental representation, quiet eye behavior, and outcome performance were examined over the course of skill acquisition as it related to physical and mental practice. Novices (N = 45) were assigned to one of three conditions: physical practice, physical practice plus mental practice, and no practice. Participants in the practice groups trained on a golf putting task over the course of three days, either by repeatedly executing the putt, or by both executing and imaging the putt. Findings revealed improvements in putting performance across both practice conditions. Regarding the perceptual-cognitive changes, participants practicing mentally and physically revealed longer quiet eye durations as well as more elaborate representation structures in comparison to the control group, while this was not the case for participants who underwent physical practice only. Thus, in the present study, combined mental and physical practice led to both formation of mental representations in long-term memory and longer quiet eye durations. Interestingly, the length of the quiet eye directly related to the degree of elaborateness of the underlying mental representation, supporting the notion that the quiet eye reflects cognitive processing. This study is the first to show that the quiet eye becomes longer in novices practicing a motor action. Moreover, the findings of the present study suggest that perceptual and cognitive adaptations co-occur over the course of motor learning.
Ripamonti, Caterina; Westland, Stephen
We suggest that color constancy and perceptual transparency might be explained by the same underlying mechanism. For color constancy, Foster and Nascimento (1994) found that cone-excitation ratios between surfaces seen under one illuminant and cone-excitation ratios between the same surfaces seen under a different illuminant were almost constant. In the case of perceptual transparency we also found that cone-excitation ratios between surfaces illuminated directly and cone-excitation ratios between the same surfaces seen through a transparent filter were almost invariant (Westland and Ripamonti, 2000). We compare the ability of the cone-excitation-ratio invariance model to predict perceptual transparency with an alternative model based on convergence in color space (D'Zmura et al., 1997). Psychophysical data are reported from experiments whereby subjects were asked to select which of two stimuli represented a Mondrian image partially covered by a homogeneous transparent filter. One of the stimuli was generated from the convergence model and the other was a modified version of the first stimulus such that the cone-excitation ratios were perfectly invariant. Subjects consistently selected the invariant stimulus, confirming our hypothesis that perception of transparency is predicted by the degree of deviation from an invariant ratio for the cone excitations.
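The invariance at the heart of this hypothesis is easy to illustrate numerically. The sketch below is purely illustrative: the reflectances, illuminants and Gaussian cone sensitivities are made up, not those used in the experiments. Cone excitations are computed as inner products of reflectance, illuminant and cone sensitivity over wavelength, and the excitation ratio between two surfaces changes little when the illuminant changes:

```python
import math

wl = [400 + 10 * k for k in range(31)]            # wavelength samples (nm)

# Hypothetical smooth reflectances and illuminants (illustrative only)
refl_a = [0.5 + 0.3 * math.sin(w / 40.0) for w in wl]
refl_b = [0.4 + 0.3 * math.cos(w / 55.0) for w in wl]
illum_1 = [1.0 for _ in wl]                       # flat illuminant
illum_2 = [0.5 + w / 700.0 for w in wl]           # tilted illuminant

def cone(peak, width=40.0):
    """Crude Gaussian cone sensitivity curve (not a real fundamental)."""
    return [math.exp(-0.5 * ((w - peak) / width) ** 2) for w in wl]

cones = [cone(565), cone(535), cone(445)]         # L, M, S

def excitation(refl, illum):
    """Cone excitations: inner product over wavelength."""
    return [sum(c * r * i for c, r, i in zip(sens, refl, illum))
            for sens in cones]

def ratios(illum):
    ea, eb = excitation(refl_a, illum), excitation(refl_b, illum)
    return [a / b for a, b in zip(ea, eb)]

ratio_1, ratio_2 = ratios(illum_1), ratios(illum_2)
deviation = max(abs(r1 - r2) / r1 for r1, r2 in zip(ratio_1, ratio_2))
```

Because each cone integrates over a band narrow relative to the illuminant's variation, the illuminant approximately cancels in the ratio, which is the intuition behind the near-invariance reported by Foster and Nascimento.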
Chyzhyk, Darya; Graña, Manuel; Öngür, Dost; Shinn, Ann K
Auditory hallucinations (AH) are a symptom that is most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology associated with AH more broadly across multiple neuropsychiatric disease conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control (HC) subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity (FC) measures computed by lattice auto-associative memories (LAAM), and (b) local activity (LA) measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and HC participants was worst, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. FC measures seeded in right Heschl's gyrus (RHG) consistently showed stronger discriminative power than those seeded in left Heschl's gyrus (LHG), a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical brain localizations derived from the features with strong classification performance are consistent with proposed AH models, and include left inferior frontal gyrus (IFG), parahippocampal gyri, the cingulate cortex, as well as several temporal and prefrontal cortical brain regions. Overall, the observed findings suggest that
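One of the local activity measures named above, fALFF, is the fraction of a voxel's spectral amplitude that falls in the low-frequency band, commonly 0.01–0.08 Hz. The following is a minimal sketch with a naive DFT and synthetic time courses rather than real rsfMRI data; the band limits and the 2 s TR are typical values, not necessarily those used in this study:

```python
import math
import random

random.seed(0)
TR, N = 2.0, 200   # repetition time (s) and number of time points

def falff(signal, tr=TR, lo=0.01, hi=0.08):
    """Fractional ALFF: amplitude in the low-frequency band divided by
    amplitude over the whole spectrum (naive DFT; use an FFT for real data)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    band = total = 0.0
    for k in range(1, n // 2):
        freq = k / (n * tr)
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        amp = math.hypot(re, im)
        total += amp
        if lo <= freq <= hi:
            band += amp
    return band / total

# Synthetic "voxels": a slow 0.05 Hz oscillation in noise vs. pure noise.
# The oscillating voxel concentrates its amplitude in the low band.
slow = [math.sin(2 * math.pi * 0.05 * t * TR) + random.gauss(0, 0.5)
        for t in range(N)]
noise = [random.gauss(0, 1.0) for _ in range(N)]
```

In a classification pipeline like the one described, one such value per voxel (or region) becomes a feature vector that is then fed to a machine learning classifier.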
Rebecca M. Calisi
Full Text Available Changes in hormones can affect many types of learning in vertebrates. Adults experience fluctuations in a multitude of hormones over a temporal scale, from local, rapid action to more long-term, seasonal changes. Endocrine changes during development can affect behavioral outcomes in adulthood, but how learning in adults is affected by hormone fluctuations experienced during adulthood is less well understood. Previous reports have implicated the sex steroid hormone estradiol (E2) in both male and female vertebrate cognitive functioning. Here, we examined the effects of E2 on auditory recognition and learning in male European starlings (Sturnus vulgaris). European starlings are photoperiodic, seasonally breeding songbirds that undergo different periods of reproductive activity according to annual changes in day length. We simulated these reproductive periods, specifically (1) photosensitivity, (2) photostimulation, and (3) photorefractoriness, in captive birds by altering day length. During each period, we manipulated circulating E2 and examined multiple measures of learning. To manipulate circulating E2, we used subcutaneous implants containing 17-β E2 and/or fadrozole (FAD), a highly specific aromatase inhibitor that suppresses E2 production in the body and the brain, and measured the latency for birds to learn and respond to short, male conspecific song segments (motifs). We report that photostimulated birds given E2 had higher response rates and responded with better accuracy than those given saline controls or FAD. Conversely, photosensitive animals treated with E2 responded with less accuracy than those given FAD. These results demonstrate how circulating E2 and photoperiod can interact to shape auditory recognition and learning in adults, driving it in opposite directions in different states.
Douglas M Shiller
Full Text Available Auditory input is essential for normal speech development and plays a key role in speech production throughout the life span. In traditional models, auditory input plays two critical roles: (1) establishing the acoustic correlates of speech sounds that serve, in part, as the targets of speech production, and (2) serving as a source of feedback about a talker's own speech outcomes. This talk will focus on both of these roles, describing a series of studies that examine the capacity of children and adults to adapt to real-time manipulations of auditory feedback during speech production. In one study, we examined sensory and motor adaptation to a manipulation of auditory feedback during production of the fricative “s”. In contrast to prior accounts, adaptive changes were observed not only in speech motor output but also in subjects' perception of the sound. In a second study, speech adaptation was examined following a period of auditory–perceptual training targeting the perception of vowels. The perceptual training was found to systematically improve subjects' motor adaptation response to altered auditory feedback during speech production. The results of both studies support the idea that perceptual and motor processes are tightly coupled in speech production learning, and that the degree and nature of this coupling may change with development.
Daikhin, Luba; Raviv, Ofri; Ahissar, Merav
Purpose: The reading deficit for people with dyslexia is typically associated with linguistic, memory, and perceptual-discrimination difficulties, whose relation to reading impairment is disputed. We proposed that automatic detection and usage of serial sound regularities for individuals with dyslexia is impaired (anchoring deficit hypothesis),…
Christiansen, Simon Krogholt
The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for using acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…
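The temporal-coherence idea can be caricatured with a simple proxy: treat the correlation between the envelopes of two peripheral frequency channels as their coherence, group coherent channels into one stream, and segregate incoherent ones. This is a toy illustration, not the thesis's actual model, and the envelopes and thresholds below are invented:

```python
import math

def correlation(x, y):
    """Pearson correlation between two channel envelopes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Envelopes of two frequency channels, sampled over time.  Channels that
# pulse together are temporally coherent (heard as one stream); channels
# that alternate are incoherent (heard as two streams).
env_a    = [max(0.0, math.sin(2 * math.pi * k / 40.0)) for k in range(200)]
env_sync = [0.8 * v for v in env_a]                       # same rhythm
env_alt  = [max(0.0, -math.sin(2 * math.pi * k / 40.0)) for k in range(200)]

one_stream  = correlation(env_a, env_sync) > 0.5   # grouped
two_streams = correlation(env_a, env_alt) < 0.0    # segregated
```

Under this proxy, an alternating ABA tone sequence that excites non-overlapping channels out of phase yields negative envelope correlation and is predicted to split into two streams.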
Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.
The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671
The aim of this study was to determine if simulation aided by media technology contributes towards an increase in knowledge, empathy, and a change in attitudes regarding auditory hallucinations for nursing students. A convenience sample of 60 second-year undergraduate nursing students from an Australian university was invited to be part of the study. A pre-post-test design was used, with data analysed using a paired-samples t-test to identify pre- and post-changes in nursing students' scores on knowledge of auditory hallucinations. Nine of the 11 questions reported statistically significant results. The remaining two questions highlighted knowledge embedded within the curriculum, with therapeutic communication being the core work of mental health nursing. The implications for practice are that simulation aided by media technology increases students' knowledge regarding auditory hallucinations. © 2013 Australian College of Mental Health Nurses Inc.
Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf Fa; Cillessen, Antonius Hn; van Rens, Ger
% interoptotype spacing were most sensitive to capture crowding effects. The groups that showed the largest crowding effects were individuals with CN, VI adults with central scotomas and children with CVI. Perceptual learning seems to be a promising technique to reduce excessive foveal crowding effects.
Veispak, Anneli; Boets, Bart; Männamaa, Mairi; Ghesquière, Pol
Similar to many sighted children who struggle with learning to read, a proportion of blind children have specific difficulties related to reading braille which cannot be easily explained. A lot of research has been conducted to investigate the perceptual and cognitive processes behind (impairments in) print reading. Very few studies, however, have aimed for a deeper insight into the relevant perceptual and cognitive processes involved in braille reading. In the present study we investigate the relations between reading achievement and auditory, speech, phonological and tactile processing in a population of Estonian braille reading children and youngsters and matched sighted print readers. Findings revealed that the sequential nature of braille imposes constant decoding and effective recruitment of phonological skills throughout the reading process. Sighted print readers, on the other hand, seem to switch between the use of phonological and lexical processing modes depending on the familiarity, length and structure of the word. Copyright © 2012 Elsevier Ltd. All rights reserved.
Hirata, Yukari; Kelly, Spencer D.
Purpose: Previous research has found that auditory training helps native English speakers to perceive phonemic vowel length contrasts in Japanese, but their performance did not reach native levels after training. Given that multimodal information, such as lip movement and hand gesture, influences many aspects of native language processing, the…
Plack, Christopher J; Barker, Daphne; Prendergast, Garreth
Dramatic results from recent animal experiments show that noise exposure can cause a selective loss of high-threshold auditory nerve fibers without affecting absolute sensitivity permanently. This cochlear neuropathy has been described as hidden hearing loss, as it is not thought to be detectable using standard measures of audiometric threshold. It is possible that hidden hearing loss is a common condition in humans and may underlie some of the perceptual deficits experienced by people with clinically normal hearing. There is some evidence that a history of noise exposure is associated with difficulties in speech discrimination and temporal processing, even in the absence of any audiometric loss. There is also evidence that the tinnitus experienced by listeners with clinically normal hearing is associated with cochlear neuropathy, as measured using Wave I of the auditory brainstem response. To date, however, there has been no direct link made between noise exposure, cochlear neuropathy, and perceptual difficulties. Animal experiments also reveal that the aging process itself, in the absence of significant noise exposure, is associated with loss of auditory nerve fibers. Evidence from human temporal bone studies and auditory brainstem response measures suggests that this form of hidden loss is common in humans and may have perceptual consequences, in particular, regarding the coding of the temporal aspects of sounds. Hidden hearing loss is potentially a major health issue, and investigations are ongoing to identify the causes and consequences of this troubling condition. © The Author(s) 2014.
Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A
It is well established that auditory cueing improves gait in patients with idiopathic Parkinson's disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients' performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.
Chen, Nihong; Bi, Taiyong; Zhou, Tiangang; Li, Sheng; Liu, Zili; Fang, Fang
Much has been debated about whether the neural plasticity mediating perceptual learning takes place at the sensory or decision-making stage in the brain. To investigate this, we trained human subjects in a visual motion direction discrimination task. Behavioral performance and BOLD signals were measured before, immediately after, and two weeks after training. Parallel to subjects' long-lasting behavioral improvement, the neural selectivity in V3A and the effective connectivity from V3A to IPS (intraparietal sulcus, a motion decision-making area) exhibited a persistent increase for the trained direction. Moreover, the improvement was well explained by a linear combination of the selectivity and connectivity increases. These findings suggest that the long-term neural mechanisms of motion perceptual learning are implemented by sharpening cortical tuning to trained stimuli at the sensory processing stage, as well as by optimizing the connections between sensory and decision-making areas in the brain. Copyright © 2015 Elsevier Inc. All rights reserved.
Estudo do comportamento vocal no ciclo menstrual: avaliação perceptivo-auditiva, acústica e auto-perceptiva Vocal behavior during menstrual cycle: perceptual-auditory, acoustic and self-perception analysis
Luciane C. de Figueiredo
Full Text Available Dysphonia is common during the premenstrual period, yet few women are aware of this voice variation within the menstrual cycle (Quinteiro, 1989). AIM: To verify whether women's vocal pattern during the ovulation period differs from that on the first day of the menstrual cycle, using perceptual-auditory analysis, spectrography and acoustic parameters, and, when such a difference is present, whether the women themselves perceive it. STUDY DESIGN: Case-control. MATERIAL AND METHOD: The sample comprised 30 speech-language pathology students aged 18 to 25 years, non-smokers, with regular menstrual cycles and not using oral contraceptives. Voices were recorded on the first day of menstruation and on the thirteenth day after menstruation (ovulation) for later comparison. RESULTS: During the menstrual period the voices were mildly to moderately hoarse-breathy and unstable, without voice breaks, with adequate pitch and loudness and balanced resonance. Harmonics were less well defined, with more noise between them and reduced extension of the upper harmonics. We found a higher f0, increased jitter and shimmer, and a decreased harmonics-to-noise ratio (PHR). CONCLUSION: In the menstrual period there are changes in vocal quality, in the behavior of the harmonics and in the vocal parameters (f0, jitter, shimmer and PHR). Moreover, most of the speech-language pathology students did not perceive the voice variation during the menstrual cycle.
Gramuglia, Andréa Cristina Joia; Tavares, Elaine L M; Rodrigues, Sérgio Augusto; Martins, Regina H G
Vocal nodules constitute the major cause of dysphonia during childhood. Auditory-perceptual and acoustic vocal analyses have been used to differentiate vocal nodules from normal voice in children. To study the value of auditory-perceptual and acoustic vocal analyses in assessments of children with nodules. Diagnostic test study. A comparative study was carried out including 100 children with videolaryngoscopic diagnosis of vocal nodules (nodule group, NG) and 100 children without vocal symptoms and with normal videolaryngoscopic exams (control group, CG). The age range of both groups was between 4 and 11 years. All children underwent auditory-perceptual vocal analysis (GRBASI scale); maximum phonation time and the s/z ratio were calculated, and acoustic vocal analysis (MDVP software) was carried out. There was no difference in the values of maximum phonation time and s/z ratio between groups. Auditory-perceptual analysis indicated greater impairment of voice parameters for NG compared to CG: G (79 versus 24), R (53 versus 3), B (67 versus 23) and S (35 versus 1). The values of the acoustic parameters jitter, PPQ, shimmer, APQ, NHR and SPI were higher for NG than for CG. The parameter f0 did not differ between groups. Impairment of auditory-perceptual (G, R, B and S) and acoustic vocal parameters (jitter, PPQ, shimmer, APQ, NHR and SPI) was greater for children with nodules than for those of the control group, which makes these important methods for assessing childhood dysphonia. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
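Two of the acoustic measures used here, local jitter and local shimmer, are cycle-to-cycle perturbation ratios: the mean absolute difference between consecutive glottal periods (or peak amplitudes), divided by the overall mean. A minimal sketch with made-up period values; the exact definitions implemented in MDVP differ in detail:

```python
def jitter_local(periods):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_local(amps):
    """Local shimmer (%): the same perturbation measure applied to
    cycle peak amplitudes instead of periods."""
    diffs = [abs(a - b) for a, b in zip(amps, amps[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amps) / len(amps))

# Hypothetical period tracks (s): a steady voice vs. a perturbed one,
# as might be extracted from a sustained vowel
steady_T = [0.0040, 0.00401, 0.00399, 0.0040, 0.00402]
rough_T  = [0.0040, 0.00430, 0.00385, 0.00415, 0.00395]
```

Higher jitter and shimmer for the nodule group, as reported above, correspond to larger cycle-to-cycle irregularity in period and amplitude.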
Bonin, Tanor L; Trainor, Laurel J; Belyk, Michel; Andrews, Paul W
Music can evoke powerful emotions in listeners. Here we provide the first empirical evidence that the principles of auditory scene analysis and evolutionary theories of emotion are critical to a comprehensive theory of musical emotion. We interpret these data in light of a theoretical framework termed "the source dilemma hypothesis," which predicts that uncertainty in the number, identity or location of sound objects elicits unpleasant emotions by presenting the auditory system with an incoherent percept, thereby motivating listeners to resolve the auditory ambiguity. We describe two experiments in which source location and timbre were manipulated to change uncertainty in the auditory scene. In both experiments, listeners rated tonal and atonal melodies with congruent auditory scene cues as more pleasant than melodies with incongruent auditory scene cues. These data suggest that music's emotive capacity relies in part on the perceptual uncertainty it produces regarding the auditory scene. Copyright © 2016. Published by Elsevier B.V.
Full Text Available Abstract Background Little is known about the contribution of transcranial direct current stimulation (tDCS) to the exploration of memory functions. The aim of the present study was to examine the behavioural effects of right- or left-hemisphere frontal direct current delivery, while committing auditorily presented nouns to memory, on short-term learning and subsequent long-term retrieval. Methods Twenty subjects, divided into two groups, performed an episodic verbal memory task during anodal, cathodal and sham current application over the right or left dorsolateral prefrontal cortex (DLPFC). Results Our results imply that only cathodal tDCS elicits behavioural effects on verbal memory performance. In particular, left-sided application of cathodal tDCS impaired short-term verbal learning when compared to the baseline. We did not observe tDCS effects on long-term retrieval. Conclusion Our results imply that the left DLPFC is a crucial area involved in short-term verbal learning mechanisms. However, we found further support that direct current delivery with an intensity of 1.5 mA to the DLPFC during short-term learning does not disrupt longer-lasting consolidation processes that are mainly known to be related to mesial temporal lobe areas. In the present study, we have shown that the tDCS technique has the potential to modulate short-term verbal learning mechanisms.
The Identification of Children with Perceptual-Motor Dysfunction; A Study of Perceptual-Motor Dysfunction among Emotionally Disturbed, Educable Mentally Retarded and Normal Children in the Pittsburgh Public Schools.
Rosner, Jerome; And Others
The Rosner Perceptual Survey (RPS) and the Rosner-Richman Perceptual Survey (RRPS) were developed for screening perceptual motor dysfunction. The RPS consisted of 17 subtests of visual motor and auditory motor functions, general motor skills, self awareness, and integrative function; the RRPS, intended for teacher or paraprofessional use, included…
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Roy, Saborni; Nag, Tapas C; Upadhyay, Ashish Datt; Mathur, Rashmi; Jain, Suman
Rhythmic sound or music is known to improve cognition in animals and humans. We wanted to evaluate the effects of prenatal repetitive music stimulation on the remodelling of the auditory cortex and visual Wulst in chicks. Fertilized eggs (0 day) of white leghorn chicken (Gallus domesticus) during incubation were exposed either to music or no sound from embryonic day 10 until hatching. Auditory and visual perceptual learning and synaptic plasticity, as evident by synaptophysin and PSD-95 expression, were done at posthatch days (PH) 1, 2 and 3. The number of responders was significantly higher in the music stimulated group as compared to controls at PH1 in both auditory and visual preference tests. The stimulated chicks took significantly lesser time to enter and spent more time in the maternal area in both preference tests. A significantly higher expression of synaptophysin and PSD-95 was observed in the stimulated group in comparison to control at PH1-3 both in the auditory cortex and visual Wulst. A significant inter-hemispheric and gender-based difference in expression was also found in all groups. These results suggest facilitation of postnatal perceptual behaviour and synaptic plasticity in both auditory and visual systems following prenatal stimulation with complex rhythmic music.
A total of 12 kindergarten children participated in a study to determine whether children with auditory learning disability would achieve significantly better scores in reading when taught by the sight method as compared with the phonetic method of instruction and whether such children would exhibit significantly better self-concepts when placed…
Cantiani, Chiara; Riva, Valentina; Piazza, Caterina; Bettoni, Roberta; Molteni, Massimo; Choudhury, Naseem; Marino, Cecilia; Benasich, April A
Infants' ability to discriminate between auditory stimuli presented in rapid succession and differing in fundamental frequency (Rapid Auditory Processing [RAP] abilities) has been shown to be anomalous in infants at familial risk for Language Learning Impairment (LLI) and to predict later language outcomes. This study represents the first attempt to investigate RAP in Italian infants at risk for LLI (FH+), examining two critical acoustic features: frequency and duration, both embedded in a rapidly-presented acoustic environment. RAP skills of 24 FH+ and 32 control (FH-) Italian 6-month-old infants were characterized via EEG/ERP using a multi-feature oddball paradigm. Outcome measures of expressive vocabulary were collected at 20 months. Group differences favoring FH- infants were identified: in FH+ infants, the latency of the N2* peak was delayed and the mean amplitude of the positive mismatch response was reduced, primarily for frequency discrimination and within the right hemisphere. Moreover, both EEG measures were correlated with language scores at 20 months. Results indicate that RAP abilities are atypical in Italian infants with a first-degree relative affected by LLI and that this impacts later linguistic skills. These findings provide a compelling cross-linguistic comparison with previous research on American infants, supporting the biological unity hypothesis of LLI. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Full Text Available Absolute pitch (AP) is the rare ability of musicians to identify the pitch of a tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in the human brain remains largely unknown. AP can be viewed as comprising two subprocesses: perceptual (processing auditory input to extract a pitch chroma) and associative (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP uses unique auditory processing (i.e., APC) that exists only in musicians with AP, or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them.
Tian, Xing; Poeppel, David
.... Imagined speech production ("articulation imagery"), which induces the kinesthetic feeling of articulator movement and its auditory consequences, provides a new angle because of the concurrent involvement of motor and perceptual systems...
Houston, Derek M.; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T.
Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this…
Full Text Available Motor learning is a process whereby the acquisition of new skills occurs with practice, and it can be influenced by the provision of feedback. An important question is what frequency of feedback facilitates motor learning. The guidance hypothesis assumes that providing less augmented feedback is better than more because a learner can use his/her own inherent feedback. However, it is unclear whether this hypothesis holds true for all types of augmented feedback, including, for example, sonified information about performance. Thus, we aimed to test what frequency of augmented sonified feedback facilitates the motor learning of a novel joint coordination pattern. Twenty healthy volunteers first reached to a target with their arm (baseline phase). We manipulated this baseline kinematic data for each individual to create a novel target joint coordination pattern. Participants then practiced to learn the novel target joint coordination pattern, receiving feedback either on every trial, i.e. 100% feedback (n = 10), or on every other trial, i.e. 50% feedback (n = 10) (acquisition phase). We created a sonification system to provide the feedback. This feedback was a pure tone that varied in intensity in proportion to the error of the performed joint coordination relative to the target pattern. Thus, the auditory feedback contained information about performance in real time (i.e. concurrent knowledge-of-performance feedback). Participants performed the novel joint coordination pattern with no feedback immediately after the acquisition phase (immediate retention phase) and on the next day (delayed retention phase). The root-mean-squared error (RMSE) and variable error (VE) of joint coordination were significantly reduced during the acquisition phase in both the 100% and 50% feedback groups. There was no significant difference in VE between the groups at the immediate and delayed retention phases. However, at both these retention phases, the 100% feedback group showed
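The error-proportional sonification described in this abstract can be sketched in a few lines; the function name, tone parameters, and error-to-amplitude scaling below are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def sonify_error(performed, target, freq=440.0, sr=8000, dur=0.1,
                 max_error=30.0):
    """Return one feedback tone whose intensity scales with joint error.

    `performed` and `target` are joint angles (degrees); the amplitude
    of a pure tone grows in proportion to the RMS error, mirroring the
    error-proportional feedback described above. All names and the
    scaling constant `max_error` are illustrative assumptions.
    """
    err = np.asarray(performed, float) - np.asarray(target, float)
    rmse = np.sqrt(np.mean(err ** 2))
    amplitude = min(rmse / max_error, 1.0)   # clip at full scale
    t = np.arange(int(sr * dur)) / sr
    return amplitude * np.sin(2 * np.pi * freq * t)

# Perfect performance is silent; a 15-degree RMS error sounds at half scale.
quiet = sonify_error([10, 20, 30], [10, 20, 30])
loud = sonify_error([25, 35, 45], [10, 20, 30])
```

A 50% feedback schedule would simply call this on every other trial.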
Gobes, Sharon M H; Jennings, Rebecca B; Maeda, Rie K
Male zebra finches, Taeniopygia guttata, acquire their song during a sensitive period for auditory-vocal learning by imitating conspecific birds. Laboratory studies have shown that the sensitive period for song acquisition covers a developmental phase lasting from 25 to 65 days post-hatch (dph); formation of auditory memory primarily occurs between 25 and 35 dph. The duration of the sensitive period is, however, dependent upon model availability. If a tutor is not available early in development, birds will learn from an adult male introduced to their cage even after they reach 65 dph. Birds who are exposed to a second tutor as late as 63 dph can successfully adjust their song 'template' to learn a new song model. However, if second-tutor song exposure occurs after 65 dph, learning of a new tutor's song will not occur for most individuals. Here, we review the literature as well as novel studies from our own laboratory concerning sensitive periods for auditory memory formation in zebra finches; these behavioral studies indicate that there are developmental constraints on imitative learning in zebra finches. Copyright © 2017 Elsevier B.V. All rights reserved.
Lucker, Jay R.
Many children with problems learning in school have educational deficits due to underlying auditory processing disorders (APD). These children can be identified as having auditory learning disabilities. Furthermore, auditory learning disability is recognized as a specific learning disability (SLD) under IDEA. Educators and…
Leila Cardoso Teruya
Full Text Available The present study aimed to assess the performance of healthy Brazilian adults on the Rey Auditory Verbal Learning Test (RAVLT), a test devised for assessing memory, to investigate the influence of the variables age, sex, and education on the performance obtained, and finally to suggest scores which may be adopted for assessing memory with this instrument. The performance of 130 individuals, subdivided into groups according to age and education, was assessed. Overall performance decreased with age. Schooling presented a strong and positive relationship with scores on all subitems analyzed except learning, for which no influence was found. Mean scores of the subitems analyzed did not differ significantly between men and women, except for the delayed recall subitem. This manuscript describes RAVLT scores according to age and education. In summary, this is a pilot study that presents a profile of Brazilian adults on the A1, A7, recognition, and LOT subitems.
Dunn, John C.; Newell, Ben R.; Kalish, Michael L.
Evidence that learning rule-based (RB) and information-integration (II) category structures can be dissociated across different experimental variables has been used to support the view that such learning is supported by multiple learning systems. Across 4 experiments, we examined the effects of 2 variables, the delay between response and feedback…
Aaron V Berard
Full Text Available Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust overnight improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.
Julia A Mossbridge
Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
McBride, Thomas J; Rodriguez-Contreras, Adrian; Trinh, Angela; Bailey, Robert; Debello, William M
Computational models predict that experience-driven clustering of coactive synapses is a mechanism for information storage. This prediction has remained untested, because it is difficult to approach through time-lapse analysis. Here, we exploit a unique feature of the barn owl auditory localization pathway that permits retrospective analysis of prelearned and postlearned circuitry: owls reared wearing prismatic spectacles develop an adaptive microcircuit that coexists with the native one but can be analyzed independently based on topographic location. To visualize the clustering of axodendritic contacts (potential synapses) within these zones, coactive axons were labeled by focal injection of fluorescent tracer and their target dendrites labeled with an antibody directed against CaMKII (calcium/calmodulin-dependent protein kinase type II, alpha subunit). Using high-resolution confocal imaging, we measured the distance from each contact to its nearest neighbor on the same branch of dendrite. We found that the distribution of intercontact distances for the adaptive zone was shifted dramatically toward smaller values compared with distributions for either the maladaptive zone of the same animals or the adaptive zone of normal juveniles, which indicates that a dynamic clustering of contacts had occurred. Moreover, clustering in the normal zone was greater in normal juveniles than in prism-adapted owls, indicative of declustering. These data demonstrate that clustering is bidirectionally adjustable and tuned by behaviorally relevant experience. The microanatomical configurations in all zones of both experimental groups matched the functional circuit strengths that were assessed by in vivo electrophysiological mapping. Thus, the observed changes in clustering are appropriately positioned to contribute to the adaptive strengthening and weakening of auditory-driven responses.
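The nearest-neighbour intercontact measurement described above reduces to a simple computation over contact positions along a dendritic branch. The sketch below (with invented positions in micrometres) illustrates the metric, not the study's analysis pipeline:

```python
import numpy as np

def nearest_neighbor_distances(positions):
    """Distance from each contact to its nearest neighbour on the same
    branch, given positions (in micrometres) measured along the branch.
    A distribution shifted toward smaller values indicates clustering.
    """
    p = np.sort(np.asarray(positions, float))
    gaps = np.diff(p)
    # Each contact's nearest neighbour lies across one of its flanking gaps.
    nn = np.empty_like(p)
    nn[0], nn[-1] = gaps[0], gaps[-1]
    nn[1:-1] = np.minimum(gaps[:-1], gaps[1:])
    return nn

# Hypothetical branches: one with clustered contacts, one dispersed.
clustered = nearest_neighbor_distances([0.0, 0.5, 1.0, 9.0, 9.4])
dispersed = nearest_neighbor_distances([0.0, 3.0, 6.0, 9.0, 12.0])
```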
Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang
Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins during infancy. The current study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants' visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate…
The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. This project describes a speech enhancement scheme that meets the demand for quality noise reduction algorithms capable of operating at a very low signal-to-noise ratio. We discuss how the proposed speech enhancement system reduces noise with little speech degradation in diverse noise environments. In this model, to reduce the resi...
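As a concrete point of comparison, magnitude spectral subtraction is a common baseline for noise reduction at low signal-to-noise ratios. The sketch below is a generic textbook version under a stated assumption (the leading frames contain noise only); it is not the project's auditory-model system:

```python
import numpy as np

def spectral_subtraction(noisy, frame=256, hop=128, noise_frames=5,
                         floor=0.01):
    """Basic magnitude spectral subtraction (a generic baseline sketch).

    The noise magnitude spectrum is estimated from the first few frames,
    assumed noise-only; subtracted magnitudes are floored to limit
    musical-noise artifacts, and frames are resynthesized by overlap-add.
    """
    win = np.hanning(frame)
    out = np.zeros(len(noisy))
    # Average noise magnitude spectrum from the leading frames.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(win * noisy[i * hop:i * hop + frame]))
         for i in range(noise_frames)], axis=0)
    for start in range(0, len(noisy) - frame, hop):
        seg = win * noisy[start:start + frame]
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, floor * noise_mag)
        cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
        out[start:start + frame] += cleaned   # overlap-add resynthesis
    return out

# Noise-only input for illustration: output power should drop sharply.
rng = np.random.default_rng(0)
noisy = 0.3 * rng.standard_normal(4000)
denoised = spectral_subtraction(noisy)
```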
We describe a study on the motivation of trainees in e-learning-based professional training and on the effect of their motivation upon the perceptions they build about the quality of the courses. We propose the concepts of "perceived motivational gap" and "real motivational gap" as indicators of e-learning quality, which…
Chen, Sufen; Sussman, Elyse S.
The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958
Laurent, Raphaël; Barnaud, Marie-Lou; Schwartz, Jean-Luc; Bessière, Pierre; Diard, Julien
There is a consensus concerning the view that both auditory and motor representations intervene in the perceptual processing of speech units. However, the question of the functional role of each of these systems remains seldom addressed and poorly understood. We capitalized on the formal framework of Bayesian Programming to develop COSMO (Communicating Objects using Sensory-Motor Operations), an integrative model that allows principled comparisons of purely motor or purely auditory implementations of a speech perception task and tests the gain of efficiency provided by their Bayesian fusion. Here, we show 3 main results: (a) In a set of precisely defined "perfect conditions," auditory and motor theories of speech perception are indistinguishable; (b) When a learning process that mimics speech development is introduced into COSMO, it departs from these perfect conditions. Then auditory recognition becomes more efficient than motor recognition in dealing with learned stimuli, while motor recognition is more efficient in adverse conditions. We interpret this result as a general "auditory-narrowband versus motor-wideband" property; and (c) Simulations of plosive-vowel syllable recognition reveal possible cues from motor recognition for the invariant specification of the place of plosive articulation in context that are lacking in the auditory pathway. This provides COSMO with a second property, where auditory cues would be more efficient for vowel decoding and motor cues for plosive articulation decoding. These simulations provide several predictions, which are in good agreement with experimental data and suggest that there is natural complementarity between auditory and motor processing within a perceptuo-motor theory of speech perception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
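The Bayesian fusion of auditory and motor channels that COSMO formalizes can be illustrated, in heavily simplified form, as a product of per-channel posteriors under a conditional-independence assumption. The function and the numbers below are illustrative only, not COSMO's actual equations:

```python
import numpy as np

def fuse(p_auditory, p_motor):
    """Naive Bayesian fusion of two channels over the same categories.

    Each input is a posterior P(category | channel). Assuming the
    channels are conditionally independent given the category and the
    prior is uniform, the fused posterior is proportional to their
    product, renormalized to sum to 1.
    """
    post = np.asarray(p_auditory, float) * np.asarray(p_motor, float)
    return post / post.sum()

# A sharp auditory channel combined with a broader motor channel:
auditory = [0.7, 0.2, 0.1]
motor = [0.4, 0.35, 0.25]
fused = fuse(auditory, motor)
```

Fusion sharpens the winning category relative to either channel alone, which is the efficiency gain the model quantifies.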
Gutschalk, Alexander; Dykstra, Andrew R
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
The purpose of this individualized perceptual skills curriculum is to ensure that each child acquires facility in processing concrete information before being exposed to abstraction demands of an academic program. The four major curriculum areas described are general motor, visual motor, auditory motor, and integrative. Unit areas are defined,…
Esther Gómez Lacabex
Full Text Available http://dx.doi.org/10.5007/2175-8026.2008n55p173 This study examines the ability of Spanish learners of English to identify the English phonological contrast full vowel-schwa after two different types of training: auditory and articulatory. Perceptual performance was measured in isolated words, in order to investigate the effect of training, and in sentences, to study the robustness of acquisition in generalizing to a context which was not used during training. Subjects were divided into three groups: two experimental groups, one undergoing perceptual training and one undergoing production-based training, and a control group. Both experimental groups' perception of the reduced vowel improved significantly after training. Results indicated that students were able to generalize their reduced-vowel identification abilities to the new context. The control group did not show any significant improvement. Our findings agree with studies that have demonstrated positive effects of phonetic training (Derwing, Munro & Wiebe, 1998; Rochet, 1995; Cenoz & García Lecumberri, 1995, 1999). Interestingly, the results also support the facilitating view of the relation between perception and production, since production training proved beneficial in the development of perceptual abilities (Catford & Pisoni, 1970; Mathews, 1997). Finally, our data showed that training resulted in robust learning, since students were able to generalize their improved perceptual abilities to a new context.
Ana Claudia Figueiredo Frizzo
Full Text Available The information presented in this paper reflects the author's experience in previous cross-sectional studies conducted in Brazil, in comparison with the current literature. Over the last ten years, AEP has been used in children with learning disabilities. This method is critical for analyzing the quality of processing in time and indicates the specific neural demands and circuits of the sensory and cognitive processes in this clinical population. Some studies of children with dyslexia and learning disabilities are presented here to illustrate the use of AEP in this population.
Altvater-Mackensen, Nicole; Grossmann, Tobias
Infants' language exposure largely involves face-to-face interactions providing acoustic and visual speech cues but also social cues that might foster language learning. Yet, both audiovisual speech information and social information have so far received little attention in research on infants' early language development. Using a preferential…
Page, Mike P. A.; Cumming, Nick; Norris, Dennis; Hitch, Graham J.; McNeil, Alan M.
In 5 experiments, a Hebb repetition effect, that is, improved immediate serial recall of an (unannounced) repeating list, was demonstrated in the immediate serial recall of visual materials, even when use of phonological short-term memory was blocked by concurrent articulation. The learning of a repeatedly presented letter list in one modality…
Bishop, Dorothy V M; Hsu, Hsinjen Julie
It has been proposed that children with Specific Language Impairment (SLI) have a selective deficit in procedural learning, with relatively spared declarative learning. In previous studies we and others confirmed deficits in procedural learning of sequences, using both verbal and nonverbal materials. Here we studied the same children using a task that implicates the declarative system, auditory-visual paired associate learning. There were parallel tasks for verbal materials (vocabulary learning) and nonverbal materials (meaningless patterns and sounds). Participants were 28 children with SLI aged 7-11 years, 28 younger typically-developing children matched for raw scores on a test of receptive grammar, and 20 typically-developing children matched on chronological age. Children were given four sessions of paired-associate training using a computer game adopting an errorless learning procedure, during which they had to select a picture from an array of four to match a heard stimulus. In each session they did both vocabulary training, where the items were eight names and pictures of rare animals, and nonverbal training, where stimuli were eight visual patterns paired with complex nonverbal sounds. A total of 96 trials of each type was presented over four days. In all groups, accuracy improved across the four sessions for both types of material. For the vocabulary task, the age-matched control group outperformed the other two groups in the starting level of performance, whereas for the nonverbal paired-associate task, there were no reliable differences between groups. In both tasks, rate of learning was comparable for all three groups. These results are consistent with the Procedural Deficit Hypothesis of SLI, in finding spared declarative learning on a nonverbal auditory-visual paired associate task. On the verbal version of the task, the SLI group had a deficit in learning relative to age-matched controls, which was evident on the first block in the first session
Lansford, Kaitlin L; Liss, Julie M; Norton, Rebecca E
In this investigation, the construct of perceptual similarity was explored in the dysarthrias. Specifically, we employed an auditory free-classification task to determine whether listeners could cluster speakers by perceptual similarity, whether the clusters mapped to acoustic metrics, and whether the clusters were constrained by dysarthria subtype diagnosis. Twenty-three listeners blinded to speakers' medical and dysarthria subtype diagnoses participated. The task was to group together (drag and drop) the icons corresponding to 33 speakers with dysarthria on the basis of how similar they sounded. Cluster analysis and multidimensional scaling (MDS) modeled the perceptual dimensions underlying similarity. Acoustic metrics and perceptual judgments were used in correlation analyses to facilitate interpretation of the derived dimensions. Six clusters of similar-sounding speakers and 3 perceptual dimensions underlying similarity were revealed. The clusters of similar-sounding speakers were not constrained by dysarthria subtype diagnosis. The 3 perceptual dimensions revealed by MDS were correlated with metrics for articulation rate, intelligibility, and vocal quality, respectively. This study shows (a) the feasibility of a free-classification approach for studying perceptual similarity in dysarthria, (b) the correspondence between acoustic and perceptual metrics and clusters of similar-sounding speakers, and (c) that similarity judgments transcend dysarthria subtype diagnosis.
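The analysis pipeline (cluster analysis plus MDS over listener dissimilarities) can be sketched on a toy dissimilarity matrix. The four "speakers" and their dissimilarities below are invented for illustration; the study's own metrics and software are not described here:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed a dissimilarity matrix D so that
    Euclidean distances in the embedding approximate the entries of D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    B = -0.5 * J @ (np.asarray(D, float) ** 2) @ J   # double-centered
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]            # top eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy matrix: speakers 0-1 sound alike, 2-3 sound alike, pairs differ.
D = np.array([[0, 1, 8, 8],
              [1, 0, 8, 8],
              [8, 8, 0, 1],
              [8, 8, 1, 0]], float)
clusters = fcluster(linkage(squareform(D), method='average'),
                    t=2, criterion='maxclust')
coords = classical_mds(D)   # 2-D "perceptual space" coordinates
```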
Kurt, Mehmet; Bilginer, Hayriye
In the globalizing world economy, the need for foreign language learning is increasing with the growth of international trade. The TR63 region (Kahramanmaras, Osmaniye, and Hatay) is increasing its exports every day. Alongside these advances, interest in foreign language education is growing across the region. In preparing a syllabus for this kind of…
Full Text Available The flourishing of studies on the neural correlates of decision-making calls for an appraisal of the relation between perceptual decisions and conscious perception. By exploiting the long integration time of noisy motion stimuli, and by forcing human observers to make difficult speeded decisions--sometimes a blind guess--about stimulus direction, we traced the temporal buildup of motion discrimination capability and perceptual awareness, as assessed trial by trial through direct rating. We found that both increased gradually with motion coherence and viewing time, but discrimination systematically led awareness, reaching a plateau much earlier. Sensitivity and criterion changes contributed jointly to the slow buildup of perceptual awareness. It made no difference whether motion discrimination was accomplished by saccades or verbal responses. These findings suggest that perceptual awareness emerges on top of a developing or even mature perceptual decision. We argue that the middle temporal (MT) cortical region does not confer on us the full phenomenal depth of motion perception, although it may represent a precursor stage in building our subjective sense of visual motion.
Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei
The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of periphery auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions. © 2013 Japan Geriatrics Society.
Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A.
It is well established that auditory cueing improves gait in patients with idiopathic Parkinson’s disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients’ performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing. PMID:25071522
Marta I Garrido
Surprising events in the environment can impair task performance. This might be due to complete distraction, leading to lapses during which performance is reduced to guessing. Alternatively, unpredictability might cause a graded withdrawal of perceptual resources from the task at hand and thereby reduce sensitivity. Here we attempt to distinguish between these two mechanisms. Listeners performed a novel auditory pitch-duration discrimination task, where stimulus loudness changed occasionally and incidentally to the task. Responses were slower and less accurate in the surprising condition, where loudness changed unpredictably, than in the predictable condition, where the loudness was held constant. By explicitly modelling both lapses and changes in sensitivity, we found that unpredictable changes diminished sensitivity but did not increase the rate of lapses. These findings suggest that background environmental uncertainty can disrupt goal-directed behaviour. This graded processing strategy might be adaptive in potentially threatening contexts, and reflect a flexible system for automatic allocation of perceptual resources.
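The key analytic move in the abstract above is to model lapses and sensitivity as separate parameters of the same psychometric function, so the two mechanisms leave distinguishable fingerprints in the data. A minimal sketch of that separation, assuming a cumulative-Gaussian two-alternative forced-choice (2AFC) model; the function and parameter names are illustrative, not taken from the study's actual model:

```python
import math

def p_correct(level, sensitivity, lapse):
    """Probability correct in a 2AFC discrimination task.

    `lapse` is the probability of a complete attentional lapse, on which
    the listener guesses (p = 0.5); otherwise accuracy follows a
    cumulative-Gaussian psychometric function whose slope is set by
    `sensitivity`. Parameter names are illustrative assumptions, since the
    abstract does not give the model specification.
    """
    core = 0.5 * (1.0 + math.erf(level * sensitivity / math.sqrt(2.0)))
    return lapse * 0.5 + (1.0 - lapse) * core

# The two mechanisms differ at a clearly audible stimulus level: more
# lapses lower the upper asymptote, while reduced sensitivity flattens
# the whole curve; both leave chance-level performance at 0.5.
baseline   = p_correct(2.0, sensitivity=1.0, lapse=0.0)
more_lapse = p_correct(2.0, sensitivity=1.0, lapse=0.2)
less_sens  = p_correct(2.0, sensitivity=0.3, lapse=0.0)
```

Fitting both parameters jointly, as the study did, is what licenses the conclusion that sensitivity dropped while the lapse rate did not.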
João Vinícius Salgado
OBJECTIVE: The Rey Auditory-Verbal Learning Test, which is used to evaluate learning and memory, is a widely recognized tool in the general literature on neuropsychology. This paper aims at presenting the performance of Brazilian adult subjects on the Rey Auditory-Verbal Learning Test, and was written after we published a previous study on the performance of Brazilian elderly subjects on this same test. METHOD: A version of the test, featuring a list of high-frequency one-syllable and two-syllable concrete Portuguese substantives, was developed. Two hundred and forty-three (243) subjects of both genders were allocated to 6 different age groups (20-24; 25-29; 30-34; 35-44; 45-54 and 55-60 years old). They were then tested using the Rey Auditory-Verbal Learning Test. RESULTS: Performance on the Rey Auditory-Verbal Learning Test showed a positive correlation with educational level and a negative correlation with age. Women performed significantly better than men. When applied across similar age ranges, our results were similar to those recorded for the English version of the Rey Auditory-Verbal Learning Test. CONCLUSION: Our results suggest that the adaptation of the Rey Auditory-Verbal Learning Test to Brazilian Portuguese is appropriate and that it is applicable to Brazilian subjects for memory capacity evaluation purposes and across similar age groups and educational levels.
PURPOSE: To clarify the relationship between learning difficulties and auditory processing disorder in second grade students. METHODS: Based on the application of reading tests, the students of a second grade class of an elementary school were classified into two groups according to their reading fluency: a group with better fluency (group A) and another with less fluency (group B). A between-group analysis of the auditory processing tests was carried out. RESULTS: All participants presented learning difficulties and auditory processing disorder in almost all primary subprofiles. The verbal sequential memory abilities of the less fluent group (group B) were significantly better (p=0.030). CONCLUSION: The diagnosis of primary auditory processing disorder is questioned, and the importance of stimulating verbal sequential memory for learning to read and write is emphasized. In view of these observations, further research should be conducted to study this variable and its relation to temporal auditory processing.
Leibold, Christian; van Hemmen, J. Leo
Time differences between the two ears are an important cue for animals to azimuthally locate a sound source. The first binaural brainstem nucleus, in mammals the medial superior olive, is generally believed to perform the necessary computations. Its cells are sensitive to variations of interaural time differences of about 10 μs. The classical explanation of such a neuronal time-difference tuning is based on the physical concept of delay lines. Recent data, however, are inconsistent with a temporal delay and rather favor a phase delay. By means of a biophysical model we show how spike-timing-dependent synaptic learning explains precise interplay of excitation and inhibition and, hence, accounts for a physical realization of a phase delay.
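The distinction drawn above between a temporal delay and a phase delay is easiest to see in what each predicts about the best interaural time difference (ITD) as a function of stimulus frequency. A toy sketch of the two predictions (the numbers and function names are illustrative, not taken from the paper's biophysical model):

```python
def best_itd_time_delay(axonal_delay_s, freq_hz):
    # A classical delay line adds a fixed conduction time, so the
    # predicted best ITD is the same at every stimulus frequency.
    return axonal_delay_s

def best_itd_phase_delay(phase_cycles, freq_hz):
    # A phase delay (produced, in the model, by precisely timed
    # inhibition) shifts the input by a fixed fraction of a cycle,
    # so the best ITD in seconds shrinks as frequency grows.
    return phase_cycles / freq_hz

# Illustrative values: a 1/8-cycle phase delay predicts a 500-us best
# ITD at 250 Hz but only 125 us at 1000 Hz, whereas a time delay
# predicts the same best ITD at both frequencies.
itd_low  = best_itd_phase_delay(0.125, 250.0)
itd_high = best_itd_phase_delay(0.125, 1000.0)
```

It is this frequency dependence of the best ITD, constant in phase rather than in time, that the recent data favor and that the spike-timing-dependent learning mechanism is shown to realize.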
Keller, E; Rothenberger, A; Göpfert, M
In the present study 3 hypotheses were investigated: first, the notion that an aphasic impairment of vowel perception is not associated with particular aphasic syndromes or lesion sites, second, that it is a disorder comparable to a general impairment of perception in a normal speaker caused by some form of interference, and third, that perceptual phonemic discrimination is a separate process from the phonemic discriminative function necessary for speech production. The hypotheses were tested by means of a vowel discrimination test administered to 50 German-speaking aphasic patients (roughly equally divided between Broca's, mixed non-fluent, Wernicke's and mixed fluent groups); the same test, masked by white noise at -10 dB was also administered to 20 normal native speakers of German. Results were in support of all 3 hypotheses. First, aphasic patients' error patterns were similar across fluent and nonfluent groups and for all lesion sites. Second, the error distributions of aphasics with slight auditory impairment resembled those of normal subjects in the -10 dB white noise condition, while distributions of aphasics with severe auditory impairment were indicative of an added component of guessing behaviour. And third, the patients' performance on the discrimination task differed from that shown on a comparable repetition test. (It was argued that repetition involves a patient's expressive capacity in addition to his perceptual capacity). The differentiation of perceptual and expressive phonemic discrimination was further supported by an analysis of the speech errors occurring in the spontaneous (purely expressive) speech and in the repetition (expressive plus perceptual) tasks of 16 French Canadian and 5 English Canadian aphasics.
Papp, Albert Louis, III [Univ. of California, Davis, CA (United States)]
This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between overlapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
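The evaluate-and-score loop described above (candidate schedules are assigned penalty scores, and the lowest-scoring schedule wins) can be sketched in a few lines. This is a minimal stand-in, not the dissertation's actual heuristics: here the penalty is simply pairwise temporal overlap weighted by pitch similarity, on the assumption that similar sounds mask each other more.

```python
from itertools import combinations

def overlap(a, b):
    """Temporal overlap in seconds between two (start, duration) intervals."""
    start = max(a[0], b[0])
    end = min(a[0] + a[1], b[0] + b[1])
    return max(0.0, end - start)

def penalty(schedule):
    """Sum a pairwise penalty over messages given as (start, duration, pitch).

    Overlap is weighted by how close the two messages are in pitch; the
    weighting is an illustrative assumption standing in for the system's
    auditory-scene-analysis heuristics.
    """
    total = 0.0
    for (s1, d1, p1), (s2, d2, p2) in combinations(schedule, 2):
        similarity = 1.0 / (1.0 + abs(p1 - p2))
        total += overlap((s1, d1), (s2, d2)) * similarity
    return total

def pick_schedule(candidates):
    """Choose the candidate schedule with the lowest penalty score."""
    return min(candidates, key=penalty)

# Two candidate presentations of the same three messages: one clashing,
# one staggered so that no two messages sound at once.
clashing  = [(0.0, 2.0, 60), (0.5, 2.0, 62), (1.0, 2.0, 61)]
staggered = [(0.0, 2.0, 60), (2.0, 2.0, 62), (4.0, 2.0, 61)]
best = pick_schedule([clashing, staggered])
```

A real scheduler would also fold in the request constraints the abstract mentions, such as priority and maximum acceptable latency, as additional penalty terms.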
D'Ausilio, Alessandro; Bartoli, Eleonora; Maffongelli, Laura; Berry, Jeffrey James; Fadiga, Luciano
Audiovisual speech perception is likely based on the association between auditory and visual information into stable audiovisual maps. Conflicting audiovisual inputs generate perceptual illusions such as the McGurk effect. Audiovisual mismatch effects could be either driven by the detection of violations in the standard audiovisual statistics or via the sensorimotor reconstruction of the distal articulatory event that generated the audiovisual ambiguity. In order to disambiguate between the two hypotheses we exploit the fact that the tongue is hidden to vision. For this reason, tongue movement encoding can solely be learned via speech production but not via others' speech perception alone. Here we asked participants to identify speech sounds while matching or mismatching visual representations of tongue movements were shown. Vision of congruent tongue movements facilitated auditory speech identification with respect to incongruent trials. This result suggests that direct visual experience of an articulator movement is not necessary for the generation of audiovisual mismatch effects. Furthermore, we suggest that audiovisual integration in speech may benefit from speech production learning. Copyright © 2014 Elsevier Ltd. All rights reserved.
Daliri, Ayoub; Max, Ludo
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
O'Donovan, Jonathan J; Furlong, Dermot J
This paper describes the design of a bilinear time-frequency distribution which is a joint model of temporal and spectral masking. The distribution is used to generate temporally evolving excitation patterns of nonstationary signals and systems and is conceived as a tool for acousticians and engineers for perceptual time-frequency analysis. Distribution time and frequency resolutions are controlled by a separable kernel consisting of a set of low-pass time and frequency smoothing windows. These windows are designed by adapting existing psychoacoustic models of auditory resolution, rather than using mathematical window functions. Cross-term interference and windowing clutter are highly suppressed for the distribution, ensuring resolution accuracy over a dynamic range sufficient to encompass that of the auditory system (in excess of 100 dB). Applications to the analysis of one synthetic and two real signals are included to demonstrate the approach.
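The separable kernel described above factorizes the 2-D smoothing into a time window and a frequency window applied independently along each axis, which is what makes psychoacoustically derived windows easy to substitute for mathematical ones. A minimal sketch of separable smoothing on a discrete time-frequency grid, assuming ordinary normalized windows in place of the paper's masking-model-derived ones:

```python
def smooth_tfd(tfd, time_win, freq_win):
    """Apply a separable smoothing kernel to a time-frequency grid.

    `tfd` is a list of frequency rows, each a list of time samples;
    `time_win` and `freq_win` are 1-D smoothing windows. Separability
    means the effective 2-D kernel is the outer product of the two
    windows, so each axis can be smoothed in turn.
    """
    def conv1d(seq, win):
        # Zero-padded symmetric-window smoothing along one axis.
        half = len(win) // 2
        out = []
        for i in range(len(seq)):
            acc = 0.0
            for j, w in enumerate(win):
                k = i + j - half
                if 0 <= k < len(seq):
                    acc += w * seq[k]
            out.append(acc)
        return out

    # Smooth along time (within each frequency row) ...
    rows = [conv1d(row, time_win) for row in tfd]
    # ... then along frequency (within each time column).
    cols = [conv1d(list(col), freq_win) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

# A single impulse spreads into the outer product of the two windows.
grid = [[0.0] * 5 for _ in range(5)]
grid[2][2] = 1.0
out = smooth_tfd(grid, [0.25, 0.5, 0.25], [0.25, 0.5, 0.25])
```

In the paper's setting, the time window would be derived from a temporal-masking model and the frequency window from a spectral-masking model, so the smoothed distribution approximates an evolving auditory excitation pattern.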
Adriana Marques de Oliveira
PURPOSE: to characterize and compare, by means of behavioral tests, the auditory processing of students with an interdisciplinary diagnosis of (I) learning disorder, (II) dyslexia, and (III) students with good academic performance. METHODS: 30 students of both genders, aged 8 to 16 years and attending the 2nd to 4th grades of elementary school, took part in this study, divided into three groups: GI, composed of 10 students with an interdisciplinary diagnosis of learning disorder; GII, composed of 10 students with an interdisciplinary diagnosis of dyslexia; and GIII, composed of 10 students without learning difficulties, matched to GI and GII by gender and age. Audiological and auditory processing assessments were carried out. RESULTS: the students in GIII outperformed those in GI and GII on the auditory processing tests. GI performed worse on the auditory abilities assessed by the dichotic digits and staggered spondaic word tests, pediatric speech audiometry, sound localization, and verbal and non-verbal memory, whereas GII showed the same alterations as GI except on the pediatric speech audiometry test. CONCLUSION: students with learning disorders performed worse on the auditory processing tests, with the learning disorder group showing a greater number of altered auditory abilities than the dyslexia group, owing to reduced sustained attention. The dyslexia group showed alterations stemming from difficulty in encoding and decoding sound stimuli.
Alain, Claude; Tremblay, Kelly
The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analyses are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials in investigating the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss on the neural representation of s